<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[cloud/devops]]></title><description><![CDATA[PRAFUL PATEL ☁️🚀, Highly skilled and motivated Cloud/DevOps Engineer with a proven track record of designing, implementing, and managing robust cloud infrastru]]></description><link>https://praful.cloud</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1723077798601/583af23f-d9c7-45b4-9ffb-6b5177ecf597.png</url><title>cloud/devops</title><link>https://praful.cloud</link></image><generator>RSS for Node</generator><lastBuildDate>Mon, 20 Apr 2026 22:27:34 GMT</lastBuildDate><atom:link href="https://praful.cloud/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Invoice/Receipt Intake using AWS (S3 + Textract + DynamoDB) + n8n + Slack.]]></title><description><![CDATA[Option A — Fastest + clean (recommended for Week 1)
Flow

User uploads invoice to S3 bucket ai-intake-docs

S3 event triggers EventBridge rule

EventBridge sends event to API Gateway HTTP API

API Gateway calls n8n Webhook /s3-intake

n8n pulls the f...]]></description><link>https://praful.cloud/invoicereceipt-intake-using-aws-s3-textract-dynamodb-n8n-slack</link><guid isPermaLink="true">https://praful.cloud/invoicereceipt-intake-using-aws-s3-textract-dynamodb-n8n-slack</guid><dc:creator><![CDATA[Praful Patel]]></dc:creator><pubDate>Mon, 12 Jan 2026 16:37:25 GMT</pubDate><content:encoded><![CDATA[<h2 id="heading-option-a-fastest-clean-recommended-for-week-1">Option A — Fastest + clean (recommended for Week 1)</h2>
<p><strong>Flow</strong></p>
<ol>
<li><p>User uploads invoice to <strong>S3</strong> bucket <code>ai-intake-docs</code></p>
</li>
<li><p>S3 event triggers <strong>EventBridge</strong> rule</p>
</li>
<li><p>EventBridge sends event to <strong>API Gateway HTTP API</strong></p>
</li>
<li><p>API Gateway calls <strong>n8n Webhook</strong> <code>/s3-intake</code></p>
</li>
<li><p>n8n pulls the file from S3 (GetObject)</p>
</li>
<li><p>n8n calls <strong>Textract AnalyzeExpense</strong></p>
</li>
<li><p>n8n maps fields → <code>{vendor,total,date,line_items}</code></p>
</li>
<li><p>n8n stores result in <strong>DynamoDB</strong> table <code>ai_results</code></p>
</li>
<li><p>n8n posts summary to <strong>Slack</strong></p>
</li>
</ol>
<p><strong>Why this is good</strong></p>
<ul>
<li><p>Near real-time</p>
</li>
<li><p>Easy to demo</p>
</li>
<li><p>No queue complexity</p>
</li>
<li><p>Works well for weekly posting</p>
</li>
</ul>
<hr />
<h2 id="heading-option-b-enterprise-buffered-upgrade">Option B — Enterprise buffered (upgrade)</h2>
<p><strong>Flow</strong><br />S3 upload → EventBridge → <strong>SQS</strong> → n8n (poll SQS) → S3 GetObject → Textract → DynamoDB → Slack</p>
<p><strong>Why it’s better</strong></p>
<ul>
<li><p>Handles bursts (hundreds of uploads)</p>
</li>
<li><p>Retry/replay is easier</p>
</li>
<li><p>Smooths traffic so n8n isn’t overwhelmed</p>
</li>
</ul>
<hr />
<h1 id="heading-2-aws-resources-you-need-both-options">2) AWS resources you need (both options)</h1>
<h3 id="heading-s3">S3</h3>
<ul>
<li><p>Bucket: <code>ai-intake-docs</code></p>
</li>
<li><p>Folder convention: <code>invoices/YYYY/MM/...</code></p>
</li>
<li><p>Encryption: SSE-S3 (default)</p>
</li>
<li><p>Block public access: ON</p>
</li>
</ul>
<h3 id="heading-dynamodb">DynamoDB</h3>
<ul>
<li><p>Table: <code>ai_results</code></p>
</li>
<li><p>Partition key: <code>job_id</code> (String)</p>
</li>
<li><p>Optional attributes:</p>
<ul>
<li><code>s3_bucket</code>, <code>s3_key</code>, <code>vendor</code>, <code>total</code>, <code>invoice_date</code>, <code>created_at</code>, <code>raw_textract</code></li>
</ul>
</li>
</ul>
<h3 id="heading-textract">Textract</h3>
<ul>
<li>Use API: <code>AnalyzeExpense</code> (best for invoices/receipts)</li>
</ul>
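<p>AnalyzeExpense returns a list of <code>ExpenseDocuments</code>, each carrying <code>SummaryFields</code> (vendor, totals, dates) plus <code>LineItemGroups</code>. As a rough illustration of how the field extraction works, here is a small Python sketch run against a made-up, heavily trimmed response shape (the sample payload is illustrative, not real Textract output):</p>
<pre><code class="lang-python">def pick(summary_fields, label):
    """Return the detected value for a given summary field type, if present."""
    for field in summary_fields:
        if field.get("Type", {}).get("Text", "").upper() == label.upper():
            return field.get("ValueDetection", {}).get("Text")
    return None

# Hypothetical, trimmed AnalyzeExpense response for illustration only
response = {
    "ExpenseDocuments": [{
        "SummaryFields": [
            {"Type": {"Text": "VENDOR_NAME"}, "ValueDetection": {"Text": "Acme Corp"}},
            {"Type": {"Text": "TOTAL"}, "ValueDetection": {"Text": "$142.50"}},
        ]
    }]
}

fields = response["ExpenseDocuments"][0]["SummaryFields"]
vendor = pick(fields, "VENDOR_NAME") or pick(fields, "SUPPLIER_NAME")
total = pick(fields, "TOTAL") or pick(fields, "AMOUNT_DUE")
</code></pre>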
<h3 id="heading-iam">IAM</h3>
<ul>
<li><p>n8n needs permissions:</p>
<ul>
<li><p>s3:GetObject on bucket objects</p>
</li>
<li><p>textract:AnalyzeExpense</p>
</li>
<li><p>dynamodb:PutItem on table</p>
</li>
</ul>
</li>
</ul>
<hr />
<h1 id="heading-3-step-by-step-implementation-option-a">3) Step-by-step implementation (Option A)</h1>
<h2 id="heading-step-a1-create-s3-bucket">Step A1 — Create S3 bucket</h2>
<ol>
<li><p>AWS Console → S3 → Create bucket → <code>ai-intake-docs</code></p>
</li>
<li><p>Block public access: <strong>enabled</strong></p>
</li>
<li><p>Default encryption: <strong>enabled</strong></p>
</li>
<li><p>(Optional) create folder <code>invoices/</code></p>
</li>
</ol>
<hr />
<h2 id="heading-step-a2-create-dynamodb-table">Step A2 — Create DynamoDB table</h2>
<ol>
<li><p>DynamoDB → Create table:</p>
<ul>
<li><p>Table name: <code>ai_results</code></p>
</li>
<li><p>Partition key: <code>job_id</code> (String)</p>
</li>
</ul>
</li>
<li><p>Leave defaults (on-demand is fine)</p>
</li>
</ol>
<hr />
<h2 id="heading-step-a3-create-api-gateway-http-api-webhook-gateway">Step A3 — Create API Gateway HTTP API (Webhook gateway)</h2>
<ol>
<li><p>API Gateway → Create API → <strong>HTTP API</strong></p>
</li>
<li><p>Add integration:</p>
<ul>
<li><strong>URL</strong> = your n8n webhook endpoint<br />  Example: <a target="_blank" href="https://n8n.yourdomain.com/webhook/s3-intake"><code>https://n8n.yourdomain.com/webhook/s3-intake</code></a></li>
</ul>
</li>
<li><p>Add route:</p>
<ul>
<li><code>POST /s3-events</code></li>
</ul>
</li>
<li><p>Deploy stage: <code>$default</code></p>
</li>
</ol>
<p><strong>Security (good enough for a portfolio project)</strong></p>
<ul>
<li><p>Add an <strong>API key</strong> or a shared secret header (recommended)</p>
</li>
<li><p>In n8n, check header like <code>x-shared-secret</code></p>
</li>
</ul>
<hr />
<h2 id="heading-step-a4-create-eventbridge-rule-for-s3-uploads">Step A4 — Create EventBridge rule for S3 uploads</h2>
<ol>
<li><p>EventBridge → Rules → Create rule</p>
</li>
<li><p>Event source: <strong>AWS events</strong></p>
</li>
<li><p>Pattern:</p>
</li>
</ol>
<pre><code class="lang-plaintext">{
  "source": ["aws.s3"],
  "detail-type": ["Object Created"],
  "detail": {
    "bucket": { "name": ["ai-intake-docs"] }
  }
}
</code></pre>
<ol start="4">
<li><p>Target: <strong>API Gateway</strong></p>
<ul>
<li><p>Choose your HTTP API</p>
</li>
<li><p>Route: <code>POST /s3-events</code></p>
</li>
</ul>
</li>
</ol>
<p><strong>Note:</strong> S3 only publishes to EventBridge if you enable it on the bucket (S3 console → bucket → <strong>Properties</strong> → <strong>Event notifications</strong> → <strong>Amazon EventBridge: On</strong>); without this, the rule never fires. Once enabled, every new upload triggers API Gateway → n8n.</p>
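<p>Conceptually, EventBridge forwards an event only when every value in the rule pattern is present in the event. A simplified Python sketch of that matching logic (real patterns also support prefix, numeric, and exists matchers):</p>
<pre><code class="lang-python">def matches(event, pattern):
    """Simplified EventBridge matching: every pattern key must exist in the
    event, and a leaf list means 'the event value is one of these'."""
    for key, expected in pattern.items():
        actual = event.get(key)
        if isinstance(expected, dict):
            if not isinstance(actual, dict) or not matches(actual, expected):
                return False
        elif actual not in expected:
            return False
    return True

pattern = {
    "source": ["aws.s3"],
    "detail-type": ["Object Created"],
    "detail": {"bucket": {"name": ["ai-intake-docs"]}},
}

event = {
    "source": "aws.s3",
    "detail-type": "Object Created",
    "detail": {"bucket": {"name": "ai-intake-docs"}, "object": {"key": "invoices/x.pdf"}},
}
</code></pre>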
<hr />
<h2 id="heading-step-a5-create-iam-credentials-for-n8n">Step A5 — Create IAM credentials for n8n</h2>
<p>If n8n runs on EC2, the best option is an <strong>instance role</strong>.<br />If it runs locally, use an <strong>IAM user access key</strong>.</p>
<h3 id="heading-minimal-iam-policy-week-1">Minimal IAM policy (Week 1)</h3>
<pre><code class="lang-plaintext">{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow", "Action": ["s3:GetObject"], "Resource": "arn:aws:s3:::ai-intake-docs/*" },
    { "Effect": "Allow", "Action": ["textract:AnalyzeExpense"], "Resource": "*" },
    { "Effect": "Allow", "Action": ["dynamodb:PutItem"], "Resource": "arn:aws:dynamodb:*:*:table/ai_results" }
  ]
}
</code></pre>
<p>Attach it to:</p>
<ul>
<li><p>EC2 instance role used by n8n OR</p>
</li>
<li><p>IAM user used by n8n AWS credentials</p>
</li>
</ul>
<hr />
<h1 id="heading-4-n8n-workflow-import-ready-mapping-code">4) n8n workflow (import-ready) + mapping code</h1>
<h2 id="heading-what-the-webhook-payload-looks-like">What the webhook payload looks like</h2>
<p>EventBridge sends something like:</p>
<pre><code class="lang-plaintext">{
  "detail": {
    "bucket": { "name": "ai-intake-docs" },
    "object": { "key": "invoices/2026/01/invoice1.pdf" }
  }
}
</code></pre>
<hr />
<h2 id="heading-import-ready-n8n-workflow-json">Import-ready n8n workflow JSON</h2>
<p><strong>Workflow name:</strong> <code>Week1 - S3→Textract→DynamoDB→Slack</code><br /><strong>Webhook path:</strong> <code>/s3-intake</code></p>
<blockquote>
<p>After import, set credentials:</p>
</blockquote>
<ul>
<li><p>AWS credential in S3/Textract/DynamoDB nodes</p>
</li>
<li><p>Slack credential in Slack node</p>
</li>
<li><p>Optionally add shared secret check</p>
</li>
</ul>
<pre><code class="lang-plaintext">{
  "name": "Week1 - S3→Textract→DynamoDB→Slack",
  "nodes": [
    {
      "parameters": {
        "path": "s3-intake",
        "httpMethod": "POST",
        "responseMode": "lastNode"
      },
      "name": "Webhook (/s3-intake)",
      "type": "n8n-nodes-base.webhook",
      "typeVersion": 2,
      "position": [200, 300]
    },
    {
      "parameters": {
        "jsCode": "const detail = $json.detail || {};\nconst bucket = detail.bucket?.name;\nconst key = detail.object?.key;\n\nif (!bucket || !key) {\n  throw new Error('Missing bucket/key in event payload');\n}\n\nreturn [{ bucket, key, received_at: new Date().toISOString() }];"
      },
      "name": "Extract S3 Bucket+Key",
      "type": "n8n-nodes-base.code",
      "typeVersion": 2,
      "position": [430, 300]
    },
    {
      "parameters": {
        "operation": "getObject",
        "bucketName": "={{$json.bucket}}",
        "objectKey": "={{$json.key}}"
      },
      "name": "S3 GetObject",
      "type": "n8n-nodes-base.awsS3",
      "typeVersion": 1,
      "position": [670, 300],
      "credentials": {
        "aws": { "id": "YOUR_AWS_CRED", "name": "AWS Account" }
      }
    },
    {
      "parameters": {
        "operation": "analyzeExpense",
        "binaryPropertyName": "data"
      },
      "name": "Textract AnalyzeExpense",
      "type": "n8n-nodes-base.awsTextract",
      "typeVersion": 1,
      "position": [920, 300],
      "credentials": {
        "aws": { "id": "YOUR_AWS_CRED", "name": "AWS Account" }
      }
    },
    {
      "parameters": {
        "jsCode": "function pick(fields, label) {\n  const f = fields.find(x =&gt; (x.Type?.Text || '').toLowerCase() === label.toLowerCase());\n  const v = f?.ValueDetection?.Text || null;\n  return v;\n}\n\nconst tex = $json;\nconst doc = tex.ExpenseDocuments?.[0];\nif (!doc) throw new Error('No ExpenseDocuments returned by Textract');\n\nconst summaryFields = doc.SummaryFields || [];\nconst lineItems = [];\n\nconst groups = doc.LineItemGroups || [];\nfor (const g of groups) {\n  for (const li of (g.LineItems || [])) {\n    const lf = li.LineItemExpenseFields || [];\n    const desc = lf.find(x =&gt; (x.Type?.Text || '').toLowerCase() === 'item')?.ValueDetection?.Text\n      || lf.find(x =&gt; (x.Type?.Text || '').toLowerCase() === 'description')?.ValueDetection?.Text\n      || null;\n    const qty = lf.find(x =&gt; (x.Type?.Text || '').toLowerCase() === 'quantity')?.ValueDetection?.Text || null;\n    const price = lf.find(x =&gt; (x.Type?.Text || '').toLowerCase() === 'price')?.ValueDetection?.Text\n      || lf.find(x =&gt; (x.Type?.Text || '').toLowerCase() === 'unit_price')?.ValueDetection?.Text\n      || null;\n    const amount = lf.find(x =&gt; (x.Type?.Text || '').toLowerCase() === 'amount')?.ValueDetection?.Text || null;\n\n    if (desc || amount || price) lineItems.push({ desc, qty, price, amount });\n  }\n}\n\nconst vendor = pick(summaryFields, 'VENDOR_NAME') || pick(summaryFields, 'SUPPLIER_NAME');\nconst total  = pick(summaryFields, 'TOTAL') || pick(summaryFields, 'AMOUNT_DUE');\nconst date   = pick(summaryFields, 'INVOICE_RECEIPT_DATE') || pick(summaryFields, 'DATE');\nconst invoiceId = pick(summaryFields, 'INVOICE_RECEIPT_ID') || pick(summaryFields, 'INVOICE_ID');\n\nconst job_id = `${Date.now()}-${Math.random().toString(16).slice(2)}`;\n\nreturn [{\n  job_id,\n  vendor,\n  total,\n  invoice_date: date,\n  invoice_id: invoiceId,\n  line_items: lineItems,\n  raw_textract: tex\n}];"
      },
      "name": "Map Textract → Fields",
      "type": "n8n-nodes-base.code",
      "typeVersion": 2,
      "position": [1170, 300]
    },
    {
      "parameters": {
        "operation": "put",
        "tableName": "ai_results",
        "simple": true,
        "item": {
          "job_id": "={{$json.job_id}}",
          "use_case": "invoice_intake",
          "vendor": "={{$json.vendor || ''}}",
          "total": "={{$json.total || ''}}",
          "invoice_date": "={{$json.invoice_date || ''}}",
          "invoice_id": "={{$json.invoice_id || ''}}",
          "created_at": "={{new Date().toISOString()}}",
          "result_json": "={{JSON.stringify({vendor:$json.vendor,total:$json.total,invoice_date:$json.invoice_date,invoice_id:$json.invoice_id,line_items:$json.line_items})}}"
        }
      },
      "name": "DynamoDB PutItem",
      "type": "n8n-nodes-base.awsDynamoDb",
      "typeVersion": 1,
      "position": [1420, 300],
      "credentials": {
        "aws": { "id": "YOUR_AWS_CRED", "name": "AWS Account" }
      }
    },
    {
      "parameters": {
        "authentication": "predefinedCredentialType",
        "resource": "message",
        "operation": "post",
        "channel": "finance-alerts",
        "text": "={{`🧾 Invoice Processed\\nVendor: ${$json.vendor || 'Unknown'}\\nTotal: ${$json.total || 'Unknown'}\\nDate: ${$json.invoice_date || 'Unknown'}\\nItems: ${(JSON.parse($json.result_json).line_items || []).length}\\nJob: ${$json.job_id}`}}"
      },
      "name": "Slack Alert",
      "type": "n8n-nodes-base.slack",
      "typeVersion": 2,
      "position": [1670, 300],
      "credentials": {
        "slackApi": { "id": "YOUR_SLACK_CRED", "name": "Slack account" }
      }
    }
  ],
  "connections": {
    "Webhook (/s3-intake)": { "main": [[{ "node": "Extract S3 Bucket+Key", "type": "main", "index": 0 }]] },
    "Extract S3 Bucket+Key": { "main": [[{ "node": "S3 GetObject", "type": "main", "index": 0 }]] },
    "S3 GetObject": { "main": [[{ "node": "Textract AnalyzeExpense", "type": "main", "index": 0 }]] },
    "Textract AnalyzeExpense": { "main": [[{ "node": "Map Textract → Fields", "type": "main", "index": 0 }]] },
    "Map Textract → Fields": { "main": [[{ "node": "DynamoDB PutItem", "type": "main", "index": 0 }]] },
    "DynamoDB PutItem": { "main": [[{ "node": "Slack Alert", "type": "main", "index": 0 }]] }
  },
  "active": false
}
</code></pre>
<h3 id="heading-notes-about-that-workflow">Notes about that workflow</h3>
<ul>
<li><p>The <strong>Textract node name/type</strong> may differ slightly depending on your n8n version and installed AWS nodes.</p>
</li>
<li><p>If your n8n build doesn’t have <code>awsTextract</code> or <code>awsDynamoDb</code> nodes, you can substitute <strong>HTTP Request</strong> nodes that call the AWS APIs directly (with AWS credential signing); the flow stays the same.</p>
</li>
</ul>
<hr />
<h1 id="heading-5-local-demo-without-eventbridgeapi-gateway-fast-test">5) Local demo without EventBridge/API Gateway (fast test)</h1>
<p>Before wiring AWS events, you can trigger n8n manually with:</p>
<pre><code class="lang-plaintext">curl -X POST https://n8n.yourdomain.com/webhook/s3-intake \
  -H "Content-Type: application/json" \
  -d '{
    "detail": {
      "bucket": {"name":"ai-intake-docs"},
      "object": {"key":"invoices/test-invoice.jpg"}
    }
  }'
</code></pre>
<p>Upload <code>test-invoice.jpg</code> to S3 first, then run the curl command. You should then see the Slack alert and a new DynamoDB record.</p>
<hr />
<h1 id="heading-6-hardening-checklist-so-it-looks-enterprise">6) Hardening checklist (so it looks enterprise)</h1>
<h2 id="heading-security">Security</h2>
<ul>
<li><p>Add a shared secret header in API Gateway → n8n:</p>
<ul>
<li>Header: <code>x-shared-secret: &lt;random&gt;</code></li>
</ul>
</li>
<li><p>In n8n, add a Code node at start to verify header.</p>
</li>
<li><p>Use <strong>instance role</strong> (no static keys) if n8n runs on EC2.</p>
</li>
</ul>
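<p>The shared-secret check itself is just a constant-time comparison of the incoming header against a value you configured. In n8n this would live in a Code node at the start of the workflow; the logic, sketched in Python with a placeholder header name and secret:</p>
<pre><code class="lang-python">import hmac

EXPECTED_SECRET = "replace-with-a-long-random-string"  # placeholder value

def is_authorized(headers):
    """Compare the x-shared-secret header in constant time to avoid timing leaks."""
    supplied = headers.get("x-shared-secret", "")
    return hmac.compare_digest(supplied, EXPECTED_SECRET)
</code></pre>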
<h2 id="heading-reliability">Reliability</h2>
<ul>
<li><p>Add a “try/catch” style branch:</p>
<ul>
<li>On failure → log to DynamoDB <code>use_case=invoice_intake_error</code> + Slack failure channel</li>
</ul>
</li>
<li><p>Add idempotency:</p>
<ul>
<li><code>job_id = hash(bucket + key + etag)</code> so re-uploads don’t create duplicates</li>
</ul>
</li>
</ul>
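<p>Deriving <code>job_id</code> from the object’s identity rather than a timestamp makes the pipeline idempotent: the same upload always hashes to the same key, so DynamoDB <code>PutItem</code> overwrites instead of duplicating. A Python sketch:</p>
<pre><code class="lang-python">import hashlib

def make_job_id(bucket, key, etag):
    """Deterministic job_id: the same bucket/key/etag always hashes the same."""
    raw = f"{bucket}/{key}/{etag}".encode("utf-8")
    return hashlib.sha256(raw).hexdigest()[:16]
</code></pre>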
<hr />
<h1 id="heading-7-optional-upgrade-option-b-with-sqs">7) Optional upgrade (Option B with SQS)</h1>
<p>When you’re ready to level up:</p>
<ul>
<li><p>EventBridge target: SQS queue</p>
</li>
<li><p>n8n: SQS Trigger (poll) → process messages</p>
</li>
<li><p>This gives you buffering + retry + DLQ</p>
</li>
</ul>
<p>The pieces you’d add:</p>
<ul>
<li><p>An SQS queue with a dead-letter queue (DLQ)</p>
</li>
<li><p>An EventBridge rule targeting the queue</p>
</li>
<li><p>An n8n workflow triggered by SQS polling</p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[🚀 Building a Scalable Serverless CRUD App with AWS]]></title><description><![CDATA[📋 Introduction
Welcome to my comprehensive guide on building a Serverless CRUD Application using AWS! In this post, we'll dive deep into how to create and deploy a fully serverless, scalable, and cost-effective CRUD system using a variety of AWS ser...]]></description><link>https://praful.cloud/building-a-scalable-serverless-crud-app-with-aws</link><guid isPermaLink="true">https://praful.cloud/building-a-scalable-serverless-crud-app-with-aws</guid><category><![CDATA[AWS, Serverless, CRUD, AWS Lambda, API Gateway, DynamoDB, CloudFormation, SAM, CloudWatch, CloudFront, Cognito, CI/CD, Frontend Development, Infrastructure as Code]]></category><dc:creator><![CDATA[Praful Patel]]></dc:creator><pubDate>Sun, 02 Mar 2025 01:57:14 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1740884147193/0952f0f7-1284-4841-9f2e-2814142d88d7.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction"><strong>📋 Introduction</strong></h2>
<p>Welcome to my comprehensive guide on building a <strong>Serverless CRUD Application</strong> using AWS! In this post, we'll dive deep into how to create and deploy a fully serverless, scalable, and cost-effective CRUD system using a variety of AWS services. Whether you're a developer eager to explore serverless architectures or looking for a rapid prototyping solution, this guide is for you! 😊</p>
<p><strong>Follow the GitHub Documentation and code for full implementation details:</strong><br />🔹 <a target="_blank" href="https://github.com/prafulpatel16/serverless-crud-app">GitHub Repo: serverless-crud-app</a></p>
<hr />
<h2 id="heading-table-of-contentshttpsgithubcomprafulpatel16serverless-crud-app">Table of Contents</h2>
<ol>
<li><p><a href="#heading-about-the-project">About the Project</a></p>
</li>
<li><p><a href="#heading-architectural-flow">Architecture</a></p>
</li>
<li><p><a href="#heading-project-structure">Project Structure</a></p>
</li>
<li><p><a href="#heading-prerequisites-amp-setup">Prerequisites &amp; Setup</a></p>
</li>
<li><p><a href="#heading-documentations">Documentations</a></p>
</li>
<li><p><a href="#heading-detailed-workflow">Detailed Workflow</a></p>
</li>
<li><p><a href="#heading-crud-operations">CRUD Operations</a></p>
</li>
<li><p><a href="#heading-deployment-amp-configuration">Deployment &amp; Configuration</a></p>
</li>
<li><p><a href="#heading-local-testing">Local Testing</a></p>
</li>
<li><p><a href="#heading-aws-services-used">AWS Services Used</a></p>
</li>
<li><p><a href="#heading-contributing">Contributing</a></p>
</li>
<li><p><a href="#heading-license">License</a></p>
</li>
<li><p><a href="#heading-author">Author</a></p>
</li>
<li><p><a href="#heading-additional-resources">Additional Resources</a></p>
</li>
</ol>
<hr />
<h2 id="heading-about-the-project">About the Project 📝</h2>
<p>The <strong>Serverless CRUD App</strong> is a complete backend and frontend solution that leverages a fully serverless architecture to manage Create, Read, Update, and Delete operations. Built with AWS SAM, CloudFormation, and a structured backend API, this project seamlessly integrates with a simple HTML/JavaScript frontend hosted on Amazon S3.</p>
<p><strong>Why build a Serverless CRUD App?</strong></p>
<ul>
<li><p><strong>Scalability:</strong> Automatically scales based on demand.</p>
</li>
<li><p><strong>Cost-Effective:</strong> Pay only for what you use.</p>
</li>
<li><p><strong>Simplicity:</strong> Focus on business logic instead of server maintenance.</p>
</li>
<li><p><strong>High Availability:</strong> AWS services ensure robust, fault-tolerant operations.</p>
</li>
</ul>
<p><strong>When is this useful?</strong></p>
<ul>
<li><p>Rapid prototyping of applications.</p>
</li>
<li><p>Developing low to medium traffic apps.</p>
</li>
<li><p>Optimizing costs for intermittent workloads.</p>
</li>
</ul>
<hr />
<h2 id="heading-architectural-flow">Architectural Flow 🏗️</h2>
<p>The app is built on a modern AWS serverless stack. Here's an architectural overview:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1740875487636/fc09ef64-1ef6-43a8-bd05-9760a7646318.jpeg" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1740875559929/ff175795-69c1-481d-a432-670950a73610.jpeg" alt class="image--center mx-auto" /></p>
<h3 id="heading-key-components">Key Components:</h3>
<ul>
<li><p><strong>Frontend (S3):</strong> Hosts the static HTML, CSS, and JS.</p>
</li>
<li><p><strong>API Gateway:</strong> Exposes RESTful endpoints for CRUD operations.</p>
</li>
<li><p><strong>Lambda Functions:</strong> Execute CRUD logic and handle API requests.</p>
</li>
<li><p><strong>DynamoDB:</strong> NoSQL database storing items (e.g., <code>id</code>, <code>name</code>, <code>age</code>).</p>
</li>
</ul>
<p>Frontend UI</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1740875970134/21122c9c-5d78-4dfe-980c-9bdac3f49f6f.png" alt class="image--center mx-auto" /></p>
<hr />
<h2 id="heading-project-structure">Project Structure 📁</h2>
<pre><code class="lang-plaintext">serverless-crud-app/
├── backend/
│   ├── create/         # Lambda function for creating resources
│   ├── read/           # Lambda function for reading a single resource
│   ├── update/         # Lambda function for updating resources
│   ├── delete/         # Lambda function for deleting resources
│   ├── list/           # Lambda function for listing resources
│   └── documentations/ # Detailed API and infrastructure docs
├── frontend/
│   └── index.html      # Static frontend UI
├── pipelines/          # CI/CD pipeline configurations
├── cloudformation.yaml # CloudFormation/SAM stack configuration
├── template.yaml       # AWS SAM template definition
└── README.md           # Project documentation
</code></pre>
<ul>
<li><p><strong>backend/</strong>: Contains all Lambda function code and documentation.</p>
</li>
<li><p><strong>frontend/</strong>: Hosts the static UI assets.</p>
</li>
<li><p><strong>pipelines/</strong>: Holds configurations for automated CI/CD deployments.</p>
</li>
<li><p><strong>cloudformation.yaml / template.yaml</strong>: Define the AWS infrastructure as code.</p>
</li>
<li><p><strong>README.md</strong>: Provides an overview and detailed documentation for the project.</p>
</li>
</ul>
<hr />
<h2 id="heading-prerequisites-amp-setup">Prerequisites &amp; Setup 🛠️</h2>
<p>Before you begin, ensure you have:</p>
<ul>
<li><p><strong>AWS Account:</strong> Required to deploy AWS resources.</p>
</li>
<li><p><strong>AWS CLI:</strong> <a target="_blank" href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html">Installation Guide</a> and configure with <code>aws configure</code>.</p>
</li>
<li><p><strong>AWS SAM CLI / CloudFormation:</strong> For packaging and deploying the serverless application.</p>
</li>
<li><p><strong>Node.js / Python:</strong> For frontend and backend development/testing.</p>
</li>
</ul>
<hr />
<h2 id="heading-documentations">Documentations 📜</h2>
<p>Additional documentation files can be found in the <code>documentations/</code> directory:</p>
<ul>
<li><p><code>0.0.pre-requisite.md</code></p>
</li>
<li><p><code>0.1.IAM-roles.md</code></p>
</li>
<li><p><code>1.API-document.md</code></p>
</li>
<li><p><code>2.Api-testing.md</code></p>
</li>
<li><p><code>3.app-functionality.md</code></p>
</li>
<li><p><code>4.frontend-doc.md</code></p>
</li>
<li><p><code>5.sam-infra-documentation.md</code></p>
</li>
</ul>
<hr />
<h2 id="heading-detailed-workflow">Detailed Workflow 🔄</h2>
<ol>
<li><p><strong>User Accesses the Frontend:</strong><br /> The user opens the static site hosted on Amazon S3 (optionally via CloudFront).</p>
</li>
<li><p><strong>User Chooses a CRUD Operation:</strong></p>
<ul>
<li><p><strong>Create:</strong> Fill out ID, Name, and Age fields.</p>
</li>
<li><p><strong>Read:</strong> Enter an ID to fetch details.</p>
</li>
<li><p><strong>Update:</strong> Enter an ID along with fields to update.</p>
</li>
<li><p><strong>Delete:</strong> Enter an ID to remove an item.</p>
</li>
<li><p><strong>List:</strong> Retrieve all items from the DynamoDB table.</p>
</li>
</ul>
</li>
<li><p><strong>Request Routing via API Gateway:</strong><br /> The frontend sends an HTTP request to API Gateway, which routes it to the appropriate Lambda function.</p>
</li>
<li><p><strong>Lambda Function Processing:</strong><br /> The Lambda function performs the CRUD operation on the DynamoDB table and returns a JSON response.</p>
</li>
<li><p><strong>Response Displayed on the Frontend:</strong><br /> The result (success or error) is displayed to the user in the UI.</p>
</li>
</ol>
<hr />
<h2 id="heading-crud-operations">CRUD Operations ⚙️</h2>
<ol>
<li><p><strong>Create (POST /create):</strong></p>
<ul>
<li><p><strong>Frontend:</strong> Sends an array of items (e.g., <code>[{ id, name, age }, ...]</code>).</p>
</li>
<li><p><strong>Lambda:</strong> Inserts each item into DynamoDB using <code>put_item</code>.</p>
</li>
</ul>
</li>
<li><p><strong>Read (GET /read?id=123):</strong></p>
<ul>
<li><p><strong>Frontend:</strong> Appends <code>?id=123</code> to the URL.</p>
</li>
<li><p><strong>Lambda:</strong> Retrieves an item using <code>get_item(Key={"id": ...})</code>.</p>
</li>
</ul>
</li>
<li><p><strong>Update (PUT /update):</strong></p>
<ul>
<li><p><strong>Frontend:</strong> Sends a JSON payload with <code>id</code> and fields to update.</p>
</li>
<li><p><strong>Lambda:</strong> Uses <code>update_item</code> with dynamic update expressions.</p>
</li>
</ul>
</li>
<li><p><strong>Delete (DELETE /delete):</strong></p>
<ul>
<li><p><strong>Frontend:</strong> Sends a JSON payload with <code>"Key": { "id": "..." }</code>.</p>
</li>
<li><p><strong>Lambda:</strong> Calls <code>delete_item(Key={"id": ...})</code>.</p>
</li>
</ul>
</li>
<li><p><strong>List (GET /list):</strong></p>
<ul>
<li><p><strong>Frontend:</strong> Issues a GET request to retrieve all items.</p>
</li>
<li><p><strong>Lambda:</strong> Scans or queries the DynamoDB table to list items.</p>
</li>
</ul>
</li>
</ol>
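<p>The “dynamic update expressions” in the update path are built by walking the client’s payload and emitting an <code>UpdateExpression</code> plus attribute name/value maps. A Python sketch of such a helper (the names here are illustrative, not the repo’s actual code):</p>
<pre><code class="lang-python">def build_update(payload):
    """Build DynamoDB UpdateExpression parts from an update payload.
    Expression-attribute names (#n0, ...) avoid clashes with reserved words."""
    fields = {k: v for k, v in payload.items() if k != "id"}
    names, values, sets = {}, {}, []
    for i, (attr, val) in enumerate(sorted(fields.items())):
        names[f"#n{i}"] = attr
        values[f":v{i}"] = val
        sets.append(f"#n{i} = :v{i}")
    return {
        "UpdateExpression": "SET " + ", ".join(sets),
        "ExpressionAttributeNames": names,
        "ExpressionAttributeValues": values,
    }
</code></pre>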
<hr />
<h2 id="heading-deployment-amp-configuration">Deployment &amp; Configuration 🚀</h2>
<ol>
<li><p><strong>CloudFormation / AWS SAM:</strong><br /> Run:</p>
<pre><code class="lang-sh"> sam build &amp;&amp; sam deploy --guided
</code></pre>
<p> This command deploys your backend resources, including Lambda functions, DynamoDB, and API Gateway.</p>
</li>
<li><p><strong>S3 Frontend Hosting:</strong></p>
<ul>
<li><p>Upload your <code>frontend/index.html</code> (and assets) to an S3 bucket.</p>
</li>
<li><p>Enable <strong>Static Website Hosting</strong> on the bucket.</p>
</li>
<li><p>Optionally, configure <strong>CloudFront</strong> for better global distribution.</p>
</li>
</ul>
</li>
<li><p><strong>Environment Variables:</strong><br /> Set <code>TABLE_NAME</code> in your Lambda configuration to the name of your DynamoDB table.</p>
</li>
<li><p><strong>CORS Configuration:</strong><br /> Ensure each Lambda response includes <code>Access-Control-Allow-Origin: *</code> (or, better, your site’s origin). API Gateway must use Lambda Proxy Integration so these headers pass through.</p>
</li>
</ol>
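<p>With Lambda Proxy Integration, each function must attach the CORS headers to every response itself, including error responses. A minimal Python helper, assuming the standard proxy response shape:</p>
<pre><code class="lang-python">import json

def respond(status, body):
    """Lambda Proxy response with the CORS headers API Gateway passes through."""
    return {
        "statusCode": status,
        "headers": {
            "Access-Control-Allow-Origin": "*",  # tighten to your site's origin in production
            "Content-Type": "application/json",
        },
        "body": json.dumps(body),
    }
</code></pre>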
<hr />
<h2 id="heading-aws-services-used">AWS Services Used 🌐</h2>
<p>This project leverages several key AWS services:</p>
<ul>
<li><p><strong>AWS S3:</strong> For hosting the static frontend.</p>
</li>
<li><p><strong>Amazon API Gateway:</strong> To expose RESTful endpoints.</p>
</li>
<li><p><strong>AWS Lambda:</strong> For executing backend CRUD logic.</p>
</li>
<li><p><strong>AWS CloudFormation / SAM:</strong> For defining and deploying infrastructure as code.</p>
</li>
<li><p><strong>Amazon DynamoDB:</strong> As a scalable NoSQL database.</p>
</li>
<li><p><strong>AWS CloudWatch:</strong> For logging and monitoring.</p>
</li>
<li><p><strong>AWS X-Ray:</strong> For tracing and debugging Lambda invocations.</p>
</li>
<li><p><strong>AWS IAM:</strong> To securely manage permissions and roles.</p>
</li>
<li><p><strong>AWS CloudFront:</strong> For global content delivery (optional).</p>
</li>
<li><p><strong>AWS Cognito:</strong> (Optional) For user authentication and authorization if needed.</p>
</li>
</ul>
<hr />
<h2 id="heading-features">Features ✨</h2>
<ul>
<li><p><strong>🛠️ Serverless Backend:</strong> Leverages AWS Lambda to handle CRUD operations.</p>
</li>
<li><p><strong>📜 Infrastructure as Code:</strong> Built with AWS SAM and CloudFormation templates.</p>
</li>
<li><p><strong>🎨 Modern Frontend UI:</strong> A simple HTML/JavaScript interface for interacting with the backend.</p>
</li>
<li><p><strong>🚀 CI/CD Pipelines:</strong> Automated deployments via AWS CodePipeline or GitHub Actions.</p>
</li>
<li><p><strong>📖 Comprehensive Documentation:</strong> Detailed API docs, testing guides, and architecture overviews.</p>
</li>
</ul>
<hr />
<h2 id="heading-local-testing">Local Testing 🧪</h2>
<ol>
<li><p><strong>Using Postman:</strong></p>
<ul>
<li><p><strong>Create:</strong> <code>POST /create</code> with JSON body: <code>{"items": [{"id":"123","name":"Test","age":25}]}</code></p>
</li>
<li><p><strong>Read:</strong> <code>GET /read?id=123</code></p>
</li>
<li><p><strong>Update:</strong> <code>PUT /update</code> with JSON body: <code>{"id":"123","name":"NewName","age":30}</code></p>
</li>
<li><p><strong>Delete:</strong> <code>DELETE /delete</code> with JSON body: <code>{"Key":{"id":"123"}}</code></p>
</li>
<li><p><strong>List:</strong> <code>GET /list</code></p>
</li>
</ul>
</li>
<li><p><strong>CloudWatch Logs:</strong><br /> Use CloudWatch to monitor Lambda logs and debug issues.</p>
</li>
</ol>
<hr />
<h2 id="heading-contributing">Contributing 🤝</h2>
<p>Contributions are welcome! Here’s how you can contribute:</p>
<ol>
<li><p><strong>Fork &amp; Clone:</strong> Fork the repository on GitHub and clone it locally.</p>
</li>
<li><p><strong>Create a Feature Branch:</strong></p>
<pre><code class="lang-sh"> git checkout -b feature/my-awesome-feature
</code></pre>
</li>
<li><p><strong>Commit &amp; Push:</strong> Make your changes, commit, and push them to your fork.</p>
</li>
<li><p><strong>Open a Pull Request:</strong> Provide detailed descriptions of your changes and open a PR against the <code>main</code> branch.</p>
</li>
</ol>
<p>For additional guidelines, please refer to the <code>CONTRIBUTING.md</code> file in the repository.</p>
<hr />
<h2 id="heading-license">License 📄</h2>
<p>This project is licensed under the MIT License — see the <code>LICENSE</code> file in the repository for details.</p>
<hr />
<h2 id="heading-author">Author 👨‍💻</h2>
<p>Created and maintained by <strong>Praful Patel</strong>.</p>
<ul>
<li><p><a target="_blank" href="https://github.com/prafulpatel16">Praful's GitHub</a></p>
</li>
<li><p><a target="_blank" href="https://www.praful.cloud/">Praful's Blog</a></p>
</li>
</ul>
<hr />
<h2 id="heading-additional-resources">Additional Resources 📚</h2>
<ul>
<li><p><a target="_blank" href="https://docs.aws.amazon.com/lambda/">AWS Lambda Documentation</a></p>
</li>
<li><p><a target="_blank" href="https://docs.aws.amazon.com/apigateway/">Amazon API Gateway Documentation</a></p>
</li>
<li><p><a target="_blank" href="https://docs.aws.amazon.com/dynamodb/">Amazon DynamoDB Documentation</a></p>
</li>
<li><p><a target="_blank" href="https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/what-is-sam.html">AWS SAM Documentation</a></p>
</li>
</ul>
<hr />
<p><strong>Happy Building!</strong><br />If you have any questions or need further details, feel free to open an issue on GitHub or reach out directly.</p>
]]></content:encoded></item><item><title><![CDATA[🚀 Migrating to AWS : Database Migration using AWS DMS]]></title><description><![CDATA[📋 Introduction
Migrating databases from on-premises to AWS is a critical step in cloud adoption. AWS Database Migration Service (DMS) helps streamline this process by providing a secure, scalable, and automated solution for full-load and ongoing rep...]]></description><link>https://praful.cloud/migrating-to-aws-database-migration-using-aws-dms</link><guid isPermaLink="true">https://praful.cloud/migrating-to-aws-database-migration-using-aws-dms</guid><category><![CDATA[#AWSMigration #DMS #DatabaseMigration #AWSRDS #CloudComputing #DevOps #CloudMigration #AWSDatabase #ZeroDowntimeMigration #MySQLtoAWS #AWSDMSBestPractices #CloudSecurity #MigrationStrategy #AWSCloud #DataReplication #ITInfrastructure #CloudOptimization #AmazonWebServices]]></category><dc:creator><![CDATA[Praful Patel]]></dc:creator><pubDate>Tue, 04 Feb 2025 19:23:30 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1738695853529/859c151c-e0ed-4d30-9822-76a79f594a96.gif" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">📋 <strong>Introduction</strong></h2>
<p>Migrating databases from <strong>on-premises</strong> to <strong>AWS</strong> is a critical step in cloud adoption. <strong>AWS Database Migration Service (DMS)</strong> helps streamline this process by providing a <strong>secure, scalable, and automated</strong> solution for <strong>full-load and ongoing replication</strong> of databases.</p>
<p>Following <strong>Phase 1 - Application Discovery &amp; TCO Analysis</strong>, we now move to <strong>Phase 2: AWS Database Migration</strong> to efficiently transition <strong>MySQL databases</strong> from an <strong>on-premises</strong> setup to <strong>Amazon RDS (Relational Database Service)</strong>.</p>
<hr />
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738696054924/78c0d974-40ba-4ade-9a4d-bf854c3f5f1f.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-key-objectives-of-phase-2">📌 <strong>Key Objectives of Phase 2</strong></h2>
<p>✅ <strong>Provision AWS RDS as the target database.</strong><br />✅ <strong>Create a DMS Replication Instance</strong> to facilitate data migration.<br />✅ <strong>Configure Source &amp; Target DMS Endpoints</strong> for secure connections.<br />✅ <strong>Enable Binary Logs in MySQL</strong> for ongoing replication.<br />✅ <strong>Execute AWS DMS Migration Task</strong> for database replication.</p>
<p>By the end of this phase, the <strong>on-prem MySQL database</strong> will be <strong>fully migrated</strong> to AWS RDS, ensuring <strong>data integrity, security, and scalability</strong>.</p>
<hr />
<h2 id="heading-follow-the-github-documentation-for-full-details">📑 <strong>Follow the GitHub Documentation for Full Details</strong></h2>
<p>🔹 <strong>GitHub Repo:</strong> <a target="_blank" href="https://github.com/prafulpatel16/mgn-aws-project01/blob/master/README.md">AWS Migration Project</a><br />📌 <strong>Phase 2: AWS Database Migration</strong><br />➡️ <strong>Pre-Requisite:</strong> <a target="_blank" href="https://github.com/prafulpatel16/mgn-aws-project01/blob/master/migration/B-Phase%202-AWS%20Database-migration/0.Pre-requisite.md">GitHub Link</a><br />➡️ <strong>Database Migration:</strong> <a target="_blank" href="https://github.com/prafulpatel16/mgn-aws-project01/blob/master/migration/B-Phase%202-AWS%20Database-migration/1.Database-migration.md">GitHub Link</a><br />➡️ <strong>Troubleshooting:</strong> <a target="_blank" href="https://github.com/prafulpatel16/mgn-aws-project01/blob/master/migration/B-Phase%202-AWS%20Database-migration/3.Troubleshooting.md">GitHub Link</a>  </p>
<p>🔹 <strong>🎥 Watch the Video Tutorial on YouTube</strong>: <a target="_blank" href="https://youtu.be/RsjnwFSk6LU">Click Here</a></p>
<hr />
<h2 id="heading-on-premises-database-overview">🌍 <strong>On-Premises Database Overview</strong></h2>
<p>Before migration, let’s understand the existing <strong>on-prem</strong> database setup:</p>
<p>📌 <strong>Database Server Details</strong><br />🔹 <strong>Database:</strong> MySQL 5.7<br />🔹 <strong>Operating System:</strong> Ubuntu 24.04 LTS<br />🔹 <strong>Storage:</strong> 8 GB SSD<br />🔹 <strong>Tables:</strong> <code>obbs.tbladmin</code>, <code>obbs.tblbooking</code>, <code>obbs.tblcontact</code>, etc.<br />🔹 <strong>Replication Type:</strong> <strong>Full Load + Ongoing Replication (CDC)</strong></p>
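<p>Because the replication type is <strong>Full Load + Ongoing Replication (CDC)</strong>, binary logging must be enabled on the source MySQL server before DMS can capture changes. A minimal <code>my.cnf</code> sketch of the settings AWS DMS generally expects from a self-managed MySQL source (values are illustrative):</p>
<pre><code class="lang-ini"># /etc/mysql/my.cnf — binlog settings for DMS CDC (values illustrative)
[mysqld]
server-id        = 1
log_bin          = mysql-bin
binlog_format    = ROW     # DMS requires row-based binary logging
binlog_row_image = full
expire_logs_days = 1       # keep binlogs long enough for DMS to read them
</code></pre>
<p>Restart MySQL after editing, then confirm with <code>SHOW VARIABLES LIKE 'log_bin';</code>.</p>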
<p>💡 <strong>Challenges with On-Premises Databases:</strong><br />❌ <strong>High Maintenance Costs</strong> – Requires manual upgrades and monitoring.<br />❌ <strong>Limited Scalability</strong> – Hard to scale with increasing data load.<br />❌ <strong>Backup &amp; Recovery Issues</strong> – No automated snapshot capabilities.</p>
<p>✅ <strong>Why AWS RDS?</strong><br />✔️ Fully managed database with automated backups.<br />✔️ High availability with Multi-AZ deployment.<br />✔️ Auto-scaling capabilities for dynamic workloads.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738696439017/b15f2453-1f9c-4a2f-9982-fbad909f41de.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738696451351/43664643-fe8a-454f-bde5-41438f79b9a6.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738696458867/0dc8f32f-9bbd-45a4-bac5-609e18a4665c.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738696467724/190909a6-96cb-4103-9f31-3396604efba5.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738696475247/c5b092a6-bdca-4c4f-8724-16c795ebf4fe.png" alt class="image--center mx-auto" /></p>
<hr />
<h2 id="heading-step-1-create-target-rds-database-in-aws">🏗 <strong>Step 1: Create Target RDS Database in AWS</strong></h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738695958958/45522fe3-8551-4ad9-b385-2d1c7bfdb148.png" alt class="image--center mx-auto" /></p>
<p>📌 The <strong>first step</strong> is to <strong>provision an Amazon RDS MySQL instance</strong> as the <strong>target database</strong>.</p>
<h3 id="heading-steps-to-create-rds-instance">🛠 <strong>Steps to Create RDS Instance</strong></h3>
<p>1️⃣ <strong>Go to AWS Console → Amazon RDS → Create Database</strong><br />2️⃣ <strong>Select MySQL as Engine Type</strong><br />3️⃣ <strong>Choose Multi-AZ Deployment for High Availability</strong><br />4️⃣ <strong>Set Master Username &amp; Password</strong><br />5️⃣ <strong>Configure Security Groups for Database Access</strong><br />6️⃣ <strong>Enable Automated Backups &amp; Monitoring</strong><br />7️⃣ <strong>Click on Create Database</strong></p>
<p>📌 <strong>Once the RDS instance is running, note the endpoint.</strong><br />✅ Example: <code>database-1.cpioo8iee1me.us-west-2.rds.amazonaws.com</code></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738696596534/73d5c688-2dfb-4575-9f16-51094458c627.png" alt class="image--center mx-auto" /></p>
<hr />
<h2 id="heading-step-2-create-aws-dms-replication-instance">🚀 <strong>Step 2: Create AWS DMS Replication Instance</strong></h2>
<p>📌 AWS DMS Replication Instance acts as a <strong>bridge</strong> to replicate data from the <strong>source MySQL database</strong> to <strong>Amazon RDS</strong>.</p>
<h3 id="heading-steps-to-create-a-dms-replication-instance">🛠 <strong>Steps to Create a DMS Replication Instance</strong></h3>
<p>1️⃣ <strong>Go to AWS DMS Console → Replication Instances → Create Replication Instance</strong><br />2️⃣ <strong>Choose Instance Type:</strong> <code>dms.t3.medium</code> (for moderate workloads)<br />3️⃣ <strong>Set Storage:</strong> 100 GB (adjust based on database size)<br />4️⃣ <strong>Select VPC &amp; Security Groups:</strong> Ensure RDS and On-Prem Server are accessible<br />5️⃣ <strong>Click Create Replication Instance</strong></p>
<p>📌 <strong>Once the replication instance is available, proceed to configuring endpoints.</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738696027049/ad274d61-600b-4cca-a121-6ac77584ef6a.png" alt class="image--center mx-auto" /></p>
<hr />
<h2 id="heading-step-3-configure-source-amp-target-endpoints">🔄 <strong>Step 3: Configure Source &amp; Target Endpoints</strong></h2>
<h3 id="heading-source-endpoint-on-prem-mysql">🔹 <strong>Source Endpoint (On-Prem MySQL)</strong></h3>
<p>1️⃣ <strong>Go to AWS DMS Console → Endpoints → Create Endpoint</strong><br />2️⃣ <strong>Choose Source Type:</strong> MySQL<br />3️⃣ <strong>Enter Source Database Endpoint:</strong> <code>&lt;on-prem-db-IP&gt;:3306</code><br />4️⃣ <strong>Enter Username &amp; Password</strong><br />5️⃣ <strong>Test Connection → If Successful, Save Endpoint</strong></p>
<h3 id="heading-target-endpoint-amazon-rds-mysql">🔹 <strong>Target Endpoint (Amazon RDS MySQL)</strong></h3>
<p>1️⃣ <strong>Go to AWS DMS Console → Endpoints → Create Endpoint</strong><br />2️⃣ <strong>Choose Target Type:</strong> MySQL<br />3️⃣ <strong>Enter Amazon RDS Endpoint:</strong> <code>database-1.cpioo8iee1me.us-west-2.rds.amazonaws.com</code><br />4️⃣ <strong>Enter Username &amp; Password</strong><br />5️⃣ <strong>Test Connection → If Successful, Save Endpoint</strong></p>
<p>✅ <strong>With both endpoints successfully configured, we move to DMS Migration Task.</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738696084074/b8f57957-be38-4e37-a9ee-ec70c74f7151.png" alt class="image--center mx-auto" /></p>
<hr />
<h2 id="heading-step-4-execute-dms-migration-task">🚀 <strong>Step 4: Execute DMS Migration Task</strong></h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738695984225/1e64fbec-b940-4341-9e91-d51191da45a3.png" alt class="image--center mx-auto" /></p>
<p>📌 <strong>Create a migration task in AWS DMS to transfer data from On-Prem MySQL to AWS RDS.</strong></p>
<h3 id="heading-steps-to-create-dms-migration-task">🛠 <strong>Steps to Create DMS Migration Task</strong></h3>
<p>1️⃣ <strong>Go to AWS DMS Console → Database Migration Tasks → Create Task</strong><br />2️⃣ <strong>Select Source &amp; Target Endpoints</strong><br />3️⃣ <strong>Choose Replication Type:</strong> <code>Full Load + Ongoing Replication (CDC)</code><br />4️⃣ <strong>Enable Logging &amp; CloudWatch Monitoring</strong><br />5️⃣ <strong>Click Start Migration Task</strong></p>
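<p>The task wizard also asks for <strong>table mappings</strong>, which control what gets replicated. Here is a sketch of a selection rule that includes every table in the <code>obbs</code> schema, built in Python so the JSON can be validated before pasting it into the console (rule ids and names are arbitrary):</p>
<pre><code class="lang-python">import json

# Selection rule: replicate all tables in the "obbs" schema
table_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-obbs",
            "object-locator": {"schema-name": "obbs", "table-name": "%"},
            "rule-action": "include",
        }
    ]
}

print(json.dumps(table_mappings, indent=2))
</code></pre>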
<p>📌 <strong>DMS will now start migrating the data. Check CloudWatch logs for progress.</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738696123320/41f432df-72e6-416a-ae5a-9aedbef19d62.png" alt class="image--center mx-auto" /></p>
<p>Status: Created</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738696137030/46d0e539-c59d-43d4-88e9-be11a692a975.png" alt class="image--center mx-auto" /></p>
<p>Status: Load Complete</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738696157309/114f65b4-de09-43ae-96b3-67aa1b44402a.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738696179086/9983dbdf-5f99-4458-8529-baf89331eb6c.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738696186465/d8fba581-3bb1-41e1-a07b-8fc50684458d.png" alt class="image--center mx-auto" /></p>
<hr />
<h2 id="heading-step-5-testing-amp-validation">✅ <strong>Step 5: Testing &amp; Validation</strong></h2>
<p>📌 <strong>Once migration is complete, validate the database in AWS RDS.</strong></p>
<h3 id="heading-steps-to-verify-data-migration">🛠 <strong>Steps to Verify Data Migration</strong></h3>
<p>1️⃣ <strong>Connect to AWS RDS using MySQL Workbench or CLI.</strong></p>
<pre><code class="lang-bash">mysql -h database-1.cpioo8iee1me.us-west-2.rds.amazonaws.com -u admin -p
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738696298453/24896bac-7eb9-49c4-a1e5-4d688aa3263d.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738696309596/974e40e9-d38a-426a-867a-579a0142599d.png" alt class="image--center mx-auto" /></p>
<p>2️⃣ <strong>Check database and tables.</strong></p>
<pre><code class="lang-sql"><span class="hljs-keyword">SHOW</span> <span class="hljs-keyword">DATABASES</span>;
<span class="hljs-keyword">USE</span> obbs;
<span class="hljs-keyword">SHOW</span> <span class="hljs-keyword">TABLES</span>;
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738696322136/34ad604d-2f91-47b3-8f3b-1b77ba83b249.png" alt class="image--center mx-auto" /></p>
<p>3️⃣ <strong>Verify Data Consistency</strong></p>
<pre><code class="lang-sql"><span class="hljs-keyword">SELECT</span> <span class="hljs-keyword">COUNT</span>(*) <span class="hljs-keyword">FROM</span> obbs.tbladmin;
</code></pre>
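<p>When many tables are involved, comparing counts by eye gets error-prone. A small Python helper can diff source and target counts in one pass — the table names and numbers below are illustrative; in practice each dict would be filled from a <code>COUNT(*)</code> query against the respective endpoint:</p>
<pre><code class="lang-python">def diff_row_counts(source, target):
    """Return {table: (source_count, target_count)} for tables that differ."""
    mismatches = {}
    for table, src_count in source.items():
        tgt_count = target.get(table)
        if tgt_count != src_count:
            mismatches[table] = (src_count, tgt_count)
    return mismatches

# Illustrative counts — populate from COUNT(*) queries in practice
source = {"obbs.tbladmin": 1, "obbs.tblbooking": 42, "obbs.tblcontact": 7}
target = {"obbs.tbladmin": 1, "obbs.tblbooking": 42, "obbs.tblcontact": 6}

print(diff_row_counts(source, target))  # → {'obbs.tblcontact': (7, 6)}
</code></pre>
<p>An empty result means every checked table matched.</p>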
<p>✅ <strong>Data successfully migrated from on-prem to AWS RDS!</strong> 🎉</p>
<hr />
<h2 id="heading-summary-of-phase-2">🎯 <strong>Summary of Phase 2</strong></h2>
<p>✅ <strong>Provisioned AWS RDS MySQL as the Target Database.</strong><br />✅ <strong>Created AWS DMS Replication Instance for Data Transfer.</strong><br />✅ <strong>Configured Source &amp; Target Endpoints for Secure Migration.</strong><br />✅ <strong>Executed DMS Migration Task for Full Load &amp; Ongoing Replication.</strong><br />✅ <strong>Verified Data Consistency Post-Migration.</strong></p>
<p>📌 <strong>Next Step:</strong> <strong>Phase 3 - Application Migration</strong></p>
<p>🔗 <strong>GitHub Repo:</strong> <a target="_blank" href="https://github.com/prafulpatel16/mgn-aws-project01/blob/master/README.md">AWS Migration Project</a><br />🎥 <strong>Watch on YouTube:</strong> <a target="_blank" href="https://youtu.be/RsjnwFSk6LU">Click Here</a></p>
<p>🚀 <strong>Stay tuned for more AWS migration insights!</strong>  </p>
<p>#AWS #DMS #DatabaseMigration #CloudComputing #DevOps #RDS #AmazonWebServices</p>
]]></content:encoded></item><item><title><![CDATA[Migrating to AWS with Application Discovery Service (ADS)]]></title><description><![CDATA[📋 Overview
The Application Discovery Service (ADS) is the first step in migrating your workloads to AWS. It helps analyze your on-premises environment, identifying dependencies, resource usage, and configurations to streamline your migration journey...]]></description><link>https://praful.cloud/aws-migration-application-discovery-tco-analysis</link><guid isPermaLink="true">https://praful.cloud/aws-migration-application-discovery-tco-analysis</guid><category><![CDATA[AWS CloudMigration AWSMigration DevOps ApplicationDiscoveryService CloudComputing InfrastructureAsCode CostOptimization]]></category><dc:creator><![CDATA[Praful Patel]]></dc:creator><pubDate>Wed, 29 Jan 2025 17:50:25 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1738172409886/b78c2760-382c-4aac-9711-63d581e0528c.gif" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-overview"><strong>📋 Overview</strong></h3>
<p>The <strong>Application Discovery Service (ADS)</strong> is the first step in migrating your workloads to <strong>AWS</strong>. It helps analyze your <strong>on-premises environment</strong>, identifying dependencies, resource usage, and configurations to streamline your migration journey.</p>
<p>Cloud migration is a <strong>strategic decision</strong> that requires <strong>proper planning, cost analysis, and infrastructure assessment</strong> before execution. This blog serves as a <strong>step-by-step guide</strong> to migrating workloads from an <strong>on-premises environment</strong> to AWS using <strong>Application Discovery Service (ADS)</strong>.</p>
<h3 id="heading-follow-the-detailed-documentation-on-github"><strong>📌 Follow the Detailed Documentation on GitHub</strong></h3>
<p>For in-depth <strong>step-by-step guidance</strong> on AWS migration using <strong>Application Discovery Service (ADS)</strong> and <strong>TCO analysis</strong>, refer to the full project documentation on GitHub.</p>
<p>🔗 <strong>GitHub Repository:</strong> <a target="_blank" href="https://github.com/prafulpatel16/mgn-aws-project01">AWS Migration Project</a></p>
<p>🔗 <a target="_blank" href="https://youtu.be/RsjnwFSk6LU">Watch the Project Video on YouTube</a></p>
<p>📄 <strong>Detailed Guides Include:</strong><br />✅ <strong>Phase 1:</strong> <a target="_blank" href="https://github.com/prafulpatel16/mgn-aws-project01/blob/master/migration/A-Phase%201-AWS%20Application%20Discovery%20%26%20TCO%20Analysis/2.HighLevel-TCO-Analysis.md">AWS Application Discovery &amp; TCO Analysis</a> 📊<br />✅ <strong>Phase 2:</strong> Migration Planning &amp; Execution 🚀<br />✅ <strong>Phase 3:</strong> Post-Migration Optimization &amp; Scaling 📈</p>
<h3 id="heading-what-is-aws-application-discovery-service-ads"><strong>🌍 What is AWS Application Discovery Service (ADS)?</strong></h3>
<p>ADS is the <strong>first step</strong> in migrating workloads to AWS. It helps <strong>analyze on-premises infrastructure</strong>, identifying:<br />✅ <strong>Resource utilization</strong> (CPU, memory, storage, and networking).<br />✅ <strong>Application dependencies</strong> to ensure a smooth transition.<br />✅ <strong>Right-sizing AWS infrastructure</strong> for cost optimization.</p>
<h3 id="heading-key-focus-areas-of-this-blog"><strong>📌 Key Focus Areas of This Blog</strong></h3>
<p>1️⃣ <strong>Phase 1: Discovery &amp; TCO Analysis</strong></p>
<ul>
<li><p><strong>Analyze on-premises workloads</strong> using ADS.</p>
</li>
<li><p><strong>Evaluate cost savings</strong> through a <strong>Total Cost of Ownership (TCO) analysis</strong>.</p>
</li>
<li><p><strong>Plan AWS infrastructure (EC2, RDS, Auto Scaling, Storage).</strong></p>
</li>
</ul>
<h1 id="heading-on-premises-infrastructure-setup-for-aws-migration">🌐 On-Premises Infrastructure Setup for AWS Migration</h1>
<p>Before migrating workloads to AWS, it is crucial to <strong>understand the existing on-premises environment</strong>. This helps in determining <strong>infrastructure dependencies, resource utilization, and compatibility with AWS services</strong>. 🚀</p>
<h2 id="heading-on-premises-setup-overview"><strong>🏢 On-Premises Setup Overview</strong></h2>
<p>🔹 <strong>Web Application Server</strong></p>
<ul>
<li><p>Hosted on a <strong>local server</strong> running <strong>Apache or NGINX</strong>.</p>
</li>
<li><p>Handles HTTP requests and serves dynamic/static content.</p>
</li>
<li><p>Connected to the <strong>database server</strong> for backend processing.</p>
</li>
</ul>
<p>🔹 <strong>Database Server</strong></p>
<ul>
<li><p>Uses <strong>MySQL</strong> as the primary database.</p>
</li>
<li><p>Stores application data, user information, and business transactions.</p>
</li>
<li><p>Communicates with the web application over a <strong>local network</strong>.</p>
</li>
</ul>
<h1 id="heading-web-application-ux-online-banquet-booking-system">🎉 Web Application UX – Online Banquet Booking System</h1>
<p>A seamless <strong>user experience (UX)</strong> is the backbone of any successful <strong>online banquet booking system</strong>. Whether users are <strong>reserving banquet halls for weddings, corporate events, or private parties</strong>, the platform should be <strong>intuitive, responsive, and hassle-free</strong>.</p>
<p>Let’s explore the <strong>ideal UX flow</strong> for an <strong>Online Banquet Booking System</strong> and how it ensures a smooth journey for users. 🚀</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738103159353/9153516a-c0da-4f4a-85ee-063f7f35dec2.png?auto=compress,format&amp;format=webp" alt /></p>
<h2 id="heading-key-features-for-an-intuitive-ux"><strong>🖥️ Key Features for an Intuitive UX</strong></h2>
<h3 id="heading-1-home-page-first-impressions-matter">1️⃣ <strong>Home Page – First Impressions Matter!</strong> 🏡</h3>
<ul>
<li><p>A <strong>visually appealing landing page</strong> with a search bar for quick venue lookup.</p>
</li>
<li><p>High-quality <strong>images &amp; videos</strong> showcasing banquet halls.</p>
</li>
<li><p>Featured halls with <strong>ratings, price range, and capacity</strong>.</p>
</li>
<li><p><strong>Call-to-Action (CTA)</strong>: <em>“Find Your Perfect Venue”</em> 🔎</p>
</li>
</ul>
<h3 id="heading-2-venue-search-amp-filtering">2️⃣ <strong>Venue Search &amp; Filtering</strong> 🔍</h3>
<ul>
<li>Smart search with <strong>filters</strong> for:<br />  ✅ Location 📍<br />  ✅ Date &amp; Availability 📅<br />  ✅ Capacity 🏢<br />  ✅ Budget 💰<br />  ✅ Amenities (WiFi, Parking, Catering) 🍽️</li>
</ul>
<h3 id="heading-3-venue-details-page">3️⃣ <strong>Venue Details Page</strong> 🏢</h3>
<ul>
<li><p><strong>Detailed descriptions</strong> of banquet halls with images, pricing, and amenities.</p>
</li>
<li><p><strong>Customer reviews &amp; ratings</strong> ⭐⭐⭐⭐⭐</p>
</li>
<li><p><strong>Availability calendar</strong> to check open slots.</p>
</li>
<li><p>A <strong>"Book Now" button</strong> for seamless reservations.</p>
</li>
</ul>
<h3 id="heading-4-seamless-booking-flow">4️⃣ <strong>Seamless Booking Flow</strong> 🛒</h3>
<ul>
<li><p><strong>Step 1</strong>: User selects a <strong>date &amp; time slot</strong>.</p>
</li>
<li><p><strong>Step 2</strong>: Chooses <strong>add-ons (catering, decoration, music, etc.)</strong> 🎶</p>
</li>
<li><p><strong>Step 3</strong>: Provides <strong>personal details &amp; special requests</strong>.</p>
</li>
<li><p><strong>Step 4</strong>: <strong>Online payment integration</strong> 💳 (Stripe, PayPal, UPI).</p>
</li>
<li><p><strong>Step 5</strong>: Confirmation email &amp; SMS 📩.</p>
</li>
</ul>
<h3 id="heading-5-user-dashboard-amp-booking-management">5️⃣ <strong>User Dashboard &amp; Booking Management</strong> 🛠️</h3>
<ul>
<li><p>Users can <strong>view, modify, or cancel bookings</strong>.</p>
</li>
<li><p>Download <strong>invoices &amp; event details</strong> 📜.</p>
</li>
<li><p>Track upcoming reservations.</p>
</li>
<li><p>Loyalty rewards &amp; offers for repeat customers. 🎁</p>
</li>
</ul>
<h3 id="heading-6-admin-panel-for-venue-owners">6️⃣ <strong>Admin Panel for Venue Owners</strong> 🏢</h3>
<ul>
<li><p>Manage <strong>venue listings, pricing, and availability</strong>.</p>
</li>
<li><p>Track <strong>customer bookings &amp; payments</strong>.</p>
</li>
<li><p>Generate <strong>reports on revenue &amp; performance</strong> 📊.</p>
</li>
</ul>
<h2 id="heading-challenges-with-on-premises-infrastructure"><strong>🔍 Challenges with On-Premises Infrastructure</strong></h2>
<p>❌ <strong>High Maintenance Costs</strong> – Requires constant hardware upgrades and monitoring.<br />❌ <strong>Scalability Issues</strong> – Limited ability to scale resources dynamically.<br />❌ <strong>Security Risks</strong> – On-premises setups often require manual security configurations.<br />❌ <strong>Disaster Recovery Concerns</strong> – Risk of data loss without cloud backups or redundancy.</p>
<h2 id="heading-next-steps-migrating-to-aws-ec2-amp-rds"><strong>🔜 Next Steps: Migrating to AWS EC2 &amp; RDS</strong></h2>
<p>With the <strong>AWS Application Discovery data</strong> analyzed, the next step is to <strong>migrate workloads to AWS</strong> efficiently. In <strong>Phase 2</strong>, we will:</p>
<p>✅ Choose the right <strong>EC2 instance types &amp; storage</strong><br />✅ Set up <strong>Auto Scaling &amp; Load Balancers</strong><br />✅ Optimize database migration with <strong>RDS or Aurora</strong></p>
<h2 id="heading-whttpsgithubcomprafulpatel16mgn-aws-project01hy-migrate-to-aws"><strong>💡 Why Migrate to AWS?</strong></h2>
<p>✅ <strong>Elastic Compute Scaling</strong> – Easily scale web servers using <strong>EC2 Auto Scaling</strong>.<br />✅ <strong>Managed Databases</strong> – Leverage <strong>Amazon RDS</strong> for automated backups and maintenance.<br />✅ <strong>High Availability</strong> – Deploy across multiple <strong>AWS Regions &amp; Availability Zones</strong>.<br />✅ <strong>Lower TCO (Total Cost of Ownership)</strong> – Reduce upfront infrastructure costs.</p>
<h2 id="heading-what-is-aws-application-dischttpsgithubcomprafulpatel16mgn-aws-project01overy-service"><strong>🏗 What is AWS Application Discovery Service?</strong></h2>
<p>AWS <strong>Application Discovery Service (ADS)</strong> helps enterprises <strong>automate the collection of on-premises system metadata</strong> before migrating workloads to AWS.</p>
<h3 id="heading-how-aws-ads-works"><strong>How AWS ADS Works</strong></h3>
<p>🔹 Scans <strong>on-premises servers</strong> and collects <strong>CPU, memory, storage, and network data</strong>.<br />🔹 Identifies <strong>application dependencies</strong> and usage patterns.<br />🔹 Provides insights for <strong>right-sizing AWS infrastructure</strong> post-migration.</p>
<p>AWS ADS supports <strong>two discovery methods</strong>:<br />1️⃣ <strong>Agent-Based Discovery</strong> (For deep-level data collection, including system processes).<br />2️⃣ <strong>Agentless Discovery</strong> (For VMware-based environments).</p>
<p>➡️ <strong>Detailed Documentation</strong>: <a target="_blank" href="https://docs.aws.amazon.com/application-discovery/latest/userguide/what-is-appdiscovery.html"><strong>AWS ADS Official Guide</strong></a></p>
<hr />
<h2 id="heading-why-use-aws-ads-for-migration">🎯 <strong>Why Use AWS ADS for Migration?</strong></h2>
<p>✅ <strong>Comprehensive Data Collection</strong>: Automates <strong>server discovery</strong> and provides real-time metrics.<br />✅ <strong>Migration Planning</strong>: Helps identify <strong>dependencies</strong> and application workloads.<br />✅ <strong>Optimized AWS Resources</strong>: Right-size EC2 and RDS instances post-migration.<br />✅ <strong>Cost-Saving</strong>: Eliminates over-provisioning by analyzing <strong>actual resource usage</strong>.</p>
<h2 id="heading-pre-requisites-amp-deployment"><strong>⚙️ Pre-Requisites &amp; Deployment</strong></h2>
<p>Before <strong>deploying AWS ADS</strong>, ensure the following:</p>
<h3 id="heading-1-pre-requisite-setup">🔹 <strong>1️⃣ Pre-Requisite Setup</strong></h3>
<p>✅ Configure <strong>IAM Roles &amp; Policies</strong> for ADS to access on-premises servers.<br />✅ Verify <strong>network connectivity</strong> between your <strong>on-premises servers</strong> and AWS.<br />✅ Ensure your <strong>firewall rules</strong> allow ADS communication.</p>
<p>📄 <strong>Step-by-Step Guide</strong>: <a target="_blank" href="https://github.com/prafulpatel16/mgn-aws-project01/blob/master/migration/A-Discover/0.Pre-requisite.md"><strong>Pre-Requisite Setup</strong></a></p>
<hr />
<h3 id="heading-2-deploy-aws-ads">🔹 <strong>2️⃣ Deploy AWS ADS</strong></h3>
<p>🚀 <strong>Steps to deploy</strong>:<br />1️⃣ Install <strong>AWS Discovery Agent</strong> on on-premises servers.<br />2️⃣ Configure <strong>agent-based or agentless discovery</strong> (depending on your infrastructure).<br />3️⃣ Set up <strong>AWS Migration Hub</strong> to monitor discovery insights.<br />4️⃣ Verify <strong>ADS is collecting data</strong> from your on-premises workloads.</p>
<p>📄 <strong>Detailed Guide</strong>: <a target="_blank" href="https://github.com/prafulpatel16/mgn-aws-project01/blob/master/migration/A-Phase%201-AWS%20Application%20Discovery%20%26%20TCO%20Analysis/1.Deploy.md"><strong>AWS ADS Deployment</strong></a></p>
<hr />
<h2 id="heading-key-benefits-of-aws-ads">🎯 <strong>Key Benefits of AWS ADS</strong></h2>
<h3 id="heading-1-data-driven-migration-decisions">✅ <strong>1. Data-Driven Migration Decisions</strong></h3>
<p>✔️ Collect real-time data on <strong>CPU, memory, and disk utilization</strong>.<br />✔️ Avoid assumptions and <strong>right-size AWS instances</strong> based on actual resource usage.</p>
<h3 id="heading-2-application-dependency-mapping">🔗 <strong>2. Application Dependency Mapping</strong></h3>
<p>✔️ Identify <strong>interconnected workloads</strong> that must be migrated together.<br />✔️ Reduce post-migration downtime and compatibility issues.</p>
<h3 id="heading-3-cost-optimization">💰 <strong>3. Cost Optimization</strong></h3>
<p>✔️ Prevent <strong>over-provisioning AWS resources</strong>.<br />✔️ Ensure <strong>efficient workload placement</strong> based on real-world usage metrics.</p>
<h3 id="heading-4-faster-migration-planning">📊 <strong>4. Faster Migration Planning</strong></h3>
<p>✔️ Reduce planning time using <strong>automated discovery insights</strong>.<br />✔️ Eliminate the need for <strong>manual infrastructure assessments</strong>.</p>
<h1 id="heading-phase-1-aws-application-discovery-amp-tco-analysis">🚀 Phase 1: AWS Application Discovery &amp; TCO Analysis</h1>
<p>Cloud migration is more than just moving workloads to the cloud. <strong>Understanding your existing infrastructure, dependencies, and costs</strong> is critical before making the move. In <strong>Phase 1</strong>, we leveraged <strong>AWS Application Discovery Service (ADS)</strong> to analyze our environment and performed a <strong>Total Cost of Ownership (TCO) Analysis</strong> to estimate the potential savings and efficiency improvements.</p>
<p>This blog walks you through the <strong>key steps, tools used, and what we achieved</strong> in this phase! 📊</p>
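<p>As a toy illustration of what a TCO comparison boils down to, here is a minimal Python sketch over a 36-month horizon. <strong>Every figure below is a hypothetical placeholder</strong>, not real AWS or hardware pricing — the actual numbers come from the ADS data and the AWS Pricing Calculator:</p>
<pre><code class="lang-python">def three_year_tco(monthly_cost, upfront=0.0):
    """Total cost of ownership over a 36-month horizon."""
    return upfront + monthly_cost * 36

# Hypothetical placeholder figures, not real pricing
onprem = three_year_tco(monthly_cost=1200.0, upfront=15000.0)  # hardware refresh + ops
aws    = three_year_tco(monthly_cost=900.0)                    # pay-as-you-go, no upfront

savings_pct = (onprem - aws) / onprem * 100
print(f"on-prem: ${onprem:,.0f}  aws: ${aws:,.0f}  savings: {savings_pct:.1f}%")
</code></pre>
<p>The shape of the comparison — upfront capital plus recurring operations versus pure pay-as-you-go — is what the high-level TCO analysis in the GitHub docs works through with real numbers.</p>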
<hr />
<h2 id="heading-key-steps-in-phase-1">📌 Key Steps in Phase 1</h2>
<p>🔹 <a target="_blank" href="https://github.com/prafulpatel16/mgn-aws-project01/blob/master/migration/A-Phase%201-AWS%20Application%20Discovery%20%26%20TCO%20Analysis"><strong>AWS Application Discovery &amp; TCO Analysis</strong></a></p>
<ul>
<li><p>📄 <a target="_blank" href="https://github.com/prafulpatel16/mgn-aws-project01/blob/master/migration/A-Phase%201-AWS%20Application%20Discovery%20%26%20TCO%20Analysis/0.Pre-requisite.md"><strong>Pre-Requisite: Setup Requirements</strong></a></p>
</li>
<li><p>📄 <a target="_blank" href="https://github.com/prafulpatel16/mgn-aws-project01/blob/master/migration/A-Phase%201-AWS%20Application%20Discovery%20%26%20TCO%20Analysis/1.Deploy.md"><strong>Discovery Service Deployment: Collecting Data</strong></a></p>
</li>
<li><p>📄 <a target="_blank" href="https://github.com/prafulpatel16/mgn-aws-project01/blob/master/migration/A-Phase%201-AWS%20Application%20Discovery%20%26%20TCO%20Analysis/2.HighLevel-TCO-Analysis.md"><strong>High-Level TCO Analysis: Cost Breakdown</strong></a></p>
</li>
<li><p>📄 <a target="_blank" href="https://github.com/prafulpatel16/mgn-aws-project01/tree/master/docs"><strong>Complete AWS Migration Documentation</strong></a></p>
</li>
</ul>
<hr />
<h2 id="heading-what-we-achieved-in-phase-1">🔍 What We Achieved in Phase 1</h2>
<h3 id="heading-1-setting-up-aws-migration-prerequisites">✅ <strong>1. Setting Up AWS Migration Prerequisites</strong></h3>
<p>📄 <a target="_blank" href="https://github.com/prafulpatel16/mgn-aws-project01/blob/master/migration/A-Phase%201-AWS%20Application%20Discovery%20%26%20TCO%20Analysis/1.Deploy.md"><strong>Discovery Service Deployment: Collecting Data</strong></a></p>
<ul>
<li>Configured <strong>AWS Migration Hub</strong> to track migration progress 🌍.</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738173631339/e0d343a3-0859-4354-a657-3aa7b53e9546.png" alt class="image--center mx-auto" /></p>
<ul>
<li>Created necessary <strong>IAM roles and permissions</strong> for secure access 🔐.</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738173654667/eb9e9e12-d760-442d-8744-72685a4878b0.png" alt class="image--center mx-auto" /></p>
<ul>
<li>Defined <strong>AWS Application Discovery Service (ADS) setup</strong> to analyze infrastructure.</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738173682129/31c67e0a-0066-49ea-84cf-6beb2a98840e.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-2-deploying-aws-discovery-service">🛠 <strong>2. Deploying AWS Discovery Service</strong></h3>
<ul>
<li>Installed <strong>ADS Agents</strong> on <strong>Web and Database servers</strong> to collect system data 🖥️📊.</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738173702516/6850738f-0e1c-4de9-8580-f69c12763863.png" alt class="image--center mx-auto" /></p>
<p>Data Collectors:</p>
<p>WebServer</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738173873910/aa328dbe-cfeb-408b-af0b-360577a4b4bc.png" alt class="image--center mx-auto" /></p>
<p>Technical Information</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738174170251/10b4f3c5-8eb2-4651-a5b5-dc40b509ec81.png" alt class="image--center mx-auto" /></p>
<p>Performance Information</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738174186640/0c4f3adc-d8f9-4bf6-bc2a-37666a691a93.png" alt class="image--center mx-auto" /></p>
<p>DbServer</p>
<p>Technical Information</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738174227138/786f3146-6c43-4554-8f7a-df6b4618f5da.png" alt class="image--center mx-auto" /></p>
<p>Performance Information</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738174257982/e62d6310-3654-4b40-a3d2-2a8977f1c7de.png" alt class="image--center mx-auto" /></p>
<ul>
<li>Verified <strong>network dependencies</strong>, CPU utilization, memory usage, and active processes.</li>
</ul>
<p>WebServer</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738173737332/d500ed8d-d3bf-421c-a1f7-a30703f1d72d.png" alt class="image--center mx-auto" /></p>
<p>DbServer</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738173923016/ffdc4941-9f91-4386-9fc9-3923d82a1021.png" alt class="image--center mx-auto" /></p>
<ul>
<li>Organized servers into <strong>Application Groups</strong> for better classification.</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738173780255/f0565d44-a8e3-444b-80a6-5a45ade1f229.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738174280279/901c3807-3862-4648-ae6e-fedc23d07700.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-3-high-level-tco-total-cost-of-ownership-analysis">💰 <strong>3. High-Level TCO (Total Cost of Ownership) Analysis</strong></h3>
<ul>
<li><p>Compared <strong>on-premise costs vs. AWS projected costs</strong> across:</p>
<ul>
<li><p>Compute (EC2 vs. on-prem servers) ⚙️</p>
</li>
<li><p>Storage (S3, EBS vs. local storage) 📦</p>
</li>
<li><p>Networking (AWS Data Transfer vs. existing network costs) 🌐</p>
</li>
<li><p>Database (AWS RDS vs. on-prem SQL/Oracle) 🗄️</p>
</li>
</ul>
</li>
<li><p>Identified <strong>potential cost savings and ROI</strong> for cloud migration 📉.</p>
</li>
</ul>
<hr />
<h2 id="heading-tools-used-in-this-phase">🛠️ Tools Used in This Phase</h2>
<p>🔹 <strong>AWS Application Discovery Service (ADS)</strong> – Collected system performance, network dependencies, and server data.<br />🔹 <strong>AWS Migration Hub</strong> – Centralized dashboard for tracking server inventory and migration status.<br />🔹 <strong>IAM (Identity and Access Management)</strong> – Created roles and permissions for secure access.<br />🔹 <strong>CloudTrail</strong> – Monitored activities and logs related to migration actions.<br />🔹 <strong>Application Groups</strong> – Used in Migration Hub to organize and manage servers efficiently.</p>
<hr />
<h3 id="heading-high-level-tco-analysis-for-webserver-amp-databaseserver-migration-to-aws">📊 <strong>High-Level TCO Analysis for WebServer &amp; DatabaseServer Migration to AWS</strong></h3>
<hr />
<h2 id="heading-overview-1"><strong>📋 Overview</strong></h2>
<p>As organizations migrate their <strong>Web Servers</strong> and <strong>Database Servers</strong> from <strong>on-premises infrastructure</strong> to <strong>AWS Cloud</strong>, performing a <strong>Total Cost of Ownership (TCO) analysis</strong> is crucial. This document evaluates the <strong>on-prem vs AWS cloud cost breakdown</strong>, helping decision-makers <strong>optimize costs, improve scalability, and enhance security</strong>.</p>
<h3 id="heading-what-this-guide-covers"><strong>✅ What This Guide Covers</strong></h3>
<ul>
<li><p>📊 <strong>TCO breakdown of on-prem vs. AWS</strong></p>
</li>
<li><p>💻 <strong>Recommended AWS services for Web &amp; Database servers</strong></p>
</li>
<li><p>💰 <strong>Projected cost savings and optimization strategies</strong></p>
</li>
</ul>
<p>For a <strong>detailed documentation</strong> on the <strong>TCO (Total Cost of Ownership) Analysis</strong>, visit:</p>
<p>📄 <a target="_blank" href="https://github.com/prafulpatel16/mgn-aws-project01/blob/master/migration/A-Phase%201-AWS%20Application%20Discovery%20%26%20TCO%20Analysis/2.HighLevel-TCO-Analysis.md"><strong>📌 High-Level TCO Analysis Documentation</strong></a></p>
<h2 id="heading-current-on-premises-infrastructure-overview"><strong>📌 Current On-Premises Infrastructure Overview</strong></h2>
<h2 id="heading-web-server-analysis-ip-10-0-1-82"><strong>📌 Web Server Analysis (ip-10-0-1-82) 💻</strong></h2>
<p>🔹 <strong>Application</strong>: WebApp<br />🔹 <strong>OS</strong>: Ubuntu 24.04.1 LTS<br />🔹 <strong>CPU</strong>: 1 vCPU (x86_64)<br />🔹 <strong>RAM</strong>: 1 GB<br />🔹 <strong>Storage</strong>: 8 GB SSD<br />🔹 <strong>Hypervisor</strong>: Xen<br />🔹 <strong>Network Interfaces</strong>: 1</p>
<h3 id="heading-performance-metrics"><strong>Performance Metrics</strong> 📊</h3>
<ul>
<li><p><strong>CPU Usage</strong>: 0.35% 🖥️</p>
</li>
<li><p><strong>RAM Usage</strong>: 38.77% 💾</p>
</li>
<li><p><strong>Disk Reads</strong>: 1.59 KBPS 📥</p>
</li>
<li><p><strong>Disk Writes</strong>: 2.82 KBPS 📤</p>
</li>
</ul>
<p>💡 <strong>Observations</strong>:<br />✅ CPU usage is minimal, meaning we don’t need a high-end instance.<br />✅ RAM consumption is moderate (38.77%), but might need an upgrade in production.<br />✅ Disk I/O is <strong>low</strong>, so an <strong>EBS gp3 volume</strong> would be a good fit post-migration.</p>
<h2 id="heading-database-server-analysis-ip-10-0-2-254"><strong>🗄️ Database Server Analysis (ip-10-0-2-254)</strong></h2>
<p>🔹 <strong>Application</strong>: DatabaseServer<br />🔹 <strong>OS</strong>: Ubuntu 24.04.1 LTS<br />🔹 <strong>CPU</strong>: 1 vCPU (x86_64)<br />🔹 <strong>RAM</strong>: 1 GB<br />🔹 <strong>Storage</strong>: 8 GB SSD<br />🔹 <strong>Hypervisor</strong>: Xen<br />🔹 <strong>Network Interfaces</strong>: 1</p>
<h3 id="heading-performance-metrics-1"><strong>Performance Metrics</strong> 📊</h3>
<ul>
<li><p><strong>CPU Usage</strong>: 0.31% 🖥️</p>
</li>
<li><p><strong>RAM Usage</strong>: 43.34% 💾</p>
</li>
<li><p><strong>Disk Reads</strong>: 0.86 KBPS 📥</p>
</li>
<li><p><strong>Disk Writes</strong>: 1.79 KBPS 📤</p>
</li>
</ul>
<p>💡 <strong>Observations</strong>:<br />✅ <strong>Low CPU utilization</strong> suggests no need for a compute-heavy instance.<br />✅ <strong>Higher RAM usage (43.34%)</strong> indicates potential memory constraints.<br />✅ <strong>Disk I/O is light</strong>, making <strong>gp3 SSD storage</strong> an optimal choice.</p>
<h2 id="heading-migration-considerations-amp-recommendations"><strong>🛠️ Migration Considerations &amp; Recommendations</strong></h2>
<p>Based on this <strong>performance analysis</strong>, here’s our recommended AWS migration plan:</p>
<p>🔹 <strong>EC2 Instance Sizing</strong>:</p>
<ul>
<li><p>Web Server → <strong>t3.micro</strong> (2 vCPU, 1GB RAM)</p>
</li>
<li><p>Database Server → <strong>t3.small</strong> (2 vCPU, 2GB RAM) (Upgrade for better performance)</p>
</li>
</ul>
<p>🔹 <strong>Storage Optimization</strong>:</p>
<ul>
<li>Use <strong>gp3 EBS volumes</strong> for both servers to balance <strong>cost and performance</strong>.</li>
</ul>
<p>🔹 <strong>Auto Scaling</strong>:</p>
<ul>
<li>Configure <strong>AWS Auto Scaling</strong> to handle unexpected traffic spikes.</li>
</ul>
<p>🔹 <strong>Database Optimization</strong>:</p>
<ul>
<li>Consider <strong>Amazon RDS</strong> for managed PostgreSQL/MySQL instead of self-managed DB.</li>
</ul>
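<p>The sizing logic above can be sketched as a small Python helper. The utilization thresholds, the ~50% headroom factor, and the one-size bump for databases are illustrative assumptions for this walkthrough, not AWS guidance:</p>

```python
# Illustrative right-sizing helper based on the ADS metrics collected above.
# Thresholds and the candidate instance list are demo assumptions.
def recommend_instance(ram_gb, ram_usage_pct, cpu_usage_pct, is_database=False):
    sizes = ["t3.micro", "t3.small", "t3.medium"]  # 1 / 2 / 4 GiB of RAM
    # Estimate the working set and leave ~50% headroom for production.
    needed_ram = ram_gb * (ram_usage_pct / 100) * 2
    if cpu_usage_pct < 5 and needed_ram <= 1:
        idx = 0
    elif needed_ram <= 2:
        idx = 1
    else:
        idx = 2
    if is_database:
        # Bump database servers one size for extra memory headroom.
        idx = min(idx + 1, len(sizes) - 1)
    return sizes[idx]

# Metrics observed by ADS for the two servers in this project:
web = recommend_instance(ram_gb=1, ram_usage_pct=38.77, cpu_usage_pct=0.35)
db = recommend_instance(ram_gb=1, ram_usage_pct=43.34, cpu_usage_pct=0.31, is_database=True)
print(f"Web → {web}, DB → {db}")  # → Web → t3.micro, DB → t3.small
```

<p>Running it on the observed metrics reproduces the recommendation above: <code>t3.micro</code> for the web server and <code>t3.small</code> for the database server.</p>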
<h2 id="heading-tco-breakdown-on-premises-vs-aws"><strong>📊 TCO Breakdown: On-Premises vs. AWS</strong></h2>
<h3 id="heading-on-premises-cost-estimate">💰 <strong>On-Premises Cost Estimate</strong></h3>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Category</strong></td><td><strong>Estimated Annual Cost (USD)</strong></td></tr>
</thead>
<tbody>
<tr>
<td><strong>Hardware (CPU, RAM, Storage, Network)</strong></td><td>$1000</td></tr>
<tr>
<td><strong>Power &amp; Cooling</strong></td><td>$250</td></tr>
<tr>
<td><strong>Network Infrastructure (Firewall, Router, Bandwidth)</strong></td><td>$400</td></tr>
<tr>
<td><strong>IT Maintenance (Admin, Patching, Backups)</strong></td><td>$800</td></tr>
<tr>
<td><strong>Security &amp; Compliance</strong></td><td>$500</td></tr>
<tr>
<td><strong>Backup &amp; Disaster Recovery</strong></td><td>$300</td></tr>
<tr>
<td><strong>Total On-Premises Cost</strong></td><td><strong>~$3250 per year</strong></td></tr>
</tbody>
</table>
</div><h3 id="heading-aws-cost-estimate-ec2-rds"><strong>☁️ AWS Cost Estimate (EC2 + RDS)</strong></h3>
<p><em>Note</em>: This estimate prices Graviton-based <strong>t4g</strong> instances, a lower-cost alternative to the x86 <strong>t3</strong> sizes recommended above; adjust the rows if you stay on t3.</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>AWS Service</strong></td><td><strong>Estimated Monthly Cost (USD)</strong></td><td><strong>Estimated Annual Cost (USD)</strong></td></tr>
</thead>
<tbody>
<tr>
<td><strong>t4g.micro EC2 (WebServer)</strong></td><td>$7.70</td><td>$92.40</td></tr>
<tr>
<td><strong>t4g.nano EC2 (DatabaseServer)</strong></td><td>$3.85</td><td>$46.20</td></tr>
<tr>
<td><strong>10GB EBS (gp3 Storage)</strong></td><td>$2.00</td><td>$24.00</td></tr>
<tr>
<td><strong>Amazon RDS (db.t4g.micro, 10GB SSD, Multi-AZ)</strong></td><td>$18.00</td><td>$216.00</td></tr>
<tr>
<td><strong>Data Transfer (Low-Traffic Estimate)</strong></td><td>$4.00</td><td>$48.00</td></tr>
<tr>
<td><strong>Amazon CloudWatch (Basic Monitoring)</strong></td><td>Free</td><td>Free</td></tr>
<tr>
<td><strong>Backup (EBS Snapshot + RDS Automated Backups)</strong></td><td>$5.00</td><td>$60.00</td></tr>
<tr>
<td><strong>Total AWS Cost</strong></td><td><strong>$40.55</strong></td><td><strong>~$486.60 per year</strong></td></tr>
</tbody>
</table>
</div><h2 id="heading-tco-savings-analysis"><strong>📉 TCO Savings Analysis</strong></h2>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Metric</strong></td><td><strong>On-Premises</strong></td><td><strong>AWS</strong></td><td><strong>Savings (%)</strong></td></tr>
</thead>
<tbody>
<tr>
<td><strong>Annual Cost</strong></td><td>~$3250</td><td>~$486</td><td><strong>85% Reduction</strong></td></tr>
<tr>
<td><strong>Scalability</strong></td><td>Low</td><td>High</td><td><strong>Flexible Auto-Scaling</strong></td></tr>
<tr>
<td><strong>Maintenance</strong></td><td>High</td><td>Low</td><td><strong>Managed Services</strong></td></tr>
<tr>
<td><strong>Security</strong></td><td>Manual</td><td>AWS IAM, WAF</td><td><strong>Better Compliance</strong></td></tr>
<tr>
<td><strong>Performance Monitoring</strong></td><td>Manual</td><td>CloudWatch</td><td><strong>Automated Insights</strong></td></tr>
</tbody>
</table>
</div><p>✅ <strong>AWS Migration reduces costs by ~85%</strong><br />✅ <strong>Significant reduction in maintenance, networking, and power costs</strong><br />✅ <strong>AWS RDS provides auto-scaling &amp; automatic failover for databases</strong></p>
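<p>The savings figure can be verified directly from the line items in the two tables above with a quick sanity check in Python:</p>

```python
# Recompute the TCO comparison from the table rows above.
onprem_annual = 1000 + 250 + 400 + 800 + 500 + 300               # on-prem line items (USD/yr)
aws_monthly = 7.70 + 3.85 + 2.00 + 18.00 + 4.00 + 0.00 + 5.00    # AWS line items (USD/mo)
aws_annual = aws_monthly * 12
savings_pct = (1 - aws_annual / onprem_annual) * 100

print(f"On-prem: ${onprem_annual}/yr, AWS: ${aws_annual:.2f}/yr, savings: {savings_pct:.0f}%")
# → On-prem: $3250/yr, AWS: $486.60/yr, savings: 85%
```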
<h2 id="heading-next-steps-how-to-perform-a-detailed-tco-analysis"><strong>📌 Next Steps - How to Perform a Detailed TCO Analysis</strong></h2>
<h3 id="heading-step-1-identify-current-costs"><strong>Step 1: Identify Current Costs</strong></h3>
<p>📌 <strong>Analyze</strong> existing <strong>hardware, software, networking, maintenance, and security</strong> costs.<br />📌 <strong>Calculate</strong> power, cooling, IT support, and disaster recovery expenses.</p>
<h3 id="heading-step-2-map-aws-services"><strong>Step 2: Map AWS Services</strong></h3>
<p>📌 Identify <strong>right-sized EC2 instances, Amazon RDS, storage options</strong>.<br />📌 Estimate <strong>AWS Networking, Backup, and Security Costs</strong>.</p>
<h3 id="heading-step-3-calculate-aws-tco"><strong>Step 3: Calculate AWS TCO</strong></h3>
<p>📌 Use the <strong>AWS Pricing Calculator</strong> → <a target="_blank" href="https://calculator.aws/#/"><strong>AWS Pricing Calculator</strong></a><br />📌 Compare <strong>on-premises annual costs vs. AWS annual costs</strong>.</p>
<h3 id="heading-sthttpscalculatorawsep-4-optimize-aws-costshttpscalculatoraws"><strong>Step 4: Optimize AWS Costs</strong></h3>
<p>📌 Consider <strong>Savings Plans, Reserved Instances, or Spot Instances</strong> for long-term savings.<br />📌 Implement <strong>Auto-Scaling &amp; Monitoring</strong> to avoid over-provisioning.</p>
<p><strong>🎯 Conclusion</strong></p>
<p>Migrating <strong>Web Servers &amp; Database Servers</strong> to AWS provides:<br />✅ <strong>85% cost savings</strong> compared to on-prem infrastructure<br />✅ <strong>Scalability &amp; high availability</strong> via AWS Auto-Scaling and RDS Multi-AZ<br />✅ <strong>Reduced IT maintenance</strong> using managed AWS services<br />✅ <strong>Enhanced security &amp; monitoring</strong> with AWS IAM, WAF, and CloudWatch</p>
<p>🚀 <strong>Ready to migrate?</strong> Start by running <strong>AWS Application Discovery Service (ADS)</strong> for automatic migration insights.</p>
<h2 id="heading-references"><strong>📖 References</strong></h2>
<ul>
<li><p><a target="_blank" href="https://calculator.aws/#/"><strong>AWS TCO Calculator</strong></a></p>
</li>
<li><p><a target="_blank" href="https://aws.amazon.com/ec2/pricing/"><strong>AWS EC2 Pricing</strong></a></p>
</li>
<li><p><a target="_blank" href="https://aws.amazon.com/migration-hub/"><strong>AWS Migration Hub</strong></a></p>
</li>
<li><p><a target="_blank" href="https://aws.amazon.com/architecture/well-architected/"><strong>AWS Well-Architected Framework</strong></a></p>
<p>  📌 <a target="_blank" href="https://docs.aws.amazon.com/application-discovery/latest/userguide/what-is-appdiscovery.html"><strong>AWS Application Discovery Service (ADS)</strong></a><br />  📌 <a target="_blank" href="https://aws.amazon.com/dms/"><strong>AWS Database Migration Service (DMS)</strong></a><br />  📌 <a target="_blank" href="https://aws.amazon.com/migration-hub/features/"><strong>AWS Migration Hub</strong></a></p>
</li>
</ul>
<h2 id="heading-phase-2-dathttpsawsamazoncomarchitecturewell-architectedabase-migration-amp-executionhttpsdocsawsamazoncomapplication-discoverylatestuserguidewhat-is-appdiscoveryhtml">Phase 2 - Database Migration &amp; Execution</h2>
<p>With <strong>Phase 1</strong> complete, we now have <strong>full visibility</strong> into our infrastructure, dependencies, and costs. The next step is <strong>migration planning and execution</strong>!</p>
<p>📌 <strong>In Phase 2, we will:</strong><br />✅ Choose the right <strong>AWS services for hosting applications</strong><br />✅ Develop an <strong>automated infrastructure strategy (Terraform, CloudFormation)</strong><br />✅ Optimize our <strong>AWS cost management for long-term savings</strong></p>
<p>💬 <strong>Have questions about AWS migration? Drop a comment below!</strong><br />📢 Follow for more <strong>real-world AWS Cloud Migration</strong> insights! 🚀</p>
<h3 id="heading-why-this-blog"><strong>💡 Why This Blog?</strong></h3>
<p>This guide is <strong>designed for IT professionals, cloud architects, and DevOps engineers</strong> who want to:<br />✅ <strong>Understand AWS migration best practices.</strong><br />✅ <strong>Perform detailed cost comparisons between on-prem and AWS.</strong><br />✅ <strong>Use AWS services effectively to ensure a smooth migration.</strong></p>
<p>Whether you’re planning a <strong>small-scale migration</strong> or a <strong>large enterprise transition</strong>, this blog will <strong>help you navigate each phase efficiently</strong>!</p>
<p>📢 <strong>Stay tuned for real-world AWS migration insights and hands-on tutorials!</strong> 🚀</p>
<p>🔗 <strong>GitHub Repository</strong>: <a target="_blank" href="https://github.com/prafulpatel16/mgn-aws-project01">mgn-aws-project01</a></p>
<h2 id="heading-author"><strong>🧑‍💻 Author</strong></h2>
<p>👨‍💻 Created and maintained by <strong>Praful Patel</strong>.<br />🔗 <a target="_blank" href="https://github.com/prafulpatel16"><strong>GitHub</strong></a> | 🌍 <a target="_blank" href="https://www.praful.cloud/"><strong>Tech Blog</strong></a></p>
]]></content:encoded></item><item><title><![CDATA[AWS Serverless Project: Video Upload and Playback Application]]></title><description><![CDATA[Overview
This project showcases a fully serverless video upload and playback application built using AWS services. The application enables users to upload videos, store them in Amazon S3, and manage metadata in DynamoDB. Videos can then be fetched an...]]></description><link>https://praful.cloud/aws-serverless-project-video-upload-and-playback-application</link><guid isPermaLink="true">https://praful.cloud/aws-serverless-project-video-upload-and-playback-application</guid><category><![CDATA[AWS Serverless S3 DynamoDB Lambda API Gateway Cloud Computing Web Development Frontend Development Backend Development Scalable Applications DevOps Cloud Architecture]]></category><dc:creator><![CDATA[Praful Patel]]></dc:creator><pubDate>Tue, 31 Dec 2024 23:07:13 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1735686224544/1d11aa6f-8214-47f2-8cc3-f0bc74dbc246.gif" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-overview">Overview</h2>
<p>This project showcases a fully serverless video upload and playback application built using AWS services. The application enables users to upload videos, store them in Amazon S3, and manage metadata in DynamoDB. Videos can then be fetched and played seamlessly through a modern web interface. The architecture is scalable, secure, and cost-effective.</p>
<p>👉 <strong>Follow the project on GitHub</strong>: <a target="_blank" href="https://github.com/prafulpatel16/video-app-aws-serverless/blob/master/README.md">video-app-aws-serverless README</a> 🚀✨</p>
<hr />
<h2 id="heading-architecture-diagramhttpsgithubcomprafulpatel16video-app-aws-serverlessblobmasterreadmemd">Architecture Diagram</h2>
<p>See the <a target="_blank" href="https://github.com/prafulpatel16/video-app-aws-serverless/blob/master/README.md">project README</a> for the full architecture diagram.</p>
<hr />
<h2 id="heading-key-featureshttpsgithubcomprafulpatel16video-app-aws-serverlessblobmasterreadmemd">Key Features</h2>
<ol>
<li><p>Scalable serverless architecture.</p>
</li>
<li><p>Upload videos directly from the frontend to S3.</p>
</li>
<li><p>Store video metadata in DynamoDB.</p>
</li>
<li><p>Fetch and play videos via a modern web interface.</p>
</li>
</ol>
<hr />
<h2 id="heading-tech-stack">Tech Stack</h2>
<ul>
<li><p><strong>Frontend</strong>: HTML, CSS, JavaScript</p>
</li>
<li><p><strong>Backend</strong>: AWS Lambda, API Gateway</p>
</li>
<li><p><strong>Database</strong>: DynamoDB</p>
</li>
<li><p><strong>Storage</strong>: S3</p>
</li>
<li><p><strong>Monitoring</strong>: CloudWatch</p>
</li>
</ul>
<hr />
<h2 id="heading-step-by-step-implementation">Step-by-Step Implementation</h2>
<h3 id="heading-1-setting-up-amazon-s3-for-video-storage">1. Setting Up Amazon S3 for Video Storage</h3>
<ol>
<li><p><strong>Create an S3 Bucket</strong>:</p>
<pre><code class="lang-bash"> aws s3 mb s3://video-upload-bucket
</code></pre>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735677371189/9b88a24f-b29c-403a-9d5e-400719a94f2a.png" alt class="image--center mx-auto" /></p>
<ol>
<li><p><strong>Configure CORS</strong>:</p>
<pre><code class="lang-json"> [
     {
         <span class="hljs-attr">"AllowedHeaders"</span>: [<span class="hljs-string">"*"</span>],
         <span class="hljs-attr">"AllowedMethods"</span>: [<span class="hljs-string">"GET"</span>, <span class="hljs-string">"PUT"</span>, <span class="hljs-string">"POST"</span>],
         <span class="hljs-attr">"AllowedOrigins"</span>: [<span class="hljs-string">"*"</span>]
     }
 ]
</code></pre>
</li>
<li><p><strong>Enable Static Website Hosting</strong>:</p>
<ul>
<li><p>Go to the <strong>Properties</strong> tab in the S3 console.</p>
</li>
<li><p>Set <strong>Index Document</strong> to <code>index.html</code>.</p>
</li>
</ul>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735677394849/4e475772-8850-4bfc-ba93-d3b02ed809ef.png" alt class="image--center mx-auto" /></p>
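<p>If you prefer the CLI to the console for the CORS step, the rules can be applied with <code>aws s3api put-bucket-cors</code>. Note that the CLI expects the rules wrapped in a <code>CORSRules</code> key, unlike the bare array the console editor takes. A sketch (assumes the <code>video-upload-bucket</code> name from this walkthrough and configured AWS credentials):</p>

```shell
# Save the CORS rules to a file and validate the JSON locally.
cat > cors.json <<'EOF'
{
  "CORSRules": [
    {
      "AllowedHeaders": ["*"],
      "AllowedMethods": ["GET", "PUT", "POST"],
      "AllowedOrigins": ["*"]
    }
  ]
}
EOF

python3 -m json.tool cors.json > /dev/null && echo "cors.json is valid JSON"

# Apply it (requires AWS credentials; bucket name from this walkthrough):
# aws s3api put-bucket-cors --bucket video-upload-bucket --cors-configuration file://cors.json
```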
<ol>
<li><p><strong>Sync Frontend Files</strong>:</p>
<pre><code class="lang-bash"> aws s3 sync static-web/ s3://video-upload-bucket
</code></pre>
</li>
</ol>
<hr />
<h3 id="heading-2-setting-up-dynamodb-for-metadata-storage">2. Setting Up DynamoDB for Metadata Storage</h3>
<ol>
<li><p><strong>Create a DynamoDB Table</strong>:</p>
<pre><code class="lang-bash"> aws dynamodb create-table \
     --table-name video-metadata \
     --attribute-definitions AttributeName=videoId,AttributeType=S \
     --key-schema AttributeName=videoId,KeyType=HASH \
     --billing-mode PAY_PER_REQUEST
</code></pre>
<hr />
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735677423780/3ad61461-8e54-4ab5-a7d2-dc7dec9d69f2.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735677432723/596036a8-fe7c-4cd8-ba30-b44b2174e6fe.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-3-creating-lambda-functions">3. Creating Lambda Functions</h3>
<h4 id="heading-upload-handler">Upload Handler</h4>
<p>This function uploads videos to S3 and saves metadata in DynamoDB.</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> boto3
<span class="hljs-keyword">import</span> json
<span class="hljs-keyword">import</span> os
<span class="hljs-keyword">import</span> uuid

s3 = boto3.client(<span class="hljs-string">'s3'</span>)
dynamodb = boto3.resource(<span class="hljs-string">'dynamodb'</span>)

BUCKET_NAME = os.environ[<span class="hljs-string">'BUCKET_NAME'</span>]
TABLE_NAME = os.environ[<span class="hljs-string">'TABLE_NAME'</span>]

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">lambda_handler</span>(<span class="hljs-params">event, context</span>):</span>
    <span class="hljs-keyword">try</span>:
        file_content = event[<span class="hljs-string">'body'</span>]
        file_name = <span class="hljs-string">f"<span class="hljs-subst">{uuid.uuid4()}</span>.mp4"</span>

        <span class="hljs-comment"># Upload video to S3</span>
        s3.put_object(Bucket=BUCKET_NAME, Key=file_name, Body=file_content)

        <span class="hljs-comment"># Save metadata to DynamoDB</span>
        table = dynamodb.Table(TABLE_NAME)
        table.put_item(
            Item={
                <span class="hljs-string">'videoId'</span>: file_name,
                <span class="hljs-string">'url'</span>: <span class="hljs-string">f"https://<span class="hljs-subst">{BUCKET_NAME}</span>.s3.amazonaws.com/<span class="hljs-subst">{file_name}</span>"</span>
            }
        )

        <span class="hljs-keyword">return</span> {
            <span class="hljs-string">"statusCode"</span>: <span class="hljs-number">200</span>,
            <span class="hljs-string">"headers"</span>: {<span class="hljs-string">"Access-Control-Allow-Origin"</span>: <span class="hljs-string">"*"</span>},
            <span class="hljs-string">"body"</span>: json.dumps({<span class="hljs-string">"message"</span>: <span class="hljs-string">"File uploaded successfully"</span>})
        }
    <span class="hljs-keyword">except</span> Exception <span class="hljs-keyword">as</span> e:
        <span class="hljs-keyword">return</span> {
            <span class="hljs-string">"statusCode"</span>: <span class="hljs-number">500</span>,
            <span class="hljs-string">"headers"</span>: {<span class="hljs-string">"Access-Control-Allow-Origin"</span>: <span class="hljs-string">"*"</span>},
            <span class="hljs-string">"body"</span>: json.dumps({<span class="hljs-string">"error"</span>: str(e)})
        }
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735677451775/cf2c9a39-8aa3-4306-a13e-6fa997dd81fc.png" alt class="image--center mx-auto" /></p>
<h4 id="heading-fetch-handler">Fetch Handler</h4>
<p>This function retrieves video metadata from DynamoDB.</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> boto3
<span class="hljs-keyword">import</span> json
<span class="hljs-keyword">import</span> os

dynamodb = boto3.resource(<span class="hljs-string">'dynamodb'</span>)
TABLE_NAME = os.environ[<span class="hljs-string">'TABLE_NAME'</span>]

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">lambda_handler</span>(<span class="hljs-params">event, context</span>):</span>
    <span class="hljs-keyword">try</span>:
        table = dynamodb.Table(TABLE_NAME)
        response = table.scan()

        <span class="hljs-keyword">return</span> {
            <span class="hljs-string">"statusCode"</span>: <span class="hljs-number">200</span>,
            <span class="hljs-string">"headers"</span>: {<span class="hljs-string">"Access-Control-Allow-Origin"</span>: <span class="hljs-string">"*"</span>},
            <span class="hljs-string">"body"</span>: json.dumps(response[<span class="hljs-string">'Items'</span>])
        }
    <span class="hljs-keyword">except</span> Exception <span class="hljs-keyword">as</span> e:
        <span class="hljs-keyword">return</span> {
            <span class="hljs-string">"statusCode"</span>: <span class="hljs-number">500</span>,
            <span class="hljs-string">"headers"</span>: {<span class="hljs-string">"Access-Control-Allow-Origin"</span>: <span class="hljs-string">"*"</span>},
            <span class="hljs-string">"body"</span>: json.dumps({<span class="hljs-string">"error"</span>: str(e)})
        }
</code></pre>
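<p>The fetch handler returns a Lambda proxy response, so the frontend must JSON-decode the <code>body</code> field before rendering the video list. A small local illustration (the sample item is hypothetical):</p>

```python
import json

# A response shaped like the fetch handler's return value (sample data is made up).
response = {
    "statusCode": 200,
    "headers": {"Access-Control-Allow-Origin": "*"},
    "body": json.dumps([
        {"videoId": "demo.mp4",
         "url": "https://video-upload-bucket.s3.amazonaws.com/demo.mp4"},
    ]),
}

# What the frontend's fetch() callback effectively does with the payload:
items = json.loads(response["body"])
for item in items:
    print(item["videoId"], "->", item["url"])
```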
<hr />
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735677486956/e9780dd6-9db1-48c6-85ae-8f7d96d571b9.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-4-configuring-api-gateway">4. Configuring API Gateway</h3>
<ol>
<li><p><strong>Create an API</strong>:</p>
<ul>
<li><p>Go to <strong>API Gateway</strong> &gt; <strong>Create API</strong>.</p>
</li>
<li><p>Choose <strong>HTTP API</strong> and name it <code>video-app-api</code>.</p>
</li>
</ul>
</li>
<li><p><strong>Add Routes</strong>:</p>
<ul>
<li><p>POST <code>/upload</code> → <code>video-upload-handler</code>.</p>
</li>
<li><p>GET <code>/fetch</code> → <code>video-fetch-handler</code>.</p>
</li>
</ul>
</li>
<li><p><strong>Enable CORS</strong>:</p>
<ul>
<li><p>Add headers:</p>
<ul>
<li><p><code>Access-Control-Allow-Origin: *</code></p>
</li>
<li><p><code>Access-Control-Allow-Methods: GET, POST, OPTIONS</code></p>
</li>
</ul>
</li>
</ul>
</li>
</ol>
<hr />
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735677517509/b40316ab-f126-4896-81c6-f27c8f9fb404.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735677524745/7b26c63e-1ec7-47f8-ba9a-53b5cdb43026.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-5-hosting-frontend-on-s3">5. Hosting Frontend on S3</h3>
<ol>
<li><p>Sync the frontend files:</p>
<pre><code class="lang-bash"> aws s3 sync static-web/ s3://video-upload-bucket
</code></pre>
</li>
</ol>
<hr />
<h3 id="heading-6-testing-the-application">6. Testing the Application</h3>
<ol>
<li><p>Open the static website URL in your browser.</p>
</li>
<li><p>Upload videos and fetch metadata to test the functionality.</p>
</li>
</ol>
<hr />
<h2 id="heading-monitoring-and-optimization">Monitoring and Optimization</h2>
<ol>
<li><p><strong>Enable CloudWatch Logs</strong> for Lambda functions to monitor errors and performance.</p>
</li>
<li><p>Use <strong>CloudFront</strong> to distribute video content globally for faster playback.</p>
</li>
</ol>
<hr />
<h2 id="heading-challenges-and-solutions">Challenges and Solutions</h2>
<ol>
<li><p><strong>CORS Issues</strong>:</p>
<ul>
<li>Ensure proper headers are configured in API Gateway and Lambda responses.</li>
</ul>
</li>
<li><p><strong>Large File Uploads</strong>:</p>
<ul>
<li>Use multipart uploads for better performance with large files.</li>
</ul>
</li>
</ol>
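<p>For the multipart-upload point, S3 requires parts of at least 5 MiB (except the last) and allows at most 10,000 parts per upload, so the part size must grow with the object. In practice boto3's <code>TransferConfig</code> handles this for you; the sketch below just shows the arithmetic (the 200 MiB example size is illustrative):</p>

```python
# Sketch: pick a part size for an S3 multipart upload. S3 requires parts of
# at least 5 MiB (except the last) and at most 10,000 parts per upload.

MIN_PART_SIZE = 5 * 1024 * 1024   # 5 MiB, S3's minimum part size
MAX_PARTS = 10_000                # S3's per-upload part limit

def part_count(object_size, part_size):
    """Number of parts needed, using ceiling division."""
    return (object_size + part_size - 1) // part_size

def choose_part_size(object_size, part_size=MIN_PART_SIZE):
    """Double the part size until the upload fits within 10,000 parts."""
    while part_count(object_size, part_size) > MAX_PARTS:
        part_size *= 2
    return part_size

size = 200 * 1024 * 1024              # a 200 MiB video, for example
ps = choose_part_size(size)           # 5 MiB is already enough here
```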
<hr />
<h2 id="heading-conclusion">Conclusion</h2>
<p>This AWS serverless project demonstrates how to build a scalable, cost-efficient, and secure video upload and playback system. By leveraging AWS services such as S3, DynamoDB, Lambda, and API Gateway, developers can deliver high-quality user experiences without managing server infrastructure. This architecture is well suited to real-time video applications.</p>
]]></content:encoded></item><item><title><![CDATA[AWS Serverless Project: Praful's resume pdf download serverless Web Application]]></title><description><![CDATA[This project leverages AWS serverless services to provide resume download functionality and a visitor counter for a portfolio website. The frontend includes various JavaScript libraries for animations and interactive components, while the backend use...]]></description><link>https://praful.cloud/aws-serverless-project-prafuls-resume-pdf-download-serverless-web-application</link><guid isPermaLink="true">https://praful.cloud/aws-serverless-project-prafuls-resume-pdf-download-serverless-web-application</guid><category><![CDATA[Web Development AWS S3 Full-Stack Development Portfolio Website DevOps Video Streaming Amazon Web Services Lambda Functions DynamoDB]]></category><dc:creator><![CDATA[Praful Patel]]></dc:creator><pubDate>Wed, 06 Nov 2024 05:35:08 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1730870956571/eab885a1-3720-4f0f-aeab-546573fb0a84.gif" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This project leverages AWS serverless services to provide resume download functionality and a visitor counter for a portfolio website. The frontend includes various JavaScript libraries for animations and interactive components, while the backend uses AWS Lambda and DynamoDB to deliver dynamic functionalities.</p>
<p>GitHub Repo: <a target="_blank" href="https://github.com/prafulpatel16/prafuls-portfolio-webapp">https://github.com/prafulpatel16/prafuls-portfolio-webapp</a></p>
<h3 id="heading-solution-diagram-aws-serverless-architecture"><strong>Solution Diagram: AWS Serverless Architecture</strong></h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730870859019/f0b0d71e-e66c-4e3a-818d-92e54e50bd77.gif" alt class="image--center mx-auto" /></p>
<h3 id="heading-frontend">Frontend:</h3>
<ol>
<li><p><strong>HTML/CSS/JavaScript</strong>: For building the website structure and interactivity.</p>
</li>
<li><p><strong>Bootstrap</strong>: For responsive design and styling.</p>
</li>
<li><p><strong>Google Fonts</strong>: For custom fonts.</p>
</li>
<li><p><strong>JavaScript Libraries</strong>:</p>
<ul>
<li><p><strong>Typed.js</strong>: For typing animation on the website.</p>
</li>
<li><p><strong>PureCounter.js</strong>: For dynamic visitor counter on the frontend.</p>
</li>
<li><p><strong>AOS.js</strong>: For animations on scroll.</p>
</li>
<li><p><strong>Swiper.js</strong>: For testimonial slider functionality.</p>
</li>
</ul>
</li>
</ol>
<h3 id="heading-backend">Backend:</h3>
<ol>
<li><p><strong>AWS Lambda</strong>: For serverless functions that handle resume downloads and visitor counting.</p>
</li>
<li><p><strong>AWS API Gateway</strong>: For exposing API endpoints for the Lambda functions.</p>
</li>
<li><p><strong>AWS DynamoDB</strong>: For storing visitor counts and incrementing them for each visit.</p>
</li>
<li><p><strong>AWS S3</strong>: For storing the resume PDF file that users can download.</p>
</li>
</ol>
<h3 id="heading-cloud-infrastructure-amp-security">Cloud Infrastructure &amp; Security:</h3>
<ol>
<li><p><strong>AWS IAM</strong>: For managing roles and policies to secure Lambda access to S3 and DynamoDB.</p>
</li>
<li><p><strong>AWS CloudWatch</strong>: For logging and monitoring Lambda functions.</p>
</li>
<li><p><strong>AWS WAF (optional)</strong>: For protecting the API Gateway endpoints (not implemented but recommended).</p>
</li>
</ol>
<h3 id="heading-version-control-amp-project-management">Version Control &amp; Project Management:</h3>
<ol>
<li><p><strong>Git</strong>: For version control.</p>
</li>
<li><p><strong>GitHub</strong>: For hosting and collaboration on the project.</p>
</li>
</ol>
<h3 id="heading-scripting">Scripting:</h3>
<ol>
<li><p><strong>Python</strong>: Used in Lambda functions to handle resume downloads and visitor counter logic.</p>
</li>
<li><p><strong>Boto3</strong>: Python SDK to interact with AWS services (S3, DynamoDB, etc.).</p>
</li>
</ol>
<h3 id="heading-deployment-amp-hosting">Deployment &amp; Hosting:</h3>
<ol>
<li><strong>AWS Free Tier</strong>: Keeping costs within the AWS Free Tier by leveraging free-tier limits on Lambda, API Gateway, S3, and DynamoDB.</li>
</ol>
<h2 id="heading-table-of-contents">Table of Contents</h2>
<ol>
<li><p><strong>Introduction</strong></p>
</li>
<li><p><strong>Project Structure</strong></p>
</li>
<li><p><strong>System Design Overview</strong></p>
</li>
<li><p><strong>Infrastructure Setup</strong></p>
<ul>
<li><p>IAM Role Setup</p>
</li>
<li><p>Lambda Function Setup</p>
</li>
<li><p>API Gateway Setup</p>
</li>
</ul>
</li>
<li><p><strong>Resume Download Functionality</strong></p>
<ul>
<li><p>Implementation Overview</p>
</li>
<li><p>S3 Configuration</p>
</li>
<li><p>API Gateway-Lambda Integration for Resume Download</p>
</li>
</ul>
</li>
<li><p><strong>Visitor Counter Functionality</strong></p>
<ul>
<li><p>Implementation Overview</p>
</li>
<li><p>DynamoDB Configuration</p>
</li>
<li><p>API Gateway-Lambda Integration for Visitor Counter</p>
</li>
</ul>
</li>
<li><p><strong>Security Measures</strong></p>
</li>
<li><p><strong>Budget Considerations</strong></p>
</li>
<li><p><strong>Deployment Diagram</strong></p>
</li>
<li><p><strong>Final Testing &amp; Validation</strong></p>
</li>
</ol>
<hr />
<h2 id="heading-1-introduction">1. Introduction</h2>
<p>This project demonstrates how to build a serverless web application with a <strong>Resume Download Functionality</strong> and <strong>Visitor Counter</strong> using AWS services. The primary focus is on leveraging <strong>API Gateway</strong>, <strong>Lambda Functions</strong>, <strong>S3</strong>, <strong>DynamoDB</strong>, and <strong>IAM roles</strong>. This system ensures that the infrastructure operates efficiently within the AWS Free Tier limits and maintains a budget of $10/month for all AWS resources.</p>
<hr />
<h2 id="heading-2-project-structure">2. Project Structure</h2>
<pre><code class="lang-plaintext">/project-root
    ├── /forms
    │   └── insert.php (for contact form)
    ├── /assets
    │   ├── /css (for stylesheets)
    │   ├── /img (for images, including the resume)
    │   └── /js (for JavaScript)
    ├── index.php (Main Website)
    └── README.md (Project Documentation)
</code></pre>
<hr />
<h2 id="heading-3-system-design-overview">3. System Design Overview</h2>
<h3 id="heading-system-architecture">System Architecture</h3>
<p>The architecture consists of:</p>
<ol>
<li><p><strong>Frontend</strong>: A simple HTML/Bootstrap-based web page hosted statically (e.g., on AWS S3, or behind a web server on EC2).</p>
</li>
<li><p><strong>Backend Services</strong>: Lambda functions that handle:</p>
<ul>
<li><p><strong>Resume Download</strong>: Fetching the resume from an S3 bucket and allowing the user to download it.</p>
</li>
<li><p><strong>Visitor Counter</strong>: Logging visits in a DynamoDB table and returning the visitor count.</p>
</li>
</ul>
</li>
</ol>
<h3 id="heading-core-aws-services-used">Core AWS Services Used:</h3>
<ul>
<li><p><strong>S3</strong>: Stores the resume PDF.</p>
</li>
<li><p><strong>Lambda</strong>: Handles the backend logic (fetching the resume and tracking visitor count).</p>
</li>
<li><p><strong>API Gateway</strong>: Exposes API endpoints for Lambda functions.</p>
</li>
<li><p><strong>DynamoDB</strong>: Stores visitor count.</p>
</li>
<li><p><strong>IAM Roles</strong>: Manages access permissions.</p>
</li>
</ul>
<hr />
<h2 id="heading-4-infrastructure-setup">4. Infrastructure Setup</h2>
<h3 id="heading-41-iam-role-setup">4.1 IAM Role Setup</h3>
<h4 id="heading-steps">Steps:</h4>
<ol>
<li><p><strong>Create IAM Role for Lambda Execution:</strong></p>
<ul>
<li><p>Go to IAM in the AWS console.</p>
</li>
<li><p>Click on "Roles" &gt; "Create Role."</p>
</li>
<li><p>Choose "Lambda" as the trusted entity.</p>
</li>
<li><p>Attach the following policies:</p>
<ul>
<li><p><code>AmazonS3ReadOnlyAccess</code>: Allows Lambda to read from the S3 bucket.</p>
</li>
<li><p><code>AmazonDynamoDBFullAccess</code>: Allows Lambda to interact with DynamoDB.</p>
</li>
<li><p><code>AWSLambdaBasicExecutionRole</code>: Grants Lambda access to CloudWatch for logs.</p>
</li>
</ul>
</li>
<li><p>Give the role a meaningful name (e.g., <code>Lambda-Resume-Download-Role</code>).</p>
</li>
</ul>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1728335545911/46028e50-6a31-4812-887b-3142dfa1e6ff.png" alt class="image--center mx-auto" /></p>
<ol>
<li><p><strong>Policy for S3 Access:</strong> Ensure the following policy is attached:</p>
<pre><code class="lang-json"> {
   "Version": "2012-10-17",
   "Statement": [
     {
       "Effect": "Allow",
       "Action": "s3:GetObject",
       "Resource": "arn:aws:s3:::your-bucket-name/*"
     }
   ]
 }
</code></pre>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1728335615024/b6653f8f-564a-4e45-a581-7c6da76ccf27.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-42-lambda-function-setup">4.2 Lambda Function Setup</h3>
<h4 id="heading-steps-to-create-lambda-function-for-resume-download">Steps to Create Lambda Function for Resume Download:</h4>
<ol>
<li><p><strong>Create Lambda Function</strong>:</p>
<ul>
<li><p>Go to AWS Lambda and click "Create Function."</p>
</li>
<li><p>Choose the "Author from scratch" option.</p>
</li>
<li><p>Name the function <code>ResumeDownloadFunction</code>.</p>
</li>
<li><p>Choose a runtime (Python 3.9, since the code below is written in Python).</p>
</li>
<li><p>Set the IAM role to the one you created (<code>Lambda-Resume-Download-Role</code>).</p>
</li>
</ul>
</li>
<li><p><strong>Lambda Code</strong>:</p>
<pre><code class="lang-python"> import json
 import boto3
 import base64
 import os
 from botocore.exceptions import ClientError

 s3 = boto3.client('s3')

 def lambda_handler(event, context):
     bucket_name = os.getenv('S3_BUCKET_NAME')
     resume_key = os.getenv('RESUME_KEY')

     try:
         # Fetch the resume PDF from S3
         response = s3.get_object(Bucket=bucket_name, Key=resume_key)
         pdf_content = response['Body'].read()

         # Return the PDF file as binary stream
         return {
             'statusCode': 200,
             'headers': {
                 'Content-Type': 'application/pdf',
                 'Content-Disposition': 'attachment; filename="Praful_Resume.pdf"',
             },
             'body': base64.b64encode(pdf_content).decode('utf-8'),
             'isBase64Encoded': True  # This must be True for binary files
         }

     except ClientError as e:
         error_code = e.response['Error']['Code']
         return {
             'statusCode': 500,
             'body': json.dumps(f"Error downloading resume: {error_code} - {str(e)}")
         }
     except Exception as e:
         return {
             'statusCode': 500,
             'body': json.dumps(f"Error downloading resume: {str(e)}")
         }
</code></pre>
</li>
<li><p><strong>Set Environment Variables</strong>:</p>
<ul>
<li><p>S3_BUCKET_NAME: Your S3 bucket name.</p>
</li>
<li><p>RESUME_KEY: The key (path) to your resume PDF file in S3.</p>
</li>
</ul>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1728335813247/d6c14a8f-eeb0-425e-baa8-bd4be2fa12db.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1728335842968/9ffade85-29a9-411c-a4ce-11cf21b2953a.png" alt class="image--center mx-auto" /></p>
<h4 id="heading-steps-to-create-lambda-function-for-visitor-counter">Steps to Create Lambda Function for Visitor Counter:</h4>
<ol>
<li><p><strong>Create a second Lambda function</strong> called <code>VisitorCounterFunction</code> using the same steps as above.</p>
</li>
<li><p><strong>Lambda Code</strong>:</p>
<pre><code class="lang-python"> import json
 import boto3
 from decimal import Decimal

 # Initialize DynamoDB resource
 dynamodb = boto3.resource('dynamodb')
 table = dynamodb.Table('visitorCounterTable')

 # Helper function to convert Decimal to int or float
 def decimal_default(obj):
     if isinstance(obj, Decimal):
         return int(obj) if obj % 1 == 0 else float(obj)
     raise TypeError

 def lambda_handler(event, context):
     try:
         # Define the primary key (static 'id' for counting visitors)
         visitor_id = "visitorCount"

         # Update the visitor count in DynamoDB
         response = table.update_item(
             Key={'id': visitor_id},
             UpdateExpression="SET visits = if_not_exists(visits, :start) + :increment",
             ExpressionAttributeValues={
                 ':start': 0,
                 ':increment': 1
             },
             ReturnValues="UPDATED_NEW"
         )

         # Get the updated visitor count
         updated_visits = response['Attributes']['visits']

         # Return response with CORS headers
         return {
             'statusCode': 200,
             'headers': {
                 'Access-Control-Allow-Origin': '*',  # Allow any origin
                 'Access-Control-Allow-Methods': 'GET',  # Allow GET method
                 'Access-Control-Allow-Headers': 'Content-Type',  # Allow Content-Type header
             },
             'body': json.dumps({'visits': updated_visits}, default=decimal_default)
         }

     except Exception as e:
         print(f"Error updating visitor count: {str(e)}")
         return {
             'statusCode': 500,
             'headers': {
                 'Access-Control-Allow-Origin': '*',
                 'Access-Control-Allow-Methods': 'GET',
                 'Access-Control-Allow-Headers': 'Content-Type',
             },
             'body': json.dumps({'message': f'Error: {str(e)}'})
         }
</code></pre>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1728335938367/a9284940-170d-4eb6-892d-2a144e4f56c2.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-43-api-gateway-setup">4.3 API Gateway Setup</h3>
<h4 id="heading-steps-1">Steps:</h4>
<ol>
<li><p><strong>Create a new API</strong> in API Gateway.</p>
<ul>
<li><p>Go to API Gateway in the AWS Console.</p>
</li>
<li><p>Create a REST API.</p>
</li>
<li><p>Name it <code>resumeDownload</code></p>
</li>
</ul>
</li>
<li><p><strong>Create Resource and Method for Resume Download</strong>:</p>
<ul>
<li><p>Create a resource <code>/resume</code>.</p>
</li>
<li><p>Under <code>/resume</code>, create a <strong>GET method</strong>.</p>
</li>
<li><p>Integrate the method with the <code>ResumeDownloadFunction</code> Lambda function.</p>
</li>
</ul>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1728336232112/b7e41e6d-61f2-4c7d-91da-eb8e77bc200e.png" alt class="image--center mx-auto" /></p>
<ol>
<li><p><strong>Create Resource and Method for Visitor Counter</strong>:</p>
<ul>
<li><p>Create a resource <code>/visitCount</code>.</p>
</li>
<li><p>Under <code>/visitCount</code>, create a <strong>GET method</strong>.</p>
</li>
<li><p>Integrate the method with the <code>VisitorCounterFunction</code> Lambda function.</p>
</li>
</ul>
</li>
<li><p><strong>Enable CORS</strong> for both endpoints in API Gateway.</p>
</li>
<li><p><strong>Deploy API</strong>:</p>
<ul>
<li><p>Go to "Deploy API" in API Gateway.</p>
</li>
<li><p>Create a new stage (e.g., <code>dev</code>).</p>
</li>
<li><p>Note down the endpoint URL for both <code>/resume</code> and <code>/visitCount</code>.</p>
</li>
</ul>
</li>
</ol>
<hr />
<h2 id="heading-5-resume-download-functionality">5. Resume Download Functionality</h2>
<h3 id="heading-51-implementation-overview">5.1 Implementation Overview</h3>
<ul>
<li><p>The resume is stored as a PDF in an S3 bucket.</p>
</li>
<li><p>The Lambda function fetches the resume from S3 and sends it to the user.</p>
</li>
<li><p>The API Gateway acts as the entry point, invoking the Lambda function.</p>
</li>
</ul>
<h3 id="heading-52-s3-configuration">5.2 S3 Configuration</h3>
<ol>
<li><p><strong>Create an S3 Bucket</strong>:</p>
<ul>
<li><p>Go to S3 in the AWS Console.</p>
</li>
<li><p>Create a bucket (e.g., <code>resume-bucket</code>).</p>
</li>
<li><p>Upload your resume as a PDF file.</p>
</li>
<li><p>Ensure the file is accessible via the S3 read permissions set in the IAM role.</p>
</li>
</ul>
</li>
</ol>
<h3 id="heading-53-api-gateway-lambda-integration">5.3 API Gateway-Lambda Integration</h3>
<ul>
<li><p>The <code>/resume</code> endpoint calls the Lambda function to fetch the resume.</p>
</li>
<li><p>The Lambda function fetches the resume from S3 and encodes it in base64 format to be sent to the user via API Gateway.</p>
</li>
</ul>
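<p>The base64 round-trip described above can be verified locally. This sketch mirrors the response shape the resume Lambda returns (the PDF bytes here are placeholder data, not a real file):</p>

```python
import base64

# Sketch: the response shape a Lambda returns for a binary file behind
# API Gateway. The gateway decodes the body only when isBase64Encoded is True.

def binary_response(pdf_bytes, filename="Praful_Resume.pdf"):
    return {
        "statusCode": 200,
        "headers": {
            "Content-Type": "application/pdf",
            "Content-Disposition": f'attachment; filename="{filename}"',
        },
        "body": base64.b64encode(pdf_bytes).decode("utf-8"),
        "isBase64Encoded": True,
    }

fake_pdf = b"%PDF-1.4 example bytes"   # placeholder, not a real PDF
resp = binary_response(fake_pdf)
# Round-trip check: decoding the body recovers the original bytes.
assert base64.b64decode(resp["body"]) == fake_pdf
```

<p>If <code>isBase64Encoded</code> is left out (or the API's binary media types are not configured), the browser receives a corrupted PDF, which is one of the most common failure modes for this setup.</p>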
<hr />
<h2 id="heading-6-visitor-counter-functionality">6. Visitor Counter Functionality</h2>
<h3 id="heading-61-implementation-overview">6.1 Implementation Overview</h3>
<ul>
<li><p>A DynamoDB table stores the visitor count.</p>
</li>
<li><p>The Lambda function increments the count each time it is invoked and returns the updated count.</p>
</li>
</ul>
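<p>The increment relies on DynamoDB's <code>if_not_exists</code> so the counter works even before the item exists. The sketch below is only a local model of that update expression (a plain dict stands in for the table item; real DynamoDB performs this as a single atomic server-side operation, which this model does not capture):</p>

```python
# Sketch: a local model of the DynamoDB update expression
#   SET visits = if_not_exists(visits, :start) + :increment
# used by the visitor counter Lambda. A dict stands in for the table item.

def increment_visits(item, start=0, increment=1):
    """Apply if_not_exists(visits, start) + increment to an item dict."""
    item["visits"] = item.get("visits", start) + increment
    return item["visits"]

item = {"id": "visitorCount"}        # first visit: attribute doesn't exist yet
assert increment_visits(item) == 1   # if_not_exists supplies the 0 start value
assert increment_visits(item) == 2   # later visits keep incrementing
```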
<h3 id="heading-62-dynamodb-configuration">6.2 DynamoDB Configuration</h3>
<ol>
<li><p><strong>Create a DynamoDB Table</strong>:</p>
<ul>
<li><p>Go to DynamoDB in the AWS Console.</p>
</li>
<li><p>Create a table called <code>visitorCounterTable</code> (the name referenced by the Lambda code above).</p>
</li>
<li><p>Set <code>id</code> as the partition key.</p>
</li>
<li><p>Optionally prepopulate the table with an item (the <code>if_not_exists</code> update expression creates it on the first visit anyway):</p>
<pre><code class="lang-json">  {
    "id": "visitorCount",
    "visits": 0
  }
</code></pre>
</li>
</ul>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1728336457736/6b504325-1f0b-43b8-b0d3-4586ce38a60a.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1728336504261/a7e15019-e0ca-4fdc-b0ff-8e9ddc27943b.png" alt class="image--center mx-auto" /></p>
<p>API Deployed</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1728336730348/11191f35-b87d-473e-9bf1-abcc26f0bbae.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-63-api-gateway-lambda-integration">6.3 API Gateway-Lambda Integration</h3>
<ul>
<li>The <code>/visitCount</code> endpoint calls the Lambda function, which increments and returns the updated visitor count.</li>
</ul>
<p>Access web application through S3 static webapp</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1728337149318/ed5ed9e0-dfd1-4332-82ea-4d9c31dd1ded.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1728337209553/16f2ceaf-f53f-4a32-aa61-32e262dc28b8.png" alt class="image--center mx-auto" /></p>
<hr />
<h2 id="heading-7-security-measures">7. Security Measures</h2>
<ul>
<li><p><strong>IAM Role</strong>: Ensure that the Lambda function has only the required permissions (S3 read, DynamoDB access).</p>
</li>
<li><p><strong>API Gateway Authorization</strong>: Consider adding an API key or Cognito for access control.</p>
</li>
<li><p><strong>S3 Bucket</strong>: Enable server-side encryption (SSE-S3 or SSE-KMS) for the resume file.</p>
</li>
<li><p><strong>AWS WAF</strong>: Add a Web Application Firewall (WAF) to protect your API endpoints.</p>
</li>
</ul>
<hr />
<h2 id="heading-8-budget-considerations">8. Budget Considerations</h2>
<ul>
<li><p><strong>Lambda</strong>: The AWS Free Tier includes 1M requests and 400,000 GB-seconds of compute time per month, which should be sufficient for low-traffic websites.</p>
</li>
<li><p><strong>S3</strong>: S3 provides 5GB of free storage and 20,000 GET requests/month within the Free Tier.</p>
</li>
<li><p><strong>DynamoDB</strong>: DynamoDB offers 25 RCU/WCU (Read/Write Capacity Units) and 25GB of free storage per month.</p>
</li>
<li><p><strong>API Gateway</strong>: The Free Tier includes 1M REST API calls per month.</p>
</li>
</ul>
<p>By staying within these limits, the overall cost will remain under $10.</p>
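<p>To sanity-check the Lambda figure, monthly compute is just invocations × duration × memory. The traffic numbers below are illustrative assumptions, not measurements from this site:</p>

```python
# Sketch: estimate monthly Lambda compute against the 400,000 GB-second
# free tier. Invocation count, duration, and memory are example values.

FREE_TIER_GB_SECONDS = 400_000

def gb_seconds(invocations, avg_duration_ms, memory_mb):
    """Monthly compute in GB-seconds for one function's traffic."""
    return invocations * (avg_duration_ms / 1000) * (memory_mb / 1024)

# e.g. 100,000 visits/month, 200 ms per call, 128 MB functions:
usage = gb_seconds(100_000, 200, 128)
assert usage == 2500.0                  # 100,000 * 0.2 s * 0.125 GB
assert usage < FREE_TIER_GB_SECONDS     # comfortably inside the free tier
```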
<hr />
<h2 id="heading-9-deployment-diagram">9. Deployment Diagram</h2>
<hr />
<h2 id="heading-10-final-testing-amp-validation">10. Final Testing &amp; Validation</h2>
<ol>
<li><p><strong>Test API Endpoints</strong>:</p>
<ul>
<li><p>Ensure the <code>/resume</code> endpoint returns the PDF correctly.</p>
</li>
<li><p>Ensure the <code>/visitCount</code> endpoint increments and returns the visitor count.</p>
</li>
</ul>
</li>
<li><p><strong>Frontend Integration</strong>:</p>
<ul>
<li><p>Integrate the API URLs with the frontend (index.php).</p>
</li>
<li><p>Test the resume download and visitor counter.</p>
</li>
</ul>
</li>
<li><p><strong>Monitor Logs</strong>:</p>
<ul>
<li>Use CloudWatch to monitor Lambda executions and catch any potential errors.</li>
</ul>
</li>
</ol>
<hr />
<p>This documentation provides a comprehensive guide to setting up a serverless web application using AWS services like Lambda, API Gateway, S3, and DynamoDB. Follow these steps closely to implement the resume download and visitor counter functionalities within a budget of $10/month.</p>
]]></content:encoded></item><item><title><![CDATA[AWS Serverless Project - Order Processing System]]></title><description><![CDATA[GitHub Repo Link: https://github.com/prafulpatel16/aws-order-proccessing-system.git
AWS Serverless offerings

Project Use Case: Real-Time Order Processing System
Architecture Overview:

User Interface (UI): A React frontend hosted on S3 and served vi...]]></description><link>https://praful.cloud/aws-serverless-order-processing-system</link><guid isPermaLink="true">https://praful.cloud/aws-serverless-order-processing-system</guid><category><![CDATA[AWS Terraform S3 Infrastructure as Code (IaC) DevOps AWS S3 Static Website Hosting Node.js Frontend Deployment AWS Terraform Cloud Computing Automation AWS IAM AWS DevOps Serverless Deployment ]]></category><dc:creator><![CDATA[Praful Patel]]></dc:creator><pubDate>Thu, 24 Oct 2024 18:47:48 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1730001710703/7b3339c9-ffe0-450a-8bc6-e14d106c8774.gif" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729625585908/d73ca7bc-2bdb-45a8-901d-e657c99c9422.png?auto=compress,format&amp;format=webp" alt /></p>
<p>GitHub Repo Link: <a target="_blank" href="https://github.com/prafulpatel16/aws-order-proccessing-system.git">https://github.com/prafulpatel16/aws-order-proccessing-system.git</a></p>
<p>AWS Serverless offerings</p>
<p><img src="https://docs.aws.amazon.com/images/whitepapers/latest/optimizing-enterprise-economics-with-serverless/images/serverless-components.png" alt /></p>
<h3 id="heading-project-use-case-real-time-order-processing-system"><strong>Project Use Case: Real-Time Order Processing System</strong></h3>
<h3 id="heading-architecture-overview"><strong>Architecture Overview:</strong></h3>
<ul>
<li><p><strong>User Interface (UI)</strong>: A React frontend hosted on S3 and served via CloudFront.</p>
</li>
<li><p><strong>Backend</strong>: API Gateway, AWS Lambda functions, Step Functions for order processing orchestration, and DynamoDB as the database.</p>
</li>
<li><p><strong>Additional Services</strong>: SNS for notifications, S3 for storing receipts, and CloudWatch for monitoring.</p>
</li>
</ul>
<ol>
<li><p><strong>Frontend (React)</strong>: A simple order form hosted in an S3 bucket.</p>
</li>
<li><p><strong>API Gateway</strong>: To handle order submission requests.</p>
</li>
<li><p><strong>Lambda</strong>: Multiple Lambda functions for each stage of the order processing.</p>
</li>
<li><p><strong>Step Functions</strong>: Orchestration for the order processing workflow.</p>
</li>
<li><p><strong>DynamoDB</strong>: For storing orders and inventory data.</p>
</li>
<li><p><strong>SQS</strong>: Queue to process background tasks like sending notifications and generating receipts.</p>
</li>
<li><p><strong>SNS</strong>: For real-time notifications to users.</p>
</li>
<li><p><strong>S3</strong>: For storing order receipts.</p>
</li>
<li><p><strong>CloudWatch</strong>: For monitoring and error logging.</p>
</li>
</ol>
<h3 id="heading-key-requirements"><strong>Key Requirements:</strong></h3>
<ol>
<li><p><strong>Real-time Order Submission</strong>: Users can place orders through the e-commerce frontend.</p>
</li>
<li><p><strong>Order Validation</strong>: Validate the order, including checking stock availability and payment verification.</p>
</li>
<li><p><strong>Inventory Management</strong>: Deduct inventory once the order is placed.</p>
</li>
<li><p><strong>Payment Processing</strong>: Integrate with third-party payment gateways.</p>
</li>
<li><p><strong>Notification</strong>: Notify users via email when the order is successfully processed.</p>
</li>
<li><p><strong>Store Order Receipts</strong>: Store the order details and generate a receipt to be stored in S3.</p>
</li>
<li><p><strong>Monitoring</strong>: Use CloudWatch to monitor the flow, errors, and execution times.</p>
</li>
</ol>
<h3 id="heading-tech-stack"><strong>Tech Stack:</strong></h3>
<ul>
<li><p><strong>Frontend</strong>: React.js (Hosted in S3 + CloudFront)</p>
</li>
<li><p><strong>API</strong>: API Gateway (to expose REST API)</p>
</li>
<li><p><strong>Logic</strong>: Lambda functions</p>
</li>
<li><p><strong>Orchestration</strong>: Step Functions</p>
</li>
<li><p><strong>Database</strong>: DynamoDB (for order details and inventory management)</p>
</li>
<li><p><strong>Notifications</strong>: SNS</p>
</li>
<li><p><strong>File Storage</strong>: S3 (for storing receipts)</p>
</li>
<li><p><strong>Monitoring</strong>: CloudWatch</p>
</li>
</ul>
<h3 id="heading-step-by-step-implementation"><strong>Step-by-Step Implementation:</strong></h3>
<h4 id="heading-1-frontend-react-api-gateway"><strong>1. Frontend (React + API Gateway):</strong></h4>
<ul>
<li><p>Create a React application for order submission.</p>
</li>
<li><p>Host the React frontend in <strong>S3</strong> with CloudFront for faster access.</p>
</li>
<li><p>The frontend sends an API request to the <strong>API Gateway</strong> to submit the order.</p>
</li>
<li><p>API Gateway triggers a <strong>Lambda function</strong> to start the process.</p>
</li>
</ul>
<h4 id="heading-2-api-gateway-setup"><strong>2. API Gateway Setup:</strong></h4>
<ul>
<li><p>Configure <strong>AWS API Gateway</strong> to expose a REST API with a <code>/place-order</code> endpoint.</p>
</li>
<li><p>This API will trigger an AWS <strong>Lambda</strong> function (<code>OrderPlacementFunction</code>).</p>
</li>
<li><p>The Lambda function will initiate an <strong>AWS Step Functions</strong> workflow.</p>
</li>
</ul>
<h4 id="heading-3-aws-step-functions"><strong>3. AWS Step Functions:</strong></h4>
<ul>
<li><p><strong>Define a Step Function</strong> to manage the order processing workflow.</p>
</li>
<li><p>The workflow consists of multiple states:</p>
<ul>
<li><p><strong>Validate Order</strong>: Check for stock availability using Lambda.</p>
</li>
<li><p><strong>Process Payment</strong>: Trigger payment processing using a Lambda function.</p>
</li>
<li><p><strong>Update Inventory</strong>: Once payment is successful, deduct the inventory.</p>
</li>
<li><p><strong>Send Notification</strong>: Send a confirmation email via SNS.</p>
</li>
<li><p><strong>Generate Receipt</strong>: Store the order receipt in S3 using Lambda.</p>
</li>
</ul>
</li>
</ul>
<h4 id="heading-4-order-validation-lambda"><strong>4. Order Validation Lambda:</strong></h4>
<ul>
<li><p>Create a Lambda function (<code>ValidateOrderFunction</code>) that validates the stock availability by querying <strong>DynamoDB</strong>.</p>
</li>
<li><p>If the item is in stock, the workflow proceeds to payment processing.</p>
</li>
</ul>
<h4 id="heading-5-payment-processing-lambda"><strong>5. Payment Processing Lambda:</strong></h4>
<ul>
<li><p>Lambda function (<code>ProcessPaymentFunction</code>) integrates with a third-party payment service (e.g., Stripe).</p>
</li>
<li><p>After successful payment, update the payment status in DynamoDB.</p>
</li>
</ul>
<h4 id="heading-6-update-inventory-lambda"><strong>6. Update Inventory Lambda:</strong></h4>
<ul>
<li><p>Lambda function (<code>UpdateInventoryFunction</code>) updates the inventory in DynamoDB once the payment is processed.</p>
</li>
<li><p>If inventory update fails, trigger a rollback or handle errors via a defined Step Functions fail state.</p>
</li>
</ul>
<h4 id="heading-7-send-notification-sns"><strong>7. Send Notification (SNS):</strong></h4>
<ul>
<li><p>Create an <strong>SNS topic</strong> to send a notification to the user about the order status.</p>
</li>
<li><p>Lambda function (<code>SendNotificationFunction</code>) triggers SNS to send an email with the order details to the user.</p>
</li>
</ul>
<h4 id="heading-8-generate-and-store-receipt-s3"><strong>8. Generate and Store Receipt (S3):</strong></h4>
<ul>
<li><p>Lambda function (<code>GenerateReceiptFunction</code>) generates a receipt for the order and stores it in an S3 bucket.</p>
</li>
<li><p>A presigned URL is generated for users to download the receipt.</p>
</li>
</ul>
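<p>When generating the presigned URL (e.g., via <code>s3.generate_presigned_url("get_object", Params=..., ExpiresIn=...)</code>), note that SigV4-signed URLs are capped at 7 days. A small guard like the one below (the names and default are illustrative) keeps <code>ExpiresIn</code> valid before the call:</p>

```python
# Sketch: validate the ExpiresIn value before calling
#   s3.generate_presigned_url("get_object", Params=..., ExpiresIn=...)
# SigV4 presigned URLs are capped at 7 days (604,800 seconds).

MAX_PRESIGN_SECONDS = 7 * 24 * 3600   # 604,800 s, the SigV4 limit

def clamp_expiry(seconds, default=3600):
    """Return a valid ExpiresIn, falling back to 1 hour for bad input."""
    if seconds <= 0:
        return default
    return min(seconds, MAX_PRESIGN_SECONDS)

assert clamp_expiry(3600) == 3600
assert clamp_expiry(30 * 24 * 3600) == 604_800   # clamped to 7 days
assert clamp_expiry(0) == 3600                   # invalid input -> default
```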
<h4 id="heading-9-monitoring-and-error-handling"><strong>9. Monitoring and Error Handling:</strong></h4>
<ul>
<li><p>Use <strong>AWS CloudWatch</strong> to track the workflow and log errors.</p>
</li>
<li><p>Step Functions should have proper error handling with retry logic or defined failure states.</p>
</li>
<li><p>CloudWatch metrics and alarms can be set to monitor for errors in the order process.</p>
</li>
</ul>
<h3 id="heading-aws-step-functions-workflow-example"><strong>AWS Step Functions Workflow Example:</strong></h3>
<pre><code class="lang-json">{
  "StartAt": "ValidateOrder",
  "States": {
    "ValidateOrder": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:ValidateOrderFunction",
      "Next": "ProcessPayment",
      "Catch": [
        {
          "ErrorEquals": ["States.TaskFailed"],
          "Next": "FailOrder"
        }
      ]
    },
    "ProcessPayment": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:ProcessPaymentFunction",
      "Next": "UpdateInventory"
    },
    "UpdateInventory": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:UpdateInventoryFunction",
      "Next": "SendNotification"
    },
    "SendNotification": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:SendNotificationFunction",
      "Next": "GenerateReceipt"
    },
    "GenerateReceipt": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:GenerateReceiptFunction",
      "End": true
    },
    "FailOrder": {
      "Type": "Fail",
      "Error": "OrderProcessingFailed",
      "Cause": "An error occurred during order processing."
    }
  }
}
</code></pre>
<h3 id="heading-benefits-of-this-approach"><strong>Benefits of this Approach:</strong></h3>
<ul>
<li><p><strong>Scalable and Serverless</strong>: No server management is needed; the architecture scales automatically with load.</p>
</li>
<li><p><strong>Event-Driven</strong>: AWS Step Functions allow orchestration of Lambda functions in a step-by-step fashion.</p>
</li>
<li><p><strong>Real-Time Notifications</strong>: SNS ensures users are notified instantly once the order is processed.</p>
</li>
<li><p><strong>Cost-Effective</strong>: You only pay for the compute resources (Lambda executions) and API Gateway usage.</p>
</li>
<li><p><strong>Monitoring</strong>: CloudWatch allows real-time monitoring for better insight into performance and errors.</p>
<p><strong>Tech Stack:</strong></p>
<ul>
<li><p><strong>Frontend</strong>: React.js (Hosted in S3 + CloudFront)</p>
</li>
<li><p><strong>API</strong>: API Gateway (to expose REST API)</p>
</li>
<li><p><strong>Logic</strong>: Lambda functions</p>
</li>
<li><p><strong>Orchestration</strong>: Step Functions</p>
</li>
<li><p><strong>Database</strong>: DynamoDB (for order details and inventory management)</p>
</li>
<li><p><strong>Notifications</strong>: SNS</p>
</li>
<li><p><strong>File Storage</strong>: S3 (for storing receipts)</p>
</li>
<li><p><strong>Monitoring</strong>: CloudWatch</p>
</li>
</ul>
</li>
</ul>
<h3 id="heading-project-structure"><strong>Project Structure:</strong></h3>
<pre><code class="lang-plaintext">order-processing-system/
├── frontend/                   # React app for frontend
│   ├── public/
│   ├── src/
│   └── package.json
├── backend/
│   ├── functions/              # Lambda functions
│   │   ├── validateOrder.js
│   │   ├── processPayment.js
│   │   ├── updateInventory.js
│   │   ├── sendNotification.js
│   │   ├── generateReceipt.js
│   └── stepFunctions.json      # Step Function definition
├── infrastructure/             # Infrastructure as Code (CloudFormation/Terraform)
│   ├── api-gateway.yaml
│   ├── dynamodb.yaml
│   ├── s3.yaml
│   ├── sqs.yaml
│   ├── sns.yaml
│   └── step-functions.yaml
└── README.md                   # Project documentation
</code></pre>
<h3 id="heading-project-implementation">Project Implementation:</h3>
<h3 id="heading-dynamodb-setup"><strong>DynamoDB Setup</strong></h3>
<p>To launch an individual <strong>CloudFormation stack</strong> using a CloudFormation template (in your case, <code>dynamodb.yaml</code>) that has been uploaded to S3, follow the steps below:</p>
<h3 id="heading-step-1-upload-the-cloudformation-template-to-s3"><strong>Step 1: Upload the CloudFormation Template to S3</strong></h3>
<p>You mentioned you have already uploaded the <code>dynamodb.yaml</code> file to S3. Here's the command you would use if you haven't:</p>
<pre><code class="lang-plaintext">aws s3 cp dynamodb.yaml s3://your-bucket-name/
</code></pre>
<p>Replace <code>your-bucket-name</code> with the name of your actual S3 bucket.</p>
<h3 id="heading-step-2-launch-the-cloudformation-stack-using-the-template"><strong>Step 2: Launch the CloudFormation Stack Using the Template</strong></h3>
<p>You can launch the CloudFormation stack using the AWS CLI by referencing the template in S3.</p>
<p>Here’s how to launch the stack:</p>
<ol>
<li><strong>Run the following command to create a CloudFormation stack</strong>:</li>
</ol>
<pre><code class="lang-plaintext">aws cloudformation create-stack \
  --stack-name dynamodb-stack \
  --template-url https://s3.amazonaws.com/your-bucket-name/dynamodb.yaml \
  --capabilities CAPABILITY_NAMED_IAM
</code></pre>
<p>Replace <code>your-bucket-name</code> with your S3 bucket name.</p>
<ul>
<li><p>Replace <code>dynamodb.yaml</code> with the path to the CloudFormation template.</p>
</li>
<li><p>The <code>--capabilities CAPABILITY_NAMED_IAM</code> flag allows CloudFormation to create IAM roles or policies if necessary.</p>
</li>
</ul>
<h3 id="heading-explanation-of-command"><strong>Explanation of Command:</strong></h3>
<ul>
<li><p><code>--stack-name dynamodb-stack</code>: The name you want to give the CloudFormation stack. This can be anything meaningful, such as <code>dynamodb-stack</code>.</p>
</li>
<li><p><code>--template-url</code>: The URL of the CloudFormation template stored in your S3 bucket.</p>
</li>
<li><p><code>--capabilities CAPABILITY_NAMED_IAM</code>: This flag is required if your CloudFormation stack creates IAM resources such as roles or policies.</p>
</li>
</ul>
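<p>The same call can be made from Python. Below is a minimal sketch mirroring the CLI flags above; the helper name is illustrative, and the bucket name remains a placeholder exactly as in the CLI example:</p>

```python
def create_stack_params(stack_name: str, template_url: str) -> dict:
    # Mirrors the CLI flags: --stack-name, --template-url, --capabilities.
    return {
        "StackName": stack_name,
        "TemplateURL": template_url,
        "Capabilities": ["CAPABILITY_NAMED_IAM"],  # required if the template creates IAM resources
    }

params = create_stack_params(
    "dynamodb-stack",
    "https://s3.amazonaws.com/your-bucket-name/dynamodb.yaml",
)
# The actual launch would be:
# boto3.client("cloudformation").create_stack(**params)
```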
<h3 id="heading-step-3-monitor-the-stack-creation"><strong>Step 3: Monitor the Stack Creation</strong></h3>
<p>Once the command is executed, you can monitor the progress of your stack creation either through the <strong>AWS Management Console</strong> or using the CLI.</p>
<ul>
<li>To check the status of the stack using the CLI, you can run:</li>
</ul>
<pre><code class="lang-plaintext">aws cloudformation describe-stacks --stack-name dynamodb-stack
</code></pre>
<p>This will give you details about the status of the stack (e.g., <code>CREATE_IN_PROGRESS</code>, <code>CREATE_COMPLETE</code>, etc.).</p>
<h3 id="heading-step-4-verify-stack-creation"><strong>Step 4: Verify Stack Creation</strong></h3>
<ol>
<li><p><strong>AWS Management Console</strong>:</p>
<ul>
<li><p>Go to the <strong>CloudFormation</strong> section in the AWS Management Console.</p>
</li>
<li><p>Find your stack (<code>dynamodb-stack</code>) in the list of stacks and check its status.</p>
</li>
<li><p>You can also look at the <strong>Resources</strong> tab to see the DynamoDB table and other resources created by your stack.</p>
</li>
</ul>
</li>
<li><p><strong>AWS CLI</strong>:</p>
<ul>
<li>You can describe the resources created by your stack using this command:</li>
</ul>
</li>
</ol>
<pre><code class="lang-plaintext">aws cloudformation describe-stack-resources --stack-name dynamodb-stack
</code></pre>
<p>This will list the resources (e.g., DynamoDB tables) that have been created by the stack.</p>
<h3 id="heading-step-5-interact-with-the-created-resources"><strong>Step 5: Interact with the Created Resources</strong></h3>
<p>Once your stack is successfully created, you can interact with the resources (such as the DynamoDB table) using the AWS Management Console or the AWS CLI.</p>
<p>For example, to list the tables in DynamoDB:</p>
<pre><code class="lang-plaintext">aws dynamodb list-tables
</code></pre>
<h3 id="heading-inventory-table-schema"><strong>Inventory Table Schema:</strong></h3>
<ol>
<li><p><strong>Partition Key (Primary Key):</strong></p>
<ul>
<li><strong>productId</strong>: A unique identifier for each product (String type).</li>
</ul>
</li>
<li><p><strong>Attributes:</strong></p>
<ul>
<li><p><strong>stock</strong>: The available stock quantity for the product (Number type).</p>
</li>
<li><p><strong>price</strong>: The price of the product (optional, Number type).</p>
</li>
<li><p><strong>description</strong>: A description of the product (optional, String type).</p>
</li>
</ul>
</li>
</ol>
<h3 id="heading-table-definition"><strong>Table Definition:</strong></h3>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Attribute Name</td><td>Data Type</td><td>Purpose</td></tr>
</thead>
<tbody>
<tr>
<td><code>productId</code></td><td>String</td><td>Primary Key, unique identifier for each product.</td></tr>
<tr>
<td><code>stock</code></td><td>Number</td><td>Available stock quantity.</td></tr>
<tr>
<td><code>price</code></td><td>Number</td><td>Price of the product (optional).</td></tr>
<tr>
<td><code>description</code></td><td>String</td><td>Product description (optional).</td></tr>
</tbody>
</table>
</div><h3 id="heading-example-table-definition-aws-cli"><strong>Example Table Definition (AWS CLI):</strong></h3>
<p>You can create the table using the AWS CLI as follows:</p>
<pre><code class="lang-plaintext">aws dynamodb create-table \
    --table-name Inventory \
    --attribute-definitions AttributeName=productId,AttributeType=S \
    --key-schema AttributeName=productId,KeyType=HASH \
    --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5
</code></pre>
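<p>The boto3 equivalent takes the same shape. A sketch of the keyword arguments (pure data, so it can be inspected without touching AWS; the helper name is illustrative):</p>

```python
def inventory_table_params(table_name: str = "Inventory") -> dict:
    # Same schema as the CLI call: a single string hash key plus provisioned throughput.
    return {
        "TableName": table_name,
        "AttributeDefinitions": [
            {"AttributeName": "productId", "AttributeType": "S"},
        ],
        "KeySchema": [
            {"AttributeName": "productId", "KeyType": "HASH"},
        ],
        "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
    }

# The actual call would be:
# boto3.client("dynamodb").create_table(**inventory_table_params())
```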
<h3 id="heading-example-data"><strong>Example Data:</strong></h3>
<p>Here is some sample data for the Inventory table:</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>productId</td><td>stock</td><td>price</td><td>description</td></tr>
</thead>
<tbody>
<tr>
<td>P001</td><td>100</td><td>19.99</td><td>Red T-shirt</td></tr>
<tr>
<td>P002</td><td>50</td><td>29.99</td><td>Blue Jeans</td></tr>
<tr>
<td>P003</td><td>25</td><td>9.99</td><td>Black Hat</td></tr>
<tr>
<td>P004</td><td>10</td><td>49.99</td><td>Running Shoes</td></tr>
<tr>
<td>P005</td><td>75</td><td>5.99</td><td>Cotton Socks</td></tr>
</tbody>
</table>
</div><h3 id="heading-adding-sample-data-aws-cli"><strong>Adding Sample Data (AWS CLI):</strong></h3>
<p>You can add items to your DynamoDB table using the AWS CLI:</p>
<pre><code class="lang-plaintext">aws dynamodb put-item \
    --table-name Inventory \
    --item '{
        "productId": {"S": "P001"},
        "stock": {"N": "100"},
        "price": {"N": "19.99"},
        "description": {"S": "Red T-shirt"}
    }'

aws dynamodb put-item \
    --table-name Inventory \
    --item '{
        "productId": {"S": "P002"},
        "stock": {"N": "50"},
        "price": {"N": "29.99"},
        "description": {"S": "Blue Jeans"}
    }'
</code></pre>
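<p>The <code>{"S": ...}</code> / <code>{"N": ...}</code> wrappers in the <code>--item</code> JSON are DynamoDB's low-level attribute-value format. A small helper (a sketch that handles only the string and number types this table uses) shows how a plain dict maps onto it:</p>

```python
from decimal import Decimal

def to_dynamodb_item(item: dict) -> dict:
    # Convert a plain dict to the low-level attribute-value format used by
    # `aws dynamodb put-item`: {"S": ...} for strings, {"N": ...} for numbers.
    out = {}
    for key, value in item.items():
        if isinstance(value, str):
            out[key] = {"S": value}
        elif isinstance(value, (int, float, Decimal)):
            out[key] = {"N": str(value)}  # DynamoDB transmits numbers as strings
        else:
            raise TypeError(f"Unsupported type for {key}: {type(value).__name__}")
    return out

item = to_dynamodb_item(
    {"productId": "P001", "stock": 100, "price": Decimal("19.99"), "description": "Red T-shirt"}
)
```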
<h3 id="heading-verifying-data"><strong>Verifying Data:</strong></h3>
<p>To verify the data inserted into your DynamoDB table, you can use the following command:</p>
<pre><code class="lang-plaintext">aws dynamodb scan --table-name Inventory
</code></pre>
<h3 id="heading-example-output"><strong>Example Output:</strong></h3>
<pre><code class="lang-plaintext">{
    "Items": [
        {
            "productId": { "S": "P001" },
            "stock": { "N": "100" },
            "price": { "N": "19.99" },
            "description": { "S": "Red T-shirt" }
        },
        {
            "productId": { "S": "P002" },
            "stock": { "N": "50" },
            "price": { "N": "29.99" },
            "description": { "S": "Blue Jeans" }
        }
    ],
    "Count": 2,
    "ScannedCount": 2
}
</code></pre>
<h3 id="heading-conclusion"><strong>Conclusion:</strong></h3>
<ul>
<li><p><strong>Partition Key</strong>: Use <code>productId</code> as the partition key, which will uniquely identify each product.</p>
</li>
<li><p><strong>Attributes</strong>: Store the available <code>stock</code> as a number, and optionally add <code>price</code> and <code>description</code> attributes for each product.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729099926072/8002e293-e0b3-4a6f-8f4a-ae159ada39c1.png?auto=compress,format&amp;format=webp" alt /></p>
<p>Insert some items into the Inventory table as stock so the order can be validated:</p>
<pre><code class="lang-plaintext">aws dynamodb put-item \
    --table-name Inventory \
    --item '{
        "productId": {"S": "P001"},
        "stock": {"N": "100"},
        "price": {"N": "19.99"},
        "description": {"S": "Red T-shirt"}
    }'
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729100116374/8bd82196-89bd-49ee-8ec4-56e4dbc99ecb.png?auto=compress,format&amp;format=webp" alt /></p>
<hr />
<h1 id="heading-lambda-function"><strong>Lambda function</strong></h1>
<h3 id="heading-phase-1-deploy-api-gateway-to-trigger-lambda"><strong>Phase 1: Deploy API Gateway to Trigger Lambda</strong></h3>
<h4 id="heading-step-11-create-a-lambda-function-orderplacementfunction"><strong>Step 1.1: Create a Lambda Function (OrderPlacementFunction)</strong></h4>
<ol>
<li><p>Go to the <strong>Lambda Console</strong>.</p>
</li>
<li><p>Click <strong>Create Function</strong>.</p>
<ul>
<li><p><strong>Function Name</strong>: <code>OrderPlacementFunction</code></p>
</li>
<li><p><strong>Runtime</strong>: Node.js 14.x (or the runtime of your choice)</p>
</li>
<li><p><strong>Role</strong>: Choose or create an IAM role that allows Lambda to interact with AWS Step Functions.</p>
</li>
</ul>
</li>
<li><p>In the <strong>Lambda code editor</strong>, add code that initiates the AWS Step Functions workflow when an order is placed:</p>
</li>
</ol>
<p><code>orderPlacement.js</code> (Lambda code):</p>
<pre><code class="lang-javascript">const AWS = require('aws-sdk');
const stepFunctions = new AWS.StepFunctions();

exports.handler = async (event) =&gt; {
  const order = JSON.parse(event.body);  // Assuming the order details are in the body

  const params = {
    stateMachineArn: process.env.STEP_FUNCTION_ARN,  // ARN of the Step Functions state machine
    input: JSON.stringify(order),  // Pass the order details to Step Functions
  };

  try {
    const result = await stepFunctions.startExecution(params).promise();
    return {
      statusCode: 200,
      body: JSON.stringify({
        message: 'Order processing started',
        executionArn: result.executionArn,
      }),
    };
  } catch (error) {
    console.error(error);
    return {
      statusCode: 500,
      body: JSON.stringify({ message: 'Failed to start order processing' }),
    };
  }
};
</code></pre>
<ol start="4">
<li><p><strong>Environment Variable</strong>:</p>
<ul>
<li><p>Add an environment variable to store the <strong>Step Functions ARN</strong> (<code>STEP_FUNCTION_ARN</code>).</p>
</li>
<li><p>The value will be set later once the Step Functions workflow is created.</p>
</li>
</ul>
</li>
<li><p><strong>Deploy the Lambda Function</strong>.</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729027618092/9716d353-d265-4d0c-98ff-b9c8f3a32f89.png?auto=compress,format&amp;format=webp" alt /></p>
<p>ENV variable</p>
<p><code>arn:aws:iam::202533534284:role/service-role/StepFunctions-OrderProcessingStateMachine-role-7xpccmy1x</code></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729029256386/a722b6a2-11bd-4a9f-8105-ce45e5944d17.png?auto=compress,format&amp;format=webp" alt /></p>
<hr />
<h1 id="heading-step-functions-workflow"><strong>Step Functions - Workflow</strong></h1>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729027836017/3453db9e-9821-479e-9d8c-8223060b0f2b.png?auto=compress,format&amp;format=webp" alt /></p>
<pre><code class="lang-plaintext">{
  "StartAt": "ValidateOrder",
  "States": {
    "ValidateOrder": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:202533534284:function:validateOrderFunction",
      "Next": "SaveOrderToDatabase",
      "ResultPath": "$.validationOutput"
    },
    "SaveOrderToDatabase": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:202533534284:function:saveOrderFunction",
      "Parameters": {
        "OrderId.$": "$.validationOutput.OrderId",
        "customerEmail.$": "$.validationOutput.customerEmail",
        "productId.$": "$.validationOutput.productId",
        "quantity.$": "$.validationOutput.quantity"
      },
      "Next": "ProcessPayment"
    },
    "ProcessPayment": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:202533534284:function:processPaymentFunction",
      "Next": "UpdateInventory"
    },
    "UpdateInventory": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:202533534284:function:updateInventoryFunction",
      "Next": "SendNotification"
    },
    "SendNotification": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:202533534284:function:sendNotificationFunction",
      "Next": "GenerateReceipt"
    },
    "GenerateReceipt": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:202533534284:function:generateReceiptFunction",
      "End": true
    }
  }
}
</code></pre>
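<p>Before creating the state machine, the definition can be sanity-checked locally. The following is a lightweight sketch, not a full Amazon States Language validator: it only confirms that <code>StartAt</code> and every <code>Next</code> point at declared states and that at least one state terminates:</p>

```python
import json

def check_definition(definition_json: str) -> list:
    # Lightweight local sanity check of a Step Functions definition.
    doc = json.loads(definition_json)
    states = doc["States"]
    problems = []
    if doc.get("StartAt") not in states:
        problems.append("StartAt does not name a declared state")
    terminal = False
    for name, state in states.items():
        nxt = state.get("Next")
        if nxt is not None and nxt not in states:
            problems.append(f"{name}: Next -> undeclared state {nxt}")
        if state.get("End") or state.get("Type") in ("Succeed", "Fail"):
            terminal = True
    if not terminal:
        problems.append("no terminal state (End/Succeed/Fail)")
    return problems
```

<p>Running this over the workflow JSON above before <code>create-state-machine</code> catches the most common copy-paste mistakes (a renamed state left dangling in a <code>Next</code>, or a missing <code>"End": true</code>).</p>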
<hr />
<h1 id="heading-api-gateway"><strong>API Gateway</strong></h1>
<h4 id="heading-step-12-create-api-gateway-to-trigger-lambda"><strong>Step 1.2: Create API Gateway to Trigger Lambda</strong></h4>
<ol>
<li><p>Go to the <strong>API Gateway Console</strong>.</p>
</li>
<li><p>Click <strong>Create API</strong> &gt; <strong>REST API</strong>.</p>
<ul>
<li><p><strong>API Name</strong>: <code>OrderProcessingAPI</code></p>
</li>
<li><p><strong>Description</strong>: API for triggering order placement workflow.</p>
</li>
</ul>
</li>
<li><p><strong>Create a Resource and Method</strong>:</p>
<ul>
<li><p><strong>Resource</strong>: <code>/place-order</code></p>
</li>
<li><p><strong>Method</strong>: POST</p>
</li>
<li><p><strong>Integration Type</strong>: Lambda Function</p>
</li>
<li><p><strong>Lambda Function</strong>: Select <code>OrderPlacementFunction</code>.</p>
</li>
</ul>
</li>
<li><p><strong>Enable CORS</strong>:</p>
<ul>
<li>Enable CORS on the <code>/place-order</code> method to allow cross-origin requests from the frontend.</li>
</ul>
</li>
<li><p><strong>Deploy the API</strong>:</p>
<ul>
<li><p>Go to <strong>Actions</strong> &gt; <strong>Deploy API</strong>.</p>
</li>
<li><p><strong>Stage Name</strong>: <code>dev</code>.</p>
</li>
</ul>
</li>
<li><p>Once deployed, you’ll get a public <strong>API URL</strong> that the frontend can use to place orders.</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729029593051/6fb564c7-abda-4a5e-bfca-9ef0de78dc91.png?auto=compress,format&amp;format=webp" alt /></p>
<p>Create POST Method</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729029683658/12581004-d0ae-4b14-937e-6c13b74132ba.png?auto=compress,format&amp;format=webp" alt /></p>
<p>Enable CORS</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729029737889/3f6859a7-6fa3-4731-95fe-0103447b22dd.png?auto=compress,format&amp;format=webp" alt /></p>
<p>Deploy API - dev stage</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729029807774/9af5520b-208c-49d9-8af7-08e0ded61d98.png?auto=compress,format&amp;format=webp" alt /></p>
<p>Invoke URL</p>
<p><a target="_blank" href="https://88ax43nqed.execute-api.us-east-1.amazonaws.com/dev"><strong>https://88ax43nqed.execute-api.us-east-1.amazonaws.com/dev</strong></a></p>
<p>Go to Frontend static web app code and place the invoke url</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729029958212/71243481-9231-4ce3-9b8e-83f5885c20d7.png?auto=compress,format&amp;format=webp" alt /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729030026207/894bddf0-96d2-45c6-bbce-51059064b4ac.png?auto=compress,format&amp;format=webp" alt /></p>
<p>Redeploy the frontend code to the S3 static web app bucket:</p>
<pre><code class="lang-plaintext">npm run build
</code></pre>
<p><strong>Sync build files to S3</strong>:</p>
<pre><code class="lang-plaintext">aws s3 sync ./build s3://your-bucket-name
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729030738892/a097fe84-5747-404d-ba71-38bf1d9376f5.png?auto=compress,format&amp;format=webp" alt /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729030814016/856ffa22-63ec-4133-ab6e-b84758e454d9.png?auto=compress,format&amp;format=webp" alt /></p>
<p>Go to Static web app URL</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729030873303/92b4c666-dfa0-4181-84e3-880dc6da7228.png?auto=compress,format&amp;format=webp" alt /></p>
<p>Access the React App</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729030937698/de548e3a-e563-4cc9-b21f-0ff203535cbe.png?auto=compress,format&amp;format=webp" alt /></p>
<hr />
<h1 id="heading-s3-static-webapp"><strong>S3 - static webapp</strong></h1>
<h3 id="heading-1-frontend-deployment-on-s3-with-cloudfront-react-app"><strong>1. Frontend Deployment on S3 with CloudFront (React App)</strong></h3>
<h3 id="heading-step-11-build-the-react-application"><strong>Step 1.1: Build the React Application</strong></h3>
<ol>
<li><p><strong>Navigate to the frontend directory</strong> where your React project is located:</p>
<pre><code class="lang-plaintext"> cd /path-to-your-frontend
</code></pre>
<p> Install dependencies (if you haven’t already):</p>
<pre><code class="lang-plaintext"> npm install
</code></pre>
</li>
<li><p><strong>Build the project</strong> for production:</p>
<pre><code class="lang-plaintext"> npm run build
</code></pre>
<p> This will create a <code>build</code> directory with static files optimized for production.</p>
</li>
</ol>
<h3 id="heading-step-12-create-an-s3-bucket-for-static-website-hosting"><strong>Step 1.2: Create an S3 Bucket for Static Website Hosting</strong></h3>
<ol>
<li><p>Go to the <strong>S3 Console</strong> and click on <strong>Create Bucket</strong>.</p>
<ul>
<li><p><strong>Bucket Name</strong>: Choose a globally unique name (e.g., <code>my-frontend-bucket</code>).</p>
</li>
<li><p><strong>Region</strong>: Choose a region close to your users.</p>
</li>
<li><p><strong>Block all public access</strong>: Uncheck this setting to allow public access (since this is a static website).</p>
</li>
</ul>
</li>
<li><p><strong>Enable Static Website Hosting</strong>:</p>
<ul>
<li><p>Go to the <strong>Properties</strong> tab.</p>
</li>
<li><p>Scroll down to <strong>Static website hosting</strong>.</p>
</li>
<li><p>Choose <strong>Enable</strong>.</p>
</li>
<li><p>Enter the <strong>index document</strong> as <code>index.html</code> and <strong>error document</strong> as <code>index.html</code> (for single-page apps).</p>
</li>
</ul>
</li>
<li><p><strong>Upload the Build Files</strong>:</p>
<ul>
<li><p>Click on <strong>Upload</strong>.</p>
</li>
<li><p>Drag and drop the contents of the <code>build</code> folder into the S3 bucket.</p>
</li>
</ul>
</li>
<li><p><strong>Set Object Permissions</strong>:</p>
<ul>
<li>Select the uploaded objects, go to the <strong>Permissions</strong> tab, and ensure they have public-read access for static website hosting.</li>
</ul>
</li>
<li><p><strong>Optional: Set up CloudFront for CDN</strong> (for faster access):</p>
<ul>
<li><p>Go to the <strong>CloudFront Console</strong>.</p>
</li>
<li><p>Create a new <strong>CloudFront Distribution</strong>.</p>
</li>
<li><p>For <strong>Origin Domain</strong>, select your S3 bucket.</p>
</li>
<li><p>Use HTTPS for security.</p>
</li>
</ul>
</li>
</ol>
<h3 id="heading-step-13-add-permissions-for-s3"><strong>Step 1.3: Add Permissions for S3</strong></h3>
<p>To make the bucket public:</p>
<ol>
<li><p>Go to <strong>Bucket Permissions</strong>.</p>
</li>
<li><p>Add a <strong>Bucket Policy</strong>:</p>
<pre><code class="lang-plaintext"> {
   "Version": "2012-10-17",
   "Statement": [
     {
       "Sid": "PublicReadGetObject",
       "Effect": "Allow",
       "Principal": "*",
       "Action": "s3:GetObject",
       "Resource": "arn:aws:s3:::my-frontend-bucket/*"
     }
   ]
 }
</code></pre>
</li>
</ol>
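<p>Since the only part of this policy that changes between projects is the bucket name, it can be rendered from a template. A small sketch:</p>

```python
import json

def public_read_policy(bucket_name: str) -> str:
    # Render the public-read bucket policy shown above for any bucket name.
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "PublicReadGetObject",
                "Effect": "Allow",
                "Principal": "*",
                "Action": "s3:GetObject",
                "Resource": f"arn:aws:s3:::{bucket_name}/*",
            }
        ],
    }
    return json.dumps(policy, indent=2)
```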
<p>Your React frontend is now deployed on <strong>S3</strong>.</p>
<p>You can access it using the S3 website URL or CloudFront distribution URL.</p>
<hr />
<h1 id="heading-challenges-amp-troubleshooting"><strong>Challenges &amp; Troubleshooting</strong></h1>
<p>Error:</p>
<p>POST <a target="_blank" href="https://88ax43nqed.execute-api.us-east-1.amazonaws.com/dev/place-order"><strong>https://88ax43nqed.execute-api.us-east-1.amazonaws.com/dev/place-order</strong></a> 502 (Bad Gateway)</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729031324842/1b110a46-f100-417c-ae54-453afd22ac2d.png?auto=compress,format&amp;format=webp" alt /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729031370369/c2f976d2-4470-4bc3-a3cb-15965c362bd5.png?auto=compress,format&amp;format=webp" alt /></p>
<p>Error</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729034890010/e9eddec3-7b24-494e-885f-0a9a167d8700.png?auto=compress,format&amp;format=webp" alt /></p>
<p>Error</p>
<p>App.js:8 POST <a target="_blank" href="https://88ax43nqed.execute-api.us-east-1.amazonaws.com/dev/place-order"><strong>https://88ax43nqed.execute-api.us-east-1.amazonaws.com/dev/place-order</strong></a> 500 (Internal Server Error)</p>
<p>App.js:16</p>
<pre><code class="lang-plaintext">{message: 'Failed to start order processing', error: "'STATE_MACHINE_ARN'"}
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729043729617/42993d4e-c2d4-4329-b7e0-14ffeb64c454.png?auto=compress,format&amp;format=webp" alt /></p>
<p><strong>Fix:</strong> Updated the ENV variable of STATE_MACHINE_ARN</p>
<p>arn:aws:states:us-east-1:202533534284:stateMachine:OrderProcessingStateMachine</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729044141908/d2bd26fc-4c13-417a-875f-2f1d63913f06.png?auto=compress,format&amp;format=webp" alt /></p>
<p>Retest successful:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729044200389/3fe37275-bcae-4408-8bb1-d909a7656571.png?auto=compress,format&amp;format=webp" alt /></p>
<hr />
<p>Error: the frontend request appears successful, but the product data is not being written to the database.</p>
<p>Let’s investigate the error</p>
<ol>
<li><p>Verify that the <code>A-orderPlacement.py</code> Lambda function is correct, that it is integrated with the Step Functions state machine, and that the environment variable for the state machine ARN is defined</p>
</li>
<li><p>Verify that the Step Functions workflow was triggered successfully</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729099059192/7be50a25-ac17-49be-98e2-32460c374f5d.png?auto=compress,format&amp;format=webp" alt /></p>
<p>Error:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729099476030/e19a530a-8e80-4394-ae81-a52c79401e9a.png?auto=compress,format&amp;format=webp" alt /></p>
<p>Cloudwatch</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729104324134/ab35cf87-90f2-4ffb-aa49-75c7da76b0d8.png?auto=compress,format&amp;format=webp" alt /></p>
<p>Error</p>
<pre><code class="lang-plaintext">2024-10-16T18:48:31.922Z
Error validating order: An error occurred (ValidationException) when calling the GetItem operation: 1 validation error detected: Value ' Inventory' at 'tableName' failed to satisfy constraint: Member must satisfy regular expression pattern: [a-zA-Z0-9_.-]+
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729105234043/b5c4b981-7eef-40e5-8fa8-f844a993be67.png?auto=compress,format&amp;format=webp" alt /></p>
<h3 id="heading-fix"><strong>Fix:</strong></h3>
<ol>
<li><p><strong>Check your environment variable for the DynamoDB table name</strong>: The error shows that the table name is <code>' Inventory'</code>, which suggests that there is an unintended leading space in the value of the <code>INVENTORY_TABLE</code> environment variable.</p>
</li>
<li><p><strong>Ensure that the environment variable is set correctly</strong>: Go to the AWS Lambda console and verify that the <code>INVENTORY_TABLE</code> environment variable is correctly set without any extra spaces.</p>
</li>
</ol>
<h3 id="heading-how-to-correct-the-table-name"><strong>How to Correct the Table Name:</strong></h3>
<ol>
<li><p><strong>Remove the Leading Space</strong>:</p>
<ul>
<li>In the Lambda function configuration, under <strong>Environment Variables</strong>, find the <code>INVENTORY_TABLE</code> variable and remove any leading or trailing spaces.</li>
</ul>
</li>
<li><p><strong>Verify the Code</strong>:</p>
<ul>
<li>Ensure that in the code, you're using the correct environment variable without any modifications that might introduce spaces.</li>
</ul>
</li>
</ol>
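<p>The constraint quoted in the error message can be checked locally, which makes the leading-space failure easy to see. A quick sketch (covering only the character-pattern constraint, not table-name length limits):</p>

```python
import re

# The pattern from the ValidationException message: [a-zA-Z0-9_.-]+
TABLE_NAME_PATTERN = re.compile(r"[a-zA-Z0-9_.-]+")

def is_valid_table_name(name: str) -> bool:
    # fullmatch so that stray characters anywhere in the name fail the check
    return TABLE_NAME_PATTERN.fullmatch(name) is not None
```

<p><code>" Inventory"</code> fails the check while <code>" Inventory".strip()</code> passes, which is exactly why stripping the environment variable resolves the error.</p>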
<p><strong>Error:</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729105588651/9643ce31-cea3-427d-b9ae-dbbfcfd728e2.png?auto=compress,format&amp;format=webp" alt /></p>
<p>Error validating order: '&gt;=' not supported between instances of 'int' and 'str'</p>
<p>Item queried successful from dynamodb but still getting an error</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729105703071/ba1c7c3c-2498-4ede-88d9-a330714cf32a.png?auto=compress,format&amp;format=webp" alt /></p>
<h3 id="heading-fix-1"><strong>Fix:</strong></h3>
<p>You need to explicitly convert the <code>stock</code> value retrieved from DynamoDB to an integer before comparing it with the <code>quantity</code>.</p>
<p>Here’s the updated code:</p>
<pre><code class="lang-python">import json
import os
import boto3

# Initialize the DynamoDB client
dynamodb = boto3.client('dynamodb')

def lambda_handler(event, context):
    try:
        # Log the entire event to inspect the input
        print(f"Received event: {event}")

        # Get productId and quantity from the event
        product_id = event.get('productId')
        quantity = event.get('quantity')

        if not product_id or not quantity:
            raise Exception("Invalid input: productId and quantity are required")

        # Define the parameters for fetching the product information from DynamoDB
        table_name = os.environ['INVENTORY_TABLE'].strip()  # Strip any extra spaces
        params = {
            'TableName': table_name,
            'Key': {
                'productId': {'S': product_id}
            }
        }

        # Get the item from DynamoDB
        result = dynamodb.get_item(**params)

        # Debugging log for DynamoDB response
        print(f"DynamoDB get_item result: {result}")

        # Check if the item exists
        if 'Item' not in result:
            raise Exception(f"Product with productId {product_id} not found")

        # Check if 'stock' exists in the item and is valid
        if 'stock' not in result['Item']:
            raise Exception(f"Stock information missing for productId {product_id}")

        # Convert stock to integer for comparison
        stock = int(result['Item']['stock']['N'])

        print(f"Stock for productId {product_id}: {stock}, Requested quantity: {quantity}")

        # Ensure that quantity is an integer
        if not isinstance(quantity, int):
            quantity = int(quantity)

        # Check if stock is enough
        if stock &gt;= quantity:
            return {
                'status': 'VALID',
                'productId': product_id,
                'quantity': quantity
            }
        else:
            raise Exception('Out of stock')

    except Exception as e:
        print(f"Error validating order: {e}")
        raise Exception('Order validation failed')
</code></pre>
<h3 id="heading-key-changes"><strong>Key Changes:</strong></h3>
<ol>
<li><p><strong>Convert</strong> <code>stock</code> to an integer:</p>
<ul>
<li><code>stock = int(result['Item']['stock']['N'])</code> ensures that the stock value retrieved from DynamoDB is converted from a string to an integer.</li>
</ul>
</li>
<li><p><strong>Ensure</strong> <code>quantity</code> is an integer:</p>
<ul>
<li>Added a check to ensure that the <code>quantity</code> is also an integer. If it's a string, it will be converted to an integer using <code>quantity = int(quantity)</code>.</li>
</ul>
</li>
</ol>
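<p>The root cause in isolation: DynamoDB returns numbers as strings (<code>{"N": "100"}</code>), and Python 3 refuses ordering comparisons between <code>str</code> and <code>int</code>. A minimal sketch of the stock check with the casts applied (the helper name is illustrative):</p>

```python
def has_sufficient_stock(stock_attr: dict, quantity) -> bool:
    # stock_attr is the raw DynamoDB attribute value, e.g. {"N": "100"}.
    # Casting both sides to int avoids the TypeError seen above.
    return int(stock_attr["N"]) >= int(quantity)

# Without the casts, comparing "100" >= 5 raises TypeError in Python 3.
```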
<p>Redeploy the Lambda function and test</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729105976346/3ea819ab-29fd-4f99-984d-c153d602304e.png?auto=compress,format&amp;format=webp" alt /></p>
<p>Test</p>
<p>Error:</p>
<p>Step <code>ValidateOrder</code> passed; <code>SaveOrderToDatabase</code> failed.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729106824720/ef88bacb-c0b3-4ec3-ab6a-6f5b53c24ce9.png?auto=compress,format&amp;format=webp" alt /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729106771963/00c1b97e-2aba-40d6-940d-fd8407a898ef.png?auto=compress,format&amp;format=webp" alt /></p>
<pre><code class="lang-plaintext">{
  "cause": "User: arn:aws:sts::202533534284:assumed-role/StepFunctions-OrderProcessingStateMachine-role-7xpccmy1x/HyBZgMwbOpRHNcXcgWnobLmYTRXnIfkF is not authorized to perform: lambda:InvokeFunction on resource: arn:aws:lambda:us-east-1:202533534284:function:saveOrderFunction because no identity-based policy allows the lambda:InvokeFunction action (Service: Lambda, Status Code: 403, Request ID: ac1acb0b-b064-4e2d-a7d8-c8a6e49709cc)",
  "error": "Lambda.AWSLambdaException"
}
</code></pre>
<p>The error you're encountering is a permissions issue, where the <strong>Step Functions role</strong> does not have permission to invoke the specified Lambda function (<code>saveOrderFunction</code>). The role that Step Functions assumes (<code>StepFunctions-OrderProcessingStateMachine-role-7xpccmy1x</code>) needs the <code>lambda:InvokeFunction</code> permission.</p>
<h3 id="heading-steps-to-fix"><strong>Steps to Fix:</strong></h3>
<ol>
<li><strong>Update the IAM Role for Step Functions</strong>: You need to attach a policy to the IAM role (<code>StepFunctions-OrderProcessingStateMachine-role-7xpccmy1x</code>) that allows it to invoke the Lambda function (<code>saveOrderFunction</code>).</li>
</ol>
<h4 id="heading-option-1-attach-the-policy-via-the-aws-console"><strong>Option 1: Attach the Policy via the AWS Console</strong></h4>
<ol>
<li><p>Go to the <strong>IAM Console</strong> in AWS.</p>
</li>
<li><p>Find the role named <code>StepFunctions-OrderProcessingStateMachine-role-7xpccmy1x</code>.</p>
</li>
<li><p>Click <strong>Attach Policies</strong>.</p>
</li>
<li><p>Create a new inline policy by clicking <strong>Add permissions &gt; Create inline policy</strong>.</p>
</li>
<li><p>In the <strong>JSON</strong> tab, use the following policy to allow <code>lambda:InvokeFunction</code> on the <code>saveOrderFunction</code>:</p>
</li>
</ol>
<pre><code class="lang-plaintext">{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "lambda:InvokeFunction",
            "Resource": "arn:aws:lambda:us-east-1:202533534284:function:saveOrderFunction"
        }
    ]
}
</code></pre>
<ol start="6">
<li><strong>Review</strong> and <strong>Save</strong> the policy.</li>
</ol>
<h4 id="heading-option-2-attach-the-policy-using-the-aws-cli"><strong>Option 2: Attach the Policy Using the AWS CLI</strong></h4>
<p>You can also use the AWS CLI to attach the required permission to the IAM role.</p>
<ol>
<li>Run the following command to attach the inline policy to the Step Functions role:</li>
</ol>
<pre><code class="lang-plaintext">aws iam put-role-policy \
    --role-name StepFunctions-OrderProcessingStateMachine-role-7xpccmy1x \
    --policy-name AllowInvokeLambda \
    --policy-document '{
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": "lambda:InvokeFunction",
                "Resource": "arn:aws:lambda:us-east-1:202533534284:function:saveOrderFunction"
            }
        ]
    }'
</code></pre>
<h3 id="heading-explanation"><strong>Explanation:</strong></h3>
<ul>
<li><p>The <code>Action</code>: <code>"lambda:InvokeFunction"</code> allows the role to invoke the Lambda function.</p>
</li>
<li><p>The <code>Resource</code> specifies the ARN of the Lambda function (<code>saveOrderFunction</code>) that the role should be allowed to invoke.</p>
</li>
</ul>
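<p>If you script this fix, a quick sanity check that the policy document serializes to valid JSON before attaching it can save a failed CLI round trip; a small sketch (the ARN is the one from the error message above):</p>

```python
import json

# Sketch: build and sanity-check the inline policy document before passing
# it as --policy-document to the CLI (ARN taken from the error message).
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "lambda:InvokeFunction",
            "Resource": "arn:aws:lambda:us-east-1:202533534284:function:saveOrderFunction",
        }
    ],
}

document = json.dumps(policy)
# Round-trip to confirm the document parses and grants the expected action.
assert json.loads(document)["Statement"][0]["Action"] == "lambda:InvokeFunction"
print(document)
```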
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729107162880/07861e10-afc7-4605-94f1-6c00aa5823ce.png?auto=compress,format&amp;format=webp" alt /></p>
<p><strong>Error: the output of one step must be passed as input to the next step</strong></p>
<p>The error message <code>"'OrderId' is required but not found in the event or is None"</code> suggests that the <code>OrderId</code> field is either missing or <code>None</code> in the event passed to the <code>saveOrderToDatabase</code> Lambda function. This could mean that the <code>OrderId</code> is not being passed correctly from the previous step in the AWS Step Functions workflow.</p>
<h3 id="heading-steps-to-diagnose-and-fix"><strong>Steps to Diagnose and Fix:</strong></h3>
<ol>
<li><p><strong>Verify the Event in CloudWatch Logs</strong>:</p>
<ul>
<li>Ensure that the event being passed to the <code>saveOrderToDatabase</code> function contains the <code>OrderId</code> field. Add logging to capture the full event, as you are already doing with <code>print(f"Received event: {event}")</code>.</li>
</ul>
</li>
<li><p><strong>Ensure</strong> <code>OrderId</code> is Passed Between Steps:</p>
<ul>
<li><p>If the <code>OrderId</code> is being generated in a previous step, you need to make sure it is being passed correctly between steps in the Step Functions workflow.</p>
</li>
<li><p>In the <strong>Step Functions console</strong>, check the <strong>input</strong> and <strong>output</strong> of each state to ensure that <code>OrderId</code> is included in the output of the previous step and passed into this step.</p>
</li>
</ul>
</li>
</ol>
<h3 id="heading-adjusting-step-functions-configuration"><strong>Adjusting Step Functions Configuration:</strong></h3>
<p>If <code>OrderId</code> is missing because it is not passed correctly between steps, you need to make sure that the state machine's <strong>"InputPath"</strong>, <strong>"ResultPath"</strong>, or <strong>"OutputPath"</strong> is correctly set to pass the <code>OrderId</code> from one step to the next.</p>
<h4 id="heading-example-of-passing-data-between-steps"><strong>Example of Passing Data Between Steps:</strong></h4>
<p>In your Step Function definition, you can use <strong>"Parameters"</strong> to ensure the necessary fields like <code>OrderId</code>, <code>customerEmail</code>, <code>productId</code>, and <code>quantity</code> are passed to the <code>saveOrderToDatabase</code> step.</p>
<pre><code class="lang-plaintext">{
  "StartAt": "ValidateOrder",
  "States": {
    "ValidateOrder": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:validateOrderFunction",
      "Next": "SaveOrderToDatabase",
      "ResultPath": "$.validationOutput"
    },
    "SaveOrderToDatabase": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:saveOrderToDatabaseFunction",
      "Parameters": {
        "OrderId.$": "$.validationOutput.OrderId",
        "customerEmail.$": "$.validationOutput.customerEmail",
        "productId.$": "$.validationOutput.productId",
        "quantity.$": "$.validationOutput.quantity"
      },
      "End": true
    }
  }
}
</code></pre>
<p>In this example:</p>
<ul>
<li><p><code>ResultPath</code> stores the output of <code>ValidateOrder</code> in <code>$.validationOutput</code>.</p>
</li>
<li><p><code>Parameters</code> ensure that the <code>OrderId</code>, <code>customerEmail</code>, <code>productId</code>, and <code>quantity</code> are passed to the <code>SaveOrderToDatabase</code> function from the previous state.</p>
</li>
</ul>
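<p>The <code>"OrderId.$": "$.validationOutput.OrderId"</code> entries select values from the state input by a dotted path. A rough Python illustration of that selection idea (this mimics only the dotted-path lookup, not the real Amazon States Language engine, which supports full JSONPath):</p>

```python
# Illustration only: how a "$.validationOutput.OrderId"-style path picks a
# value out of the state input. Mimics the dotted-path idea, not the real
# Amazon States Language engine.

def select(path, data):
    """Resolve a simple '$.a.b' path against a nested dict."""
    value = data
    for key in path.lstrip('$.').split('.'):
        value = value[key]
    return value

state_input = {"validationOutput": {"OrderId": "1234", "quantity": 2}}
print(select("$.validationOutput.OrderId", state_input))  # 1234
```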
<p>Error:</p>
<p>{ "statusCode": 500, "body": "{\"message\": \"Payment processing failed\", \"error\": \"Missing 'amount' in the event\"}" }</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729118252430/46f35f88-9fdc-476a-95c7-2115c7b680a6.png?auto=compress,format&amp;format=webp" alt /></p>
<h3 id="heading-steps-to-diagnose-and-fix-1"><strong>Steps to Diagnose and Fix:</strong></h3>
<ol>
<li><p><strong>Check the Input to the Payment Lambda Function</strong>:</p>
<ul>
<li><p>Ensure that the event passed to the <strong>Payment Processing</strong> Lambda function contains the required <code>amount</code> field.</p>
</li>
<li><p>You can log the event in the Lambda function to verify the incoming data.</p>
</li>
</ul>
</li>
<li><p><strong>Ensure</strong> <code>amount</code> is Passed Between Steps in Step Functions:</p>
<ul>
<li>Make sure that the <code>amount</code> field is being included in the output of the previous steps and passed correctly to the <strong>Process Payment</strong> step.</li>
</ul>
</li>
</ol>
<h3 id="heading-example-lambda-function-with-logging"><strong>Example Lambda Function with Logging:</strong></h3>
<p>You can log the entire event in the <strong>processPaymentFunction</strong> Lambda function to inspect the incoming event and ensure the <code>amount</code> is present:</p>
<pre><code class="lang-plaintext">import json

def lambda_handler(event, context):
    try:
        # Log the entire event to inspect the input
        print(f"Received event: {event}")

        # Extract the required fields from the event
        amount = event.get('amount')
        payment_method = event.get('paymentMethod')
        order_id = event.get('OrderId')

        # Ensure 'amount', 'paymentMethod', and 'OrderId' are present
        if not amount:
            raise Exception("Missing 'amount' in the event")
        if not payment_method:
            raise Exception("Missing 'paymentMethod' in the event")
        if not order_id:
            raise Exception("Missing 'OrderId' in the event")

        # Simulate payment processing (replace this with actual payment gateway logic)
        if payment_method == 'creditCard':
            print(f"Processing payment for order {order_id}, amount: {amount}")
            return {
                'statusCode': 200,
                'body': json.dumps({
                    'status': 'SUCCESS',
                    'orderId': order_id
                })
            }
        else:
            raise Exception('Payment method not supported')

    except Exception as e:
        print(f"Error processing payment: {e}")
        return {
            'statusCode': 500,
            'body': json.dumps({
                'message': 'Payment processing failed',
                'error': str(e)
            })
        }
</code></pre>
<p>In this example, the function logs the event received to CloudWatch. If <code>amount</code> is missing, it raises an exception with a clear message.</p>
<h3 id="heading-step-functions-configuration"><strong>Step Functions Configuration:</strong></h3>
<p>Ensure that the <code>amount</code> field is being passed from the previous step in the Step Functions workflow to the <strong>Process Payment</strong> step.</p>
<p>If <code>amount</code> is generated or calculated in a previous step, make sure that it is passed as part of the event when transitioning to the <strong>Process Payment</strong> state.</p>
<p>Here’s an example of how to configure the <strong>Process Payment</strong> state in Step Functions to include <code>amount</code> (adjust the JSONPath expressions to match where <code>amount</code> actually sits in your state input):</p>
<pre><code class="lang-plaintext">{
  "ProcessPayment": {
    "Type": "Task",
    "Resource": "arn:aws:lambda:us-east-1:202533534284:function:processPaymentFunction",
    "Parameters": {
      "amount.$": "$.orderDetails.amount",
      "paymentMethod.$": "$.orderDetails.paymentMethod",
      "OrderId.$": "$.orderDetails.OrderId"
    },
    "Next": "UpdateInventory"
  }
}
</code></pre>
<h3 id="heading-key-things-to-check"><strong>Key Things to Check:</strong></h3>
<ul>
<li><p><strong>Check Input and Output in Step Functions</strong>: In the AWS Step Functions console, review the execution history to see what input is passed to each state. Verify that the <code>amount</code> field is present in the input to the <strong>Process Payment</strong> state.</p>
</li>
<li><p><strong>Log the Input in Lambda</strong>: Add the logging in your <strong>processPaymentFunction</strong> Lambda to check what fields are passed and troubleshoot why <code>amount</code> is missing.</p>
</li>
<li><p><strong>Adjust Step Functions Parameters</strong>: If <code>amount</code> is generated earlier in the workflow, make sure it is passed correctly from the previous state to the <strong>Process Payment</strong> state.</p>
</li>
</ul>
<h3 id="heading-example-event"><strong>Example Event:</strong></h3>
<p>Here’s what the event should look like when passed to the <strong>processPaymentFunction</strong> Lambda:</p>
<pre><code class="lang-plaintext">{
  "amount": 100,
  "paymentMethod": "creditCard",
  "OrderId": "1234"
}
</code></pre>
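<p>The three separate missing-field checks in the handler can also be collapsed into a single hypothetical helper that reports everything missing at once; a sketch using the example event above:</p>

```python
# Sketch: a hypothetical require_fields helper that replaces the three
# separate checks in the handler and reports every missing field at once.

def require_fields(event, fields):
    missing = [f for f in fields if not event.get(f)]
    if missing:
        raise ValueError(f"Missing {missing} in the event")
    return event

event = {"amount": 100, "paymentMethod": "creditCard", "OrderId": "1234"}
require_fields(event, ["amount", "paymentMethod", "OrderId"])  # passes
```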
<h3 id="heading-conclusion-1"><strong>Conclusion:</strong></h3>
<ol>
<li><p><strong>Log the Input</strong>: Verify what event is passed to the <strong>Process Payment</strong> Lambda by logging it in CloudWatch.</p>
</li>
<li><p><strong>Check Step Functions</strong>: Ensure that the <code>amount</code> field is passed correctly from the previous step to the payment processing step.</p>
</li>
<li><p><strong>Adjust Parameters</strong>: If necessary, modify the Step Functions state machine definition to ensure that <code>amount</code> is included in the input to the payment step.</p>
</li>
</ol>
<hr />
<h1 id="heading-webapp-test"><strong>WebApp Test</strong></h1>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729136774782/14296af1-9e2f-410a-82a9-67f9d13b156f.png?auto=compress,format&amp;format=webp" alt /></p>
<p>Access the web app from its URL</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729136822861/73f1f094-fabd-4225-9ee9-f71a38684bb0.png?auto=compress,format&amp;format=webp" alt /></p>
<p>Verify that the order was created successfully and that the order receipt was generated and saved to the S3 bucket</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729136899310/96f0cab6-c76a-4d93-886e-3c784584d25c.png?auto=compress,format&amp;format=webp" alt /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729136962852/3cb310f0-aaee-44b4-84e7-a68b26708406.png?auto=compress,format&amp;format=webp" alt /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729137020850/57685ced-d5ea-4125-93fe-2e03f18caf4a.png?auto=compress,format&amp;format=webp" alt /></p>
<p>Verify from the DynamoDB “Order” table that OrderNumber 2903 was saved</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729137333461/bac6c431-ac1e-4a26-a8b2-0cf22ff34ad7.png?auto=compress,format&amp;format=webp" alt /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729137358388/8dccb2f6-2140-4bf9-879f-e5e4f73aa7b9.png?auto=compress,format&amp;format=webp" alt /></p>
<p>Verify from the “Inventory” table that the stock count was reduced</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729137412307/5137680f-2788-4406-917d-02acd39ce022.png?auto=compress,format&amp;format=webp" alt /></p>
<hr />
<h1 id="heading-devops"><strong>DevOps</strong></h1>
<p>To deploy your <strong>Order Processing System</strong> with a DevOps approach, you need to implement a streamlined <strong>CI/CD pipeline</strong> to automate the entire deployment process for both your <strong>frontend</strong> and <strong>backend</strong> components. I'll guide you through each phase of the <strong>DevOps lifecycle</strong> to deploy this project step by step, ensuring automation, scalability, and efficiency.</p>
<p>Here's an outline of the phases:</p>
<h3 id="heading-phase-1-source-code-management-version-control"><strong>Phase 1: Source Code Management (Version Control)</strong></h3>
<h4 id="heading-11-setup-a-git-repository"><strong>1.1 Setup a Git Repository</strong></h4>
<ul>
<li><p>Use <strong>Git</strong> as the version control system for managing your source code.</p>
</li>
<li><p>If you haven’t done so, initialize a Git repository in your project folder and push the code to a <strong>remote Git repository</strong> like <strong>GitHub</strong>, <strong>GitLab</strong>, or <strong>Bitbucket</strong>.</p>
</li>
</ul>
<pre><code class="lang-plaintext">git init
git add .
git commit -m "Initial commit for order-processing-system"
git remote add origin https://github.com/your-repo-url.git
git push -u origin main
</code></pre>
<h3 id="heading-phase-2-continuous-integration-ci"><strong>Phase 2: Continuous Integration (CI)</strong></h3>
<h4 id="heading-21-setup-ci-for-frontend-react-app"><strong>2.1 Setup CI for Frontend (React App)</strong></h4>
<ol>
<li><p><strong>Install Dependencies and Build the Frontend</strong>:</p>
<ul>
<li><p>In your CI pipeline, define steps to install dependencies, run tests (if any), and build the frontend code.</p>
</li>
<li><p>Use a service like <strong>GitHub Actions</strong>, <strong>GitLab CI</strong>, or <strong>Jenkins</strong> to automate this process.</p>
</li>
</ul>
</li>
</ol>
<p><strong>Example for GitHub Actions:</strong></p>
<pre><code class="lang-plaintext">name: Frontend CI Pipeline

on:
  push:
    branches:
      - main

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v2
    - name: Install Node.js
      uses: actions/setup-node@v2
      with:
        node-version: '14'
    - run: npm install
    - run: npm run build
    - name: Upload Build Artifacts
      uses: actions/upload-artifact@v2
      with:
        name: build
        path: build/
</code></pre>
<p><strong>Unit Testing</strong></p>
<p>To test the <code>OrderForm</code> component using <strong>Jest</strong>, follow the steps below. Jest is already included in your project as a dependency, so you can directly create the test files.</p>
<h3 id="heading-step-1-install-any-necessary-dependencies"><strong>Step 1: Install any necessary dependencies</strong></h3>
<p>Since you're using React, <code>@testing-library/react</code> and <code>@testing-library/jest-dom</code> are typically used for testing React components.</p>
<p>You can install these if they aren't already installed:</p>
<pre><code class="lang-plaintext">npm install --save-dev @testing-library/react @testing-library/jest-dom
</code></pre>
<h3 id="heading-step-2-create-a-test-file-for-orderform"><strong>Step 2: Create a test file for</strong> <code>OrderForm</code></h3>
<p>Inside your <code>src</code> folder, create a <code>__tests__</code> folder (if not already created) and add a file called <code>OrderForm.test.js</code>:</p>
<pre><code class="lang-plaintext">mkdir -p src/__tests__
touch src/__tests__/OrderForm.test.js
</code></pre>
<h3 id="heading-step-3-write-unit-tests-for-orderform"><strong>Step 3: Write Unit Tests for</strong> <code>OrderForm</code></h3>
<p>Below is an example test for <code>OrderForm</code>. It mocks the fetch API, tests user interactions, and checks that the order is successfully placed.</p>
<pre><code class="lang-plaintext">import React from 'react';
import { render, screen, fireEvent, waitFor } from '@testing-library/react';
import '@testing-library/jest-dom/extend-expect'; // For better assertion syntax
import OrderForm from '../OrderForm'; // Adjust the path if needed

// Mocking the fetch function
global.fetch = jest.fn(() =&gt;
  Promise.resolve({
    ok: true,
    json: () =&gt; Promise.resolve({ OrderId: '1234' }),
  })
);

describe('OrderForm Component', () =&gt; {
  beforeEach(() =&gt; {
    // Reset fetch mock before each test
    fetch.mockClear();
  });

  test('renders OrderForm and displays the title', () =&gt; {
    render(&lt;OrderForm /&gt;);
    expect(screen.getByText('AWS Serverless Order Management System')).toBeInTheDocument();
  });

  test('submits the form and displays success message with order number', async () =&gt; {
    render(&lt;OrderForm /&gt;);

    // Enter values into form fields
    fireEvent.change(screen.getByLabelText(/Product ID/i), { target: { value: 'P001' } });
    fireEvent.change(screen.getByLabelText(/Quantity/i), { target: { value: '2' } });
    fireEvent.change(screen.getByLabelText(/Customer Email/i), { target: { value: 'test@example.com' } });

    // Click the submit button
    fireEvent.click(screen.getByText(/Place Order/i));

    // Wait for the fetch to complete and the success message to appear
    await waitFor(() =&gt; screen.getByText(/Order placed successfully!/));

    // Assert that success message is displayed
    expect(screen.getByText(/Order placed successfully!/)).toBeInTheDocument();

    // Assert that the order number is displayed
    expect(screen.getByText(/Order Number: 1234/)).toBeInTheDocument();
  });

  test('handles failed order submission', async () =&gt; {
    // Mocking fetch to return an error response
    fetch.mockImplementationOnce(() =&gt;
      Promise.resolve({
        ok: false,
        json: () =&gt; Promise.resolve({ error: 'Failed to place order' }),
      })
    );

    render(&lt;OrderForm /&gt;);

    // Enter values into form fields
    fireEvent.change(screen.getByLabelText(/Product ID/i), { target: { value: 'P001' } });
    fireEvent.change(screen.getByLabelText(/Quantity/i), { target: { value: '2' } });
    fireEvent.change(screen.getByLabelText(/Customer Email/i), { target: { value: 'test@example.com' } });

    // Click the submit button
    fireEvent.click(screen.getByText(/Place Order/i));

    // Wait for the fetch to complete and error message to appear
    await waitFor(() =&gt; screen.getByText(/Failed to place order/));

    // Assert that failure message is displayed
    expect(screen.getByText(/Failed to place order/)).toBeInTheDocument();
  });
});
</code></pre>
<h3 id="heading-step-4-run-the-tests"><strong>Step 4: Run the Tests</strong></h3>
<p>Now, you can run the tests using the following command:</p>
<pre><code class="lang-plaintext">npm test
</code></pre>
<h3 id="heading-breakdown-of-the-tests"><strong>Breakdown of the Tests:</strong></h3>
<ul>
<li><p><strong>Test 1: Renders OrderForm Component</strong></p>
<ul>
<li>This test ensures that the form renders correctly and the title "AWS Serverless Order Management System" is displayed.</li>
</ul>
</li>
<li><p><strong>Test 2: Successfully Places an Order</strong></p>
<ul>
<li>This test simulates filling out the form, submitting it, and checks that the order is placed successfully by asserting that the success message and order number are displayed.</li>
</ul>
</li>
<li><p><strong>Test 3: Handles Failed Order Submission</strong></p>
<ul>
<li>This test mocks a failed response from the API and checks that an error message is displayed when the form submission fails.</li>
</ul>
</li>
</ul>
<h3 id="heading-step-5-check-for-code-coverage-optional"><strong>Step 5: Check for Code Coverage (Optional)</strong></h3>
<p>You can check for code coverage by running Jest with the <code>--coverage</code> flag:</p>
<pre><code class="lang-plaintext">npm test -- --coverage
</code></pre>
<p>This will generate a code coverage report for your tests.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729194316746/b469523d-0bc8-4334-b0e0-064eb1200e50.png?auto=compress,format&amp;format=webp" alt /></p>
<h1 id="heading-frontend-ci"><strong>Frontend CI</strong></h1>
<p>To add the <strong>GitHub Actions</strong> workflow to your local project in <strong>VSCode</strong> and then upload it to GitHub, follow these steps:</p>
<h3 id="heading-step-1-create-the-github-actions-workflow-directory"><strong>Step 1: Create the GitHub Actions Workflow Directory</strong></h3>
<p>In your local project directory (in VSCode):</p>
<ol>
<li><p><strong>Navigate to the root of your project</strong>.</p>
</li>
<li><p><strong>Create a</strong> <code>.github</code> directory inside the root directory:</p>
<ul>
<li>In <strong>VSCode</strong>, right-click in the explorer view and select <strong>New Folder</strong>, then name it <code>.github</code>.</li>
</ul>
</li>
<li><p>Inside the <code>.github</code> folder, create a <strong>workflows</strong> directory:</p>
<ul>
<li>Right-click again inside <code>.github</code> and select <strong>New Folder</strong>, then name it <code>workflows</code>.</li>
</ul>
</li>
</ol>
<p>Your directory structure should look like this:</p>
<pre><code class="lang-plaintext">/your-project
  ├── /src
  ├── /public
  ├── /node_modules
  ├── package.json
  ├── .gitignore
  └── /.github
        └── /workflows
</code></pre>
<h3 id="heading-step-2-add-the-github-actions-workflow-file"><strong>Step 2: Add the GitHub Actions Workflow File</strong></h3>
<ol>
<li><p>Inside the <code>/workflows</code> folder, create a new <strong>YAML</strong> file for the CI pipeline. You can name it something meaningful, such as <code>ci.yml</code> or <code>frontend-ci.yml</code>.</p>
<p> Example:</p>
<ul>
<li>Right-click on the <code>/workflows</code> folder, select <strong>New File</strong>, and name it <code>frontend-ci.yml</code>.</li>
</ul>
</li>
<li><p>Open the <code>frontend-ci.yml</code> file and add the workflow configuration for <strong>Jest tests and frontend build</strong> that we created earlier.</p>
</li>
</ol>
<pre><code class="lang-plaintext">name: Frontend CI Pipeline

on:
  push:
    branches:
      - main


jobs:
  deploy:
    runs-on: ubuntu-latest
    permissions:
      id-token: write  # Required to generate OIDC token
      contents: read   # Required to read repo contents
    steps:

       # Step 1: Checkout the repository
      - name: Checkout Code
        uses: actions/checkout@v3

      # Step 2: Configure AWS Credentials
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::202533534284:role/awsGitHubActionsRole1
          aws-region: us-east-1

       # Step 3: Set up Node.js environment
      - name: Install Node.js
        uses: actions/setup-node@v2
        with:
          node-version: '14'

      # Step 4: Install dependencies
      - name: Install dependencies
        run: npm install
        working-directory: ./frontend

      # Step 5: Build the frontend
      - name: Build frontend
        run: npm run build
        working-directory: ./frontend

      # Step 6: Upload build artifacts
      - name: Upload Build Artifacts
        uses: actions/upload-artifact@v3
        with:
          name: build
          path: frontend/build/

      # Step 7: Deploy to S3
      - name: Deploy to S3
        run: aws s3 sync ./frontend/build s3://ordeprocess-frontend/ --delete
</code></pre>
<h3 id="heading-step-3-commit-the-changes-and-push-to-github"><strong>Step 3: Commit the Changes and Push to GitHub</strong></h3>
<p>Once the <strong>GitHub Actions</strong> workflow is set up locally, you need to commit and push it to your GitHub repository.</p>
<ol>
<li><p><strong>Stage the changes</strong>: In VSCode terminal (or your terminal of choice), run:</p>
<pre><code class="lang-plaintext"> git add .github/workflows/frontend-ci.yml
</code></pre>
</li>
<li><p><strong>Commit the changes</strong>:</p>
<pre><code class="lang-plaintext"> git commit -m "Add CI pipeline for Jest tests and frontend build"
</code></pre>
</li>
<li><p><strong>Push the changes to GitHub</strong>:</p>
<pre><code class="lang-plaintext"> git push origin main
</code></pre>
</li>
</ol>
<h3 id="heading-step-4-verify-the-workflow-on-github"><strong>Step 4: Verify the Workflow on GitHub</strong></h3>
<p>After pushing the changes:</p>
<ol>
<li><p>Go to your <strong>GitHub repository</strong>.</p>
</li>
<li><p>Navigate to the <strong>Actions</strong> tab in the repository.</p>
</li>
<li><p>You should see the <strong>Frontend CI Pipeline</strong> running automatically if a push was made to the <code>main</code> branch.</p>
</li>
</ol>
<h3 id="heading-summary"><strong>Summary:</strong></h3>
<ul>
<li><p>Create a <code>.github/workflows</code> directory in your local project.</p>
</li>
<li><p>Add a <code>frontend-ci.yml</code> file inside that directory with the GitHub Actions configuration.</p>
</li>
<li><p>Commit and push the changes to GitHub.</p>
</li>
<li><p>Verify that the pipeline is running from the <strong>Actions</strong> tab in your GitHub repository.</p>
</li>
</ul>
<p>To store credentials securely for a CI pipeline, you typically use the CI/CD platform’s <strong>secret management</strong> system. Here's how to securely store your AWS credentials on some common CI platforms:</p>
<h3 id="heading-configure-secrets"><strong>Configure Secrets</strong></h3>
<p>In GitHub Actions, secrets can be stored in <strong>GitHub Secrets</strong>. Here's how to do it:</p>
<ol>
<li><p>Go to your <strong>GitHub repository</strong>.</p>
</li>
<li><p>Navigate to <code>Settings</code> &gt; <code>Secrets and variables</code> &gt; <code>Actions</code>.</p>
</li>
<li><p>Click on the <strong>"New repository secret"</strong> button.</p>
</li>
<li><p>Add your AWS credentials as secrets:</p>
<ul>
<li><p><strong>Name:</strong> <code>AWS_ACCESS_KEY_ID</code></p>
</li>
<li><p><strong>Value:</strong> Your AWS access key.</p>
</li>
<li><p>Add another secret for the secret key:</p>
<ul>
<li><p><strong>Name:</strong> <code>AWS_SECRET_ACCESS_KEY</code></p>
</li>
<li><p><strong>Value:</strong> Your AWS secret access key.</p>
</li>
</ul>
</li>
</ul>
</li>
</ol>
<p>Now, in your GitHub Actions workflow file (<code>.github/workflows/&lt;your-workflow&gt;.yml</code>), reference the stored secrets like this:</p>
<pre><code class="lang-plaintext">env:
  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
</code></pre>
<p><strong>Error:</strong></p>
<p>Error: Credentials could not be loaded, please check your action inputs: Could not load credentials from any providers</p>
<p>Error: This request has been automatically failed because it uses a deprecated version of <code>actions/upload-artifact: v2</code>. Learn more: <a target="_blank" href="https://github.blog/changelog/2024-02-13-deprecation-notice-v1-and-v2-of-the-artifact-actions/"><strong>https://github.blog/changelog/2024-02-13-deprecation-notice-v1-and-v2-of-the-artifact-actions/</strong></a></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729199344649/c676b610-e1df-4c31-b60c-a95f96c27822.png?auto=compress,format&amp;format=webp" alt /></p>
<p>Fix:</p>
<p>GitHub has deprecated versions <code>v1</code> and <code>v2</code> of the <code>upload-artifact</code> action. You need to update it to use <code>v3</code>.</p>
<p>Solution: Update <code>upload-artifact</code> to version <code>v3</code></p>
<pre><code class="lang-plaintext"># Step 6: Upload build artifacts (Updated to v3)
      - name: Upload Build Artifacts
        uses: actions/upload-artifact@v3
        with:
          name: build
          path: build/
</code></pre>
<p><strong>Error:</strong></p>
<p><em>Error: Credentials could not be loaded, please check your action inputs: Could not load credentials from any providers</em></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729269579882/6a485d40-e3ca-4ad6-a416-200eb037b342.png?auto=compress,format&amp;format=webp" alt /></p>
<p><strong>Fix: Solution</strong></p>
<p>To integrate GitHub Actions with AWS using OpenID Connect (OIDC), you need to manually add GitHub as an OIDC provider. This involves creating a custom OIDC identity provider for GitHub in your AWS IAM settings.</p>
<p>Here’s how to set it up:</p>
<h3 id="heading-1-create-github-as-an-oidc-provider"><strong>1. Create GitHub as an OIDC Provider:</strong></h3>
<ol>
<li><p>Go to the <strong>IAM console</strong> in the AWS Management Console.</p>
</li>
<li><p>In the left sidebar, click <strong>“Identity providers.”</strong></p>
</li>
<li><p>Click on <strong>“Add provider.”</strong></p>
</li>
<li><p>In the <strong>Provider type</strong>, select <strong>“OpenID Connect (OIDC)”</strong>.</p>
</li>
<li><p>In the <strong>Provider URL</strong>, enter:</p>
<pre><code class="lang-plaintext"> https://token.actions.githubusercontent.com
</code></pre>
</li>
<li><p>For <strong>Audience</strong>, enter:</p>
<pre><code class="lang-plaintext"> sts.amazonaws.com
</code></pre>
<p> Click <strong>“Add provider”</strong> to create the provider.</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729288792955/dea3aeb0-8964-4b9c-9efa-af81f7ab9cff.png?auto=compress,format&amp;format=webp" alt /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729288834916/2b093264-b7d1-487c-8605-5254bdefe118.png?auto=compress,format&amp;format=webp" alt /></p>
<h3 id="heading-2-create-or-update-an-iam-role-1"><strong>2. Create or Update an IAM Role:</strong></h3>
<ol>
<li><p>After adding GitHub as an OIDC provider, navigate to <strong>“Roles”</strong> in the IAM console.</p>
</li>
<li><p>Click <strong>“Create role”</strong>.</p>
</li>
<li><p>Select <strong>“Web identity”</strong> as the trusted entity type.</p>
</li>
<li><p>In the <strong>Identity provider</strong> dropdown, select the GitHub OIDC provider you created (<a target="_blank" href="https://token.actions.githubusercontent.com/"><code>https://token.actions.githubusercontent.com</code></a>).</p>
</li>
<li><p>For <strong>Audience</strong>, select <a target="_blank" href="http://sts.amazonaws.com/"><code>sts.amazonaws.com</code></a>.</p>
</li>
<li><p>Under <strong>"Conditions,"</strong> add the following condition:</p>
<pre><code class="lang-plaintext"> {
   "StringEquals": {
     "token.actions.githubusercontent.com:sub": "repo:&lt;GitHub-Org-or-User&gt;/&lt;Repo-Name&gt;:ref:refs/heads/&lt;Branch-Name&gt;"
   }
 }
</code></pre>
<ul>
<li>Replace <code>&lt;GitHub-Org-or-User&gt;</code>, <code>&lt;Repo-Name&gt;</code>, and <code>&lt;Branch-Name&gt;</code> with your GitHub organization/user, repository name, and branch name.</li>
</ul>
</li>
<li><p>Assign permissions like <strong>AmazonS3FullAccess</strong> or other required policies to allow access to the necessary AWS resources.</p>
</li>
<li><p>Complete the role creation.</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729289020236/9fef1e02-d219-4f8d-b973-553c5f0fbe6d.png?auto=compress,format&amp;format=webp" alt /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729289046705/142c3d84-7c7b-4b40-89dd-715582c72bd4.png?auto=compress,format&amp;format=webp" alt /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729289091318/2492e761-623f-4727-8fb6-a2842f0ad94f.png?auto=compress,format&amp;format=webp" alt /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729289137664/1ed4bc3b-b021-4c78-9328-4b4301dcf0ee.png?auto=compress,format&amp;format=webp" alt /></p>
<h3 id="heading-3-update-your-github-actions-workflow-1"><strong>3. Update Your GitHub Actions Workflow:</strong></h3>
<p>Once the OIDC provider and role are set up, update your GitHub Actions workflow to assume the IAM role:</p>
<pre><code class="lang-plaintext">jobs:
  deploy:
    runs-on: ubuntu-latest
    permissions:
      id-token: write  # Required for OIDC
      contents: read

    steps:
      - name: Checkout Code
        uses: actions/checkout@v3

      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::&lt;ACCOUNT_ID&gt;:role/&lt;Role-Name&gt;
          aws-region: us-east-1

      - name: Deploy to S3
        run: aws s3 sync ./frontend/build s3://&lt;bucket-name&gt;/ --delete
</code></pre>
<h3 id="heading-key-points-1"><strong>Key Points:</strong></h3>
<ul>
<li><p><strong>OIDC provider URL</strong> should be <a target="_blank" href="https://token.actions.githubusercontent.com/"><code>https://token.actions.githubusercontent.com</code></a>.</p>
</li>
<li><p><strong>Audience</strong> should always be set to <a target="_blank" href="http://sts.amazonaws.com/"><code>sts.amazonaws.com</code></a>.</p>
</li>
<li><p>The IAM role should allow the <code>sts:AssumeRoleWithWebIdentity</code> action.</p>
</li>
<li><p>Ensure that the <strong>trust relationship policy</strong> specifies the GitHub repository and branch to limit access.</p>
</li>
</ul>
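<p>Putting the pieces together, the role's resulting trust relationship policy looks roughly like the document below. This is a sketch with placeholder account ID, repository, and branch:</p>

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<ACCOUNT_ID>:oidc-provider/token.actions.githubusercontent.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "token.actions.githubusercontent.com:aud": "sts.amazonaws.com",
          "token.actions.githubusercontent.com:sub": "repo:<GitHub-Org-or-User>/<Repo-Name>:ref:refs/heads/<Branch-Name>"
        }
      }
    }
  ]
}
```

<p>The <code>sub</code> condition is what restricts the role to a single repository and branch, as the key points above note.</p>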
<p><strong>Test the GitHub Actions:</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729289474265/601a310e-eca6-4e05-b554-3131a1388310.png?auto=compress,format&amp;format=webp" alt /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729289557409/e76560eb-9d5a-4627-84dd-a767a8e53a17.png?auto=compress,format&amp;format=webp" alt /></p>
<h3 id="heading-web-app"><strong>Web app</strong></h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729625889266/6f46a43e-e265-4ced-93a2-2fe85caeb9c8.png?auto=compress,format&amp;format=webp" alt /></p>
]]></content:encoded></item><item><title><![CDATA[Deploying a Real-Time Chat App on AWS Lightsail Containers | Next.js & WebSocket Backend]]></title><description><![CDATA[GitHub Repo: https://github.com/prafulpatel16/realtime-chatapp-test.git
Video Link: https://youtu.be/kVH8n5w6Gkw
https://youtu.be/LlCu7K45cBI
Real-Time Use Case Solution: Real-Time Chat Application for Customer Support
Solution Overview:
To solve th...]]></description><link>https://praful.cloud/deploying-a-real-time-chat-app-on-aws-lightsail-containers-nextjs-websocket-backend</link><guid isPermaLink="true">https://praful.cloud/deploying-a-real-time-chat-app-on-aws-lightsail-containers-nextjs-websocket-backend</guid><category><![CDATA[#AWS #Lightsail #Nextjs #WebSocket #RealTimeChatApp #Docker #CloudDeployment #Frontend #Backend #WebDevelopment]]></category><dc:creator><![CDATA[Praful Patel]]></dc:creator><pubDate>Sat, 28 Sep 2024 22:24:29 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1727560857809/2249c6e4-8513-4491-9807-784efcf0a0a0.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>GitHub Repo: <a target="_blank" href="https://github.com/prafulpatel16/realtime-chatapp-test.git">https://github.com/prafulpatel16/realtime-chatapp-test.git</a></p>
<p>Video Link: <a target="_blank" href="https://youtu.be/kVH8n5w6Gkw">https://youtu.be/kVH8n5w6Gkw</a></p>
<p><a target="_blank" href="https://youtu.be/LlCu7K45cBI">https://youtu.be/LlCu7K45cBI</a></p>
<h3 id="heading-real-time-use-case-solution-real-time-chat-application-for-customer-support">Real-Time Use Case Solution: Real-Time Chat Application for Customer Support</h3>
<h4 id="heading-solution-overview"><strong>Solution Overview:</strong></h4>
<p>To solve this problem, the company deploys a <strong>real-time chat application</strong> that integrates directly into their e-commerce website. The application consists of:</p>
<ul>
<li><p><strong>Next.js Frontend</strong> for the chat interface, allowing customers to engage with support agents or AI-driven chatbots.</p>
</li>
<li><p><strong>WebSocket Backend</strong> to handle bi-directional, real-time communication between customers and the support team.</p>
</li>
<li><p><strong>Dockerized Containers</strong> deployed on <strong>AWS Lightsail</strong> to ensure scalability and performance.</p>
</li>
</ul>
<h3 id="heading-architecture-and-workflow"><strong>Architecture and Workflow</strong></h3>
<h4 id="heading-components"><strong>Components:</strong></h4>
<ol>
<li><p><strong>Frontend (Next.js)</strong>: A user-friendly chat widget embedded in the e-commerce site where customers can enter their inquiries. It allows real-time messaging with the support backend.</p>
</li>
<li><p><strong>Backend (WebSocket Server)</strong>: Handles real-time communication between the frontend chat widget and either support agents or AI bots. All communication is managed through WebSocket connections, ensuring real-time message delivery without delays.</p>
</li>
<li><p><strong>Docker Containers on AWS Lightsail</strong>: The frontend and backend are deployed as separate Docker containers on AWS Lightsail, ensuring high availability, security, and cost-effectiveness.</p>
</li>
</ol>
<h4 id="heading-real-time-workflow"><strong>Real-Time Workflow:</strong></h4>
<ol>
<li><p><strong>Customer Initiates Chat</strong>: A customer browsing the e-commerce site initiates a chat session from the embedded chat widget.</p>
</li>
<li><p><strong>Connection Established</strong>: The Next.js frontend establishes a WebSocket connection to the backend running on AWS Lightsail.</p>
</li>
<li><p><strong>Real-Time Messaging</strong>: The customer’s queries are sent to the backend in real-time and are either directed to a human support agent or handled by an AI-driven chatbot.</p>
</li>
<li><p><strong>Instant Responses</strong>: The customer receives an instant response, reducing wait time from hours (via email) to seconds. Support agents can handle multiple conversations simultaneously, further improving efficiency.</p>
</li>
<li><p><strong>Scalability</strong>: During peak times, AWS Lightsail scales the container services to handle additional chat sessions without downtime.</p>
</li>
</ol>
<hr />
<h3 id="heading-key-features-and-benefits"><strong>Key Features and Benefits</strong></h3>
<ol>
<li><p><strong>Instant Response Time</strong>: Real-time communication ensures that customer queries are answered immediately, leading to higher customer satisfaction.</p>
</li>
<li><p><strong>Scalable Solution</strong>: The system can scale to support thousands of concurrent users, especially during peak hours or events, using the elasticity of AWS Lightsail.</p>
</li>
<li><p><strong>Seamless User Experience</strong>: Customers can communicate with agents without leaving the website, providing a seamless experience that keeps them engaged and more likely to complete purchases.</p>
</li>
<li><p><strong>Cost Efficiency</strong>: By deploying in lightweight Docker containers on AWS Lightsail, the company minimizes infrastructure costs while maintaining high performance.</p>
</li>
<li><p><strong>Operational Efficiency</strong>: The chat system enables support agents to handle multiple customers in real time, reducing the need for hiring more agents.</p>
</li>
</ol>
<hr />
<h3 id="heading-technical-stack"><strong>Technical Stack</strong></h3>
<ul>
<li><p><strong>Frontend</strong>: Next.js for chat UI</p>
</li>
<li><p><strong>Backend</strong>: WebSocket server in Node.js</p>
</li>
<li><p><strong>Containerization</strong>: Docker</p>
</li>
<li><p><strong>Cloud Platform</strong>: AWS Lightsail for cost-effective container deployment</p>
</li>
<li><p><strong>Scalability</strong>: Autoscaling support during high-traffic periods</p>
</li>
<li><p><strong>Security</strong>: SSL/TLS configuration for secure WebSocket communication</p>
</li>
</ul>
<p>The real-time chat allows customers to receive instant answers to their queries, reducing wait times significantly and increasing customer satisfaction. The system is designed to scale dynamically during peak times, ensuring smooth operations during high-traffic periods such as sales.</p>
<h3 id="heading-step-1-initialize-the-project">Step 1: Initialize the Project</h3>
<p>Create a new directory for the chat application and initialize both Next.js and the WebSocket server.</p>
<pre><code class="lang-bash">mkdir realtime-chat-app
cd realtime-chat-app
</code></pre>
<h4 id="heading-11-nextjs-frontend-setup">1.1. Next.js Frontend Setup</h4>
<p>First, create the Next.js frontend:</p>
<pre><code class="lang-bash">npx create-next-app@latest frontend
</code></pre>
<p>You can go through the prompts or use flags to set up the defaults:</p>
<pre><code class="lang-bash">npx create-next-app@latest frontend --ts --eslint --src-dir --tailwind
</code></pre>
<p>This command sets up a TypeScript Next.js project with ESLint, a <code>src</code> directory structure, and TailwindCSS for styling (optional, but highly recommended).</p>
<h4 id="heading-12-websocket-backend-setup">1.2. WebSocket Backend Setup</h4>
<p>Next, set up the backend in a <code>backend</code> folder, which will host the WebSocket server.</p>
<pre><code class="lang-bash">mkdir backend
cd backend
npm init -y
npm install ws express cors dotenv
</code></pre>
<p>The <code>ws</code> package provides the WebSocket implementation, <code>express</code> runs the HTTP server, and <code>dotenv</code> loads environment variables.</p>
<h3 id="heading-step-2-directory-structure">Step 2: Directory Structure</h3>
<p>Here is the full directory structure for the app:</p>
<pre><code class="lang-plaintext">realtime-chat-app/
│
├── backend/
│   ├── server.js          # WebSocket backend server
│   ├── package.json       # Node.js dependencies
│   └── .env               # Environment variables for WebSocket server
│
├── frontend/
│   ├── public/            # Static files
│   ├── src/               # Application logic
│   │   ├── components/    # React components for chat UI
│   │   ├── pages/         # Next.js pages
│   │   ├── utils/         # WebSocket connection utility
│   ├── package.json       # Next.js dependencies
│   ├── tailwind.config.js # TailwindCSS configuration (optional)
│   └── .env.local         # Frontend environment variables
└── README.md              # Project documentation
</code></pre>
<h3 id="heading-step-3-backend-websocket-server-setup">Step 3: Backend WebSocket Server Setup</h3>
<p>Inside the <code>backend</code> folder, create a file <code>server.js</code>:</p>
<pre><code class="lang-javascript">// backend/server.js
const express = require('express');
const { Server } = require('ws');
const cors = require('cors');
require('dotenv').config();

const app = express();
app.use(cors());

const PORT = process.env.PORT || 8080;

// HTTP server that also carries the WebSocket traffic on the same port
const server = app.listen(PORT, () =&gt; {
  console.log(`WebSocket server running on port ${PORT}`);
});

// WebSocket server
const wss = new Server({ server });

wss.on('connection', (ws) =&gt; {
  console.log('Client connected');

  ws.on('message', (message) =&gt; {
    console.log(`Received: ${message}`);

    // Broadcast the message to all connected clients. `message` arrives
    // as a Buffer, so convert it to a string before sending; otherwise
    // browsers receive a binary Blob instead of text.
    wss.clients.forEach((client) =&gt; {
      if (client.readyState === ws.OPEN) {
        client.send(message.toString());
      }
    });
  });

  ws.on('close', () =&gt; {
    console.log('Client disconnected');
  });
});
</code></pre>
<p>Create a <code>.env</code> file for the backend:</p>
<pre><code class="lang-plaintext"># backend/.env
PORT=8080
</code></pre>
<h3 id="heading-step-4-nextjs-frontend-setup">Step 4: Next.js Frontend Setup</h3>
<p>Go to the <code>frontend</code> directory and update <code>src/pages/index.tsx</code> to use WebSocket.</p>
<p>Here’s how the basic chat interface will look:</p>
<pre><code class="lang-tsx">// frontend/src/pages/index.tsx
import { useState, useEffect } from 'react';

const ChatPage = () =&gt; {
  const [messages, setMessages] = useState&lt;string[]&gt;([]);
  const [input, setInput] = useState('');
  const [ws, setWs] = useState&lt;WebSocket | null&gt;(null);

  useEffect(() =&gt; {
    const socket = new WebSocket(process.env.NEXT_PUBLIC_WS_URL as string);
    setWs(socket);

    socket.onmessage = (event) =&gt; {
      const newMessage = event.data;
      setMessages((prevMessages) =&gt; [...prevMessages, newMessage]);
    };

    socket.onclose = () =&gt; {
      console.log('WebSocket connection closed');
    };

    return () =&gt; {
      socket.close();
    };
  }, []);

  const sendMessage = () =&gt; {
    if (ws &amp;&amp; input.trim()) {
      ws.send(input);
      setInput('');
    }
  };

  return (
    &lt;div className="container mx-auto p-4"&gt;
      &lt;h1 className="text-2xl font-bold"&gt;Chat Room&lt;/h1&gt;
      &lt;div className="border rounded p-2 h-64 overflow-y-scroll"&gt;
        {messages.map((message, index) =&gt; (
          &lt;div key={index} className="py-1"&gt;{message}&lt;/div&gt;
        ))}
      &lt;/div&gt;
      &lt;input
        type="text"
        value={input}
        onChange={(e) =&gt; setInput(e.target.value)}
        className="border p-2 rounded w-full my-2"
        placeholder="Type a message..."
      /&gt;
      &lt;button
        onClick={sendMessage}
        className="bg-blue-500 text-white p-2 rounded"
      &gt;
        Send
      &lt;/button&gt;
    &lt;/div&gt;
  );
};

export default ChatPage;
</code></pre>
<p>This <code>ChatPage</code> component listens for WebSocket messages and renders them in a chat box. Users can type a message and send it, and it is broadcast to all connected clients via the WebSocket server.</p>
<h3 id="heading-step-5-environment-variables">Step 5: Environment Variables</h3>
<p>In the <code>frontend</code> folder, create a <code>.env.local</code> file:</p>
<pre><code class="lang-plaintext">NEXT_PUBLIC_WS_URL=ws://localhost:8080
</code></pre>
<h3 id="heading-step-6-tailwind-css-setup-optional">Step 6: Tailwind CSS Setup (Optional)</h3>
<p>If you chose to use Tailwind, add it to the frontend project:</p>
<pre><code class="lang-bash">npm install -D tailwindcss postcss autoprefixer
npx tailwindcss init
</code></pre>
<p>Update <code>tailwind.config.js</code>:</p>
<pre><code class="lang-javascript">// frontend/tailwind.config.js
module.exports = {
  content: ['./src/**/*.{js,ts,jsx,tsx}'],
  theme: {
    extend: {},
  },
  plugins: [],
};
</code></pre>
<p>Then, add the following to your <code>src/styles/globals.css</code>:</p>
<pre><code class="lang-css">@tailwind base;
@tailwind components;
@tailwind utilities;
</code></pre>
<h3 id="heading-step-7-run-the-application">Step 7: Run the Application</h3>
<h4 id="heading-71-start-websocket-backend">7.1. Start WebSocket Backend</h4>
<pre><code class="lang-bash">cd backend
npm start
</code></pre>
<h4 id="heading-72-start-nextjs-frontend">7.2. Start Next.js Frontend</h4>
<pre><code class="lang-bash">cd frontend
npm run dev
</code></pre>
<p>The frontend will be available at <code>http://localhost:3000</code> and the WebSocket server at <code>ws://localhost:8080</code>.</p>
<h3 id="heading-step-8-testing-routes-and-final-app">Step 8: Testing Routes and Final App</h3>
<p>When you visit <code>http://localhost:3000</code>, you should see a chat interface. Messages sent will appear for all connected clients, thanks to the WebSocket setup.</p>
<h3 id="heading-final-routes">Final Routes</h3>
<ol>
<li><p><strong>Frontend Route</strong>: <code>/</code> (chat interface)</p>
</li>
<li><p><strong>WebSocket Backend Route</strong>: <code>ws://localhost:8080</code> (WebSocket connection endpoint)</p>
</li>
</ol>
<p>With these steps, you have a fully working chat application with a real-time WebSocket backend and Next.js frontend.</p>
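<p>Once the backend is running, it can be smoke-tested from a terminal before wiring up the UI. This sketch assumes the <code>wscat</code> utility, an npm package that is not part of the project:</p>

```shell
# Open an interactive WebSocket session against the local backend (sketch).
# Requires Node.js; wscat is fetched on demand by npx.
npx wscat -c ws://localhost:8080
# Anything typed into this session is broadcast to every connected client,
# including the chat UI at http://localhost:3000.
```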
<p><strong>frontend</strong> — <code>.env.local</code>:</p>
<pre><code class="lang-plaintext">NEXT_PUBLIC_WS_URL=ws://localhost:8080
</code></pre>
<p><strong>backend</strong> — <code>.env</code>:</p>
<pre><code class="lang-plaintext"># backend/.env
PORT=8080
</code></pre>
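<p>The compose setup in the next step builds an image per service, so each directory needs a Dockerfile. A minimal backend sketch (base image and file layout are assumptions, not from the original post):</p>

```dockerfile
# backend/Dockerfile (sketch — base image and paths are assumptions)
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install --omit=dev
COPY . .
EXPOSE 8080
CMD ["node", "server.js"]
```

<p>The frontend Dockerfile follows the same pattern but runs <code>npm run build</code> and then <code>npm start</code>, exposing port 3000.</p>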
<pre><code class="lang-yaml"># docker-compose.yml
version: '3'
services:
  websocket-backend:
    build: ./backend
    ports:
      - "8080:8080"
    environment:
      - PORT=8080
    restart: always
  nextjs-frontend:
    build: ./frontend
    ports:
      - "3000:3000"
    environment:
      - NEXT_PUBLIC_WS_URL=ws://websocket-backend:8080
    depends_on:
      - websocket-backend
    restart: always
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727037809295/d1e6d83c-5277-4a73-942f-775b02f8665c.png" alt class="image--center mx-auto" /></p>
<p>Build and start both containers:</p>
<pre><code class="lang-plaintext">docker-compose up --build
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727037344654/8a9ca2a2-1d72-4d59-8d90-f28ab750909a.png" alt class="image--center mx-auto" /></p>
<p>Open the frontend web app URL in the browser: <a target="_blank" href="http://localhost:3000"><code>http://localhost:3000</code></a></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727037461358/cc22ac9e-db21-4771-8c8c-abe143551277.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727037500216/05cbf253-52dd-4e05-809b-1eb6bd7d7b6c.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727037534097/de369e3b-0370-406f-9c0f-315b3c484502.png" alt class="image--center mx-auto" /></p>
<p>Next, deploy the Docker containers to AWS Lightsail container services.</p>
<h3 id="heading-prerequisites">Prerequisites:</h3>
<ol>
<li><p><strong>AWS CLI Installed</strong>: Ensure you have the AWS CLI installed. You can verify this by running:</p>
<pre><code class="lang-plaintext"> aws --version
</code></pre>
<p> If it’s not installed, follow the <a target="_blank" href="https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html">AWS CLI installation guide</a>.</p>
</li>
<li><p><strong>AWS CLI Configured</strong>: Make sure the AWS CLI is configured with valid credentials. If not, configure it with <code>aws configure</code>.</p>
<p> You’ll need to provide your AWS <strong>Access Key ID</strong>, <strong>Secret Access Key</strong>, <strong>Region</strong>, and <strong>Output format</strong> (e.g., <code>json</code>).</p>
</li>
</ol>
<h3 id="heading-steps-to-create-an-ecr-repository">Steps to Create an ECR Repository:</h3>
<ol>
<li><p><strong>Create the ECR Repository</strong>: You can create an ECR repository using the following AWS CLI command:</p>
<pre><code class="lang-plaintext"> aws ecr create-repository --repository-name &lt;repository-name&gt; --region &lt;region&gt;
</code></pre>
<p> Replace <code>&lt;repository-name&gt;</code> with the name of your repository and <code>&lt;region&gt;</code> with the AWS region where you want to create it. For example:</p>
<pre><code class="lang-plaintext"> aws ecr create-repository --repository-name my-realtime-chatapp --region us-east-1
</code></pre>
</li>
<li><p><strong>Example Output</strong>: Upon successful creation, you should get a response like this:</p>
<pre><code class="lang-plaintext"> {
     "repository": {
         "repositoryArn": "arn:aws:ecr:us-east-1:123456789012:repository/my-realtime-chatapp",
         "registryId": "123456789012",
         "repositoryName": "my-realtime-chatapp",
         "repositoryUri": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-realtime-chatapp",
         "createdAt": 1644870241.0,
         "imageTagMutability": "MUTABLE",
         "imageScanningConfiguration": {
             "scanOnPush": false
         },
         "encryptionConfiguration": {
             "encryptionType": "AES256"
         }
     }
 }
</code></pre>
<h3 id="heading-steps-to-push-docker-images-to-ecr">Steps to Push Docker Images to ECR:</h3>
<h4 id="heading-1-authenticate-docker-to-your-ecr-registry">1. <strong>Authenticate Docker to Your ECR Registry</strong></h4>
<p> Before pushing images to ECR, you need to authenticate Docker to your Amazon ECR registry.</p>
<p> Run this command to authenticate (replace <code>&lt;your-region&gt;</code> with your AWS region, e.g., <code>us-east-1</code>):</p>
<pre><code class="lang-plaintext"> aws ecr get-login-password --region &lt;your-region&gt; | docker login --username AWS --password-stdin &lt;your-account-id&gt;.dkr.ecr.&lt;your-region&gt;.amazonaws.com
</code></pre>
<p> For example, if your AWS region is <code>us-east-1</code> and your account ID is <code>123456789012</code>, the command would be:</p>
<pre><code class="lang-plaintext"> aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
</code></pre>
</li>
<li><h4 id="heading-2-tag-the-docker-images">2. <strong>Tag the Docker Images</strong></h4>
<p>Amazon ECR repositories use a specific URL format for tagging images, which includes your AWS account ID, region, and repository name.</p>
<p> You will need to tag both images (<code>realtime-chatapp-test-nextjs-frontend</code> and <code>realtime-chatapp-test-websocket-backend</code>) so they point to the correct ECR repository.</p>
<h5 id="heading-tag-the-frontend-image">Tag the Frontend Image:</h5>
<pre><code class="lang-plaintext"> docker tag realtime-chatapp-test-nextjs-frontend:latest &lt;your-account-id&gt;.dkr.ecr.&lt;your-region&gt;.amazonaws.com/my-realtime-chatapp:frontend-latest
</code></pre>
<p> Example:</p>
<pre><code class="lang-plaintext"> docker tag realtime-chatapp-test-nextjs-frontend:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-realtime-chatapp:frontend-latest
</code></pre>
<h5 id="heading-tag-the-backend-imagedocker-tag-realtime-chatapp-test-websocket-backendlatest-ltyour-account-idgtdkrecrltyour-regiongtamazonawscommy-realtime-chatappbackend-latest">Tag the Backend Image:</h5>
<pre><code class="lang-plaintext"> docker tag realtime-chatapp-test-websocket-backend:latest &lt;your-account-id&gt;.dkr.ecr.&lt;your-region&gt;.amazonaws.com/my-realtime-chatapp:backend-latest
</code></pre>
<p> Example:</p>
<pre><code class="lang-plaintext"> docker tag realtime-chatapp-test-websocket-backend:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-realtime-chatapp:backend-latest
</code></pre>
<h4 id="heading-3-push-the-docker-images-to-ecr">3. <strong>Push the Docker Images to ECR</strong></h4>
<p> Now that the images are tagged, you can push them to the ECR repository.</p>
<h5 id="heading-push-the-frontend-image">Push the Frontend Image:</h5>
<pre><code class="lang-plaintext"> docker push &lt;your-account-id&gt;.dkr.ecr.&lt;your-region&gt;.amazonaws.com/my-realtime-chatapp:frontend-latest
</code></pre>
<p> Example:</p>
<pre><code class="lang-plaintext"> docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-realtime-chatapp:frontend-latest
</code></pre>
<h5 id="heading-push-the-backend-image">Push the Backend Image:</h5>
<pre><code class="lang-plaintext"> docker push &lt;your-account-id&gt;.dkr.ecr.&lt;your-region&gt;.amazonaws.com/my-realtime-chatapp:backend-latest
</code></pre>
<p> Example:</p>
<pre><code class="lang-plaintext"> docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-realtime-chatapp:backend-latest
</code></pre>
<h3 id="heading-verify-your-images-in-ecr">Verify Your Images in ECR:</h3>
<p> Once the push is successful, you can verify that your images have been uploaded by going to the <a target="_blank" href="https://console.aws.amazon.com/ecr">Amazon ECR Console</a> and checking the <code>my-realtime-chatapp</code> repository.</p>
</li>
</ol>
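<p>All of the tag and push commands above share one repository URI, assembled from your account ID, region, and repository name. A quick shell sketch with placeholder values:</p>

```shell
# Assemble the ECR repository URI used by the docker tag/push commands.
ACCOUNT_ID="123456789012"   # placeholder
REGION="us-east-1"          # placeholder
REPO="my-realtime-chatapp"

ECR_URI="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/${REPO}"
echo "${ECR_URI}"

# The two images then become:
#   ${ECR_URI}:frontend-latest
#   ${ECR_URI}:backend-latest
```

<p>Scripting the URI this way avoids typos when the same registry address appears in several commands.</p>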
<p>When configuring a WebSocket connection on AWS Lightsail or any other environment, WebSockets generally use either:</p>
<ul>
<li><p><code>ws://</code> for unencrypted WebSocket connections</p>
</li>
<li><p><code>wss://</code> for encrypted WebSocket connections (WebSockets over TLS/SSL, similar to HTTPS).</p>
</li>
</ul>
<h3 id="heading-which-protocol-to-use">Which Protocol to Use:</h3>
<ol>
<li><p><code>ws://</code> (Unencrypted WebSocket):</p>
<ul>
<li><p>This is used for <strong>local development</strong> or when you don’t have SSL/TLS certificates configured on your server.</p>
</li>
<li><p>Example: <code>ws://</code><a target="_blank" href="http://localhost:8080"><code>localhost:8080</code></a></p>
</li>
</ul>
</li>
<li><p><code>wss://</code> (Secure WebSocket):</p>
<ul>
<li><p>This is the <strong>recommended</strong> protocol for production environments, including AWS Lightsail.</p>
</li>
<li><p>WebSocket connections secured with SSL/TLS (just like HTTPS).</p>
</li>
<li><p>Example: <code>wss://</code><a target="_blank" href="http://your-domain.com/websocket"><code>your-domain.com/websocket</code></a></p>
</li>
</ul>
</li>
</ol>
<p>Deploy on AWS Lightsail</p>
<p>Create IAM Role:</p>
<p>EC2-Lightsail-ECR-Route53-Role</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727150623140/1ffa5ce8-7f04-4eac-9c04-8548ac969566.png" alt class="image--center mx-auto" /></p>
<p>Create a Policy:</p>
<p><a target="_blank" href="https://us-east-1.console.aws.amazon.com/iam/home?region=us-east-2#/roles/details/EC2-Lightsail-ECR-Route53-Role/editPolicy/LightSailAccess?step=addPermissions">LightSailAccess</a></p>
<pre><code class="lang-plaintext">{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "iam:DeleteServiceLinkedRole",
                "iam:GetServiceLinkedRoleDeletionStatus"
            ],
            "Resource": "arn:aws:iam::*:role/aws-service-role/lightsail.amazonaws.com/AWSServiceRoleForLightsail*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ec2:CopySnapshot",
                "ec2:DescribeSnapshots",
                "ec2:CopyImage",
                "ec2:DescribeImages"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetAccountPublicAccessBlock"
            ],
            "Resource": "*"
        }
    ]
}
</code></pre>
<h3 id="heading-setup-aws-lightsailhttpslightsailawsamazoncom"><strong>Setup AWS Lightsail</strong></h3>
<ol>
<li><p><strong>Log In to AWS Lightsail:</strong></p>
<ul>
<li><p>Go to <a target="_blank" href="https://lightsail.aws.amazon.com">AWS Lightsail</a>.</p>
</li>
<li><p>Sign in with your AWS credentials.</p>
</li>
</ul>
</li>
<li><p><strong>Create a Lightsail Container Service:</strong></p>
<ul>
<li><p>Navigate to the <strong>Containers</strong> section.</p>
</li>
<li><p>Click on <strong>Create a container service</strong>.</p>
</li>
<li><p>Choose the size of the container based on your application's resource needs (start with a smaller plan if unsure).</p>
</li>
<li><p>Select the AWS region closest to your users.</p>
</li>
</ul>
</li>
</ol>
<h3 id="heading-deploy-frontend-and-backend-containers"><strong>Deploy Frontend and Backend Containers</strong></h3>
<ol>
<li><p><strong>Deploy Backend Container:</strong></p>
<ul>
<li><p>After creating the container service, select it and click <strong>Add container</strong>.</p>
</li>
<li><p>Under <strong>Container Name</strong>, enter a name like <code>backend</code>.</p>
</li>
<li><p>Set the <strong>Container image</strong> to your backend image (e.g., <code>&lt;your-username&gt;/realtime-chat-backend:latest</code>).</p>
</li>
<li><p>Set <strong>Open ports</strong> to <code>8080</code> (the port your WebSocket server listens on).</p>
</li>
<li><p>Click <strong>Save</strong>.</p>
</li>
</ul>
</li>
<li><p><strong>Deploy Frontend Container:</strong></p>
<ul>
<li><p>Repeat the above steps for your frontend:</p>
<ul>
<li><p>Add a container named <code>frontend</code>.</p>
</li>
<li><p>Set the <strong>Container image</strong> to your frontend image (e.g., <code>&lt;your-username&gt;/realtime-chat-frontend:latest</code>).</p>
</li>
<li><p>Set <strong>Open ports</strong> to <code>3000</code> (the port your Next.js app listens on).</p>
</li>
</ul>
</li>
<li><p>Click <strong>Save</strong> and <strong>Deploy</strong>.</p>
</li>
</ul>
</li>
</ol>
<p>Deploy both containers within a single Lightsail container service.</p>
<p>Access the web app in a browser to verify the deployment.</p>
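<p>As an alternative to the console steps above, the same two-container deployment can be pushed with the AWS CLI (<code>aws lightsail create-container-service-deployment</code>). A hypothetical <code>containers.json</code> sketch; the image names, ports, and service name are the placeholders used above, not real values:</p>
<pre><code class="lang-plaintext">{
  "backend": {
    "image": "YOUR_USERNAME/realtime-chat-backend:latest",
    "ports": { "8080": "HTTP" }
  },
  "frontend": {
    "image": "YOUR_USERNAME/realtime-chat-frontend:latest",
    "ports": { "3000": "HTTP" }
  }
}
</code></pre>
<p>Deploy it with <code>aws lightsail create-container-service-deployment --service-name YOUR_SERVICE --containers file://containers.json --public-endpoint '{"containerName":"frontend","containerPort":3000}'</code>.</p>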
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727555365785/801a37e9-3f2d-408b-8228-f256b0082b19.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-conclusion">Conclusion</h3>
<p>By deploying a real-time chat application using Next.js, WebSockets, and AWS Lightsail containers, the e-commerce company solved their customer support issues, scaled their infrastructure efficiently, and significantly boosted both customer satisfaction and revenue. This real-time business solution is a powerful use case for modern e-commerce platforms looking to optimize user engagement and operational efficiency.</p>
]]></content:encoded></item><item><title><![CDATA[Secure Your Spring Application: Keycloak & OAuth2 Integration and Configuration with Spring PetClinic]]></title><description><![CDATA[Video Link: https://youtu.be/cLSTSznrg14
Secure User Authentication and Role-Based Access Control for a Veterinary Clinic Management System
Business Problem:
A veterinary clinic management system, similar to the Spring PetClinic application, requires...]]></description><link>https://praful.cloud/secure-your-spring-application-keycloak-oauth2-integration-and-configuration-with-spring-petclinic</link><guid isPermaLink="true">https://praful.cloud/secure-your-spring-application-keycloak-oauth2-integration-and-configuration-with-spring-petclinic</guid><category><![CDATA[#KeycloakIntegration #SpringPetClinic #KeycloakSpringBoot #SpringBootKeycloak #KeycloakSecurity #SpringSecurityKeycloak #SpringPetClinicTutorial #SpringBootSSO #KeycloakTutorial #SpringBootAuthentication #OAuth2SpringBoot #OpenIDConnectSpringBoot #SpringBootMicroservices #KeycloakTokenManagement #AuthenticationSpringBoot #PrafulCloud #PetClinicApp #KeycloakForJavaApps #SecureSpringBootApps]]></category><dc:creator><![CDATA[Praful Patel]]></dc:creator><pubDate>Fri, 27 Sep 2024 01:55:03 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1727395114406/58968da4-a3b3-49fc-9b99-580c3118af7d.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Video Link: <a target="_blank" href="https://youtu.be/cLSTSznrg14">https://youtu.be/cLSTSznrg14</a></p>
<h3 id="heading-secure-user-authentication-and-role-based-access-control-for-a-veterinary-clinic-management-system"><strong>Secure User Authentication and Role-Based Access Control for a Veterinary Clinic Management System</strong></h3>
<p><strong>Business Problem:</strong></p>
<p>A veterinary clinic management system, similar to the <strong>Spring PetClinic</strong> application, requires secure user authentication and role-based access control (RBAC) to ensure that different users (e.g., veterinarians, pet owners, and administrators) can access the system based on their roles. The system manages sensitive information, such as medical records, owner details, and billing data, and requires the following:</p>
<ol>
<li><p><strong>Authentication</strong>: Ensure that only authenticated users can access the system.</p>
</li>
<li><p><strong>Authorization</strong>: Restrict access to specific data and functionalities based on the user's role (veterinarian, owner, admin).</p>
</li>
<li><p><strong>Token Management</strong>: Manage secure tokens for logged-in users to ensure efficient session handling.</p>
</li>
<li><p><strong>Seamless Integration</strong>: The solution must integrate seamlessly with the existing system without significant rework.</p>
</li>
</ol>
<p>The veterinary clinic seeks a scalable and secure authentication and authorization system to prevent unauthorized access and protect sensitive data.</p>
<hr />
<h3 id="heading-challenges"><strong>Challenges:</strong></h3>
<ol>
<li><p><strong>Implementing Role-Based Access Control (RBAC)</strong>: The clinic requires the ability to control which roles can access which resources. For instance:</p>
<ul>
<li><p><strong>Veterinarians</strong> should have access to manage pet medical records.</p>
</li>
<li><p><strong>Owners</strong> should only access their pets' details.</p>
</li>
<li><p><strong>Administrators</strong> should have full access to manage clinic operations.</p>
</li>
</ul>
</li>
<li><p><strong>Seamless Integration with Existing Systems</strong>: The clinic management system is built using <strong>Spring Boot</strong>, and any integration must fit within the existing architecture with minimal disruption.</p>
</li>
<li><p><strong>Secure Token-Based Authentication</strong>: The system needs secure token-based authentication using industry-standard protocols (e.g., OAuth2, OpenID Connect) to provide robust security.</p>
</li>
<li><p><strong>Centralized User Management</strong>: The clinic desires centralized identity management to handle user credentials, roles, and permissions efficiently. It must support <strong>Single Sign-On (SSO)</strong> for both web and mobile clients in the future.</p>
</li>
</ol>
<hr />
<h3 id="heading-solution-integration-of-keycloak-with-spring-petclinic"><strong>Solution: Integration of Keycloak with Spring PetClinic</strong></h3>
<p><strong>Keycloak</strong> is an open-source Identity and Access Management solution that solves these authentication and authorization challenges. Keycloak provides a comprehensive framework for managing users, roles, tokens, and Single Sign-On (SSO).</p>
<p>We implemented <strong>Keycloak</strong> as the authentication and authorization provider for the Spring PetClinic application to meet the clinic’s security requirements.</p>
<p>Flow Diagram</p>
<pre><code class="lang-plaintext">+-------------+                     +--------------+                      +--------------+
|   Browser   | ---- Login ----&gt;     |  Keycloak    |  ---- Redirect ----&gt; | Spring App   |
|             | &lt;--- Auth Token ---  |              |  &lt;--- Token -------&gt; |              |
+-------------+                     +--------------+                      +--------------+
</code></pre>
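<p>In the authorization-code flow sketched above, the browser is first redirected to Keycloak's authorize endpoint. The sketch below builds that redirect URL for illustration only: the endpoint path follows Keycloak's <code>/realms/{realm}/protocol/openid-connect</code> convention, and the redirect URI assumes Spring Security's default <code>/login/oauth2/code/{registrationId}</code> callback for a registration named <code>keycloak</code>.</p>
<pre><code class="lang-plaintext">import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class AuthUrl {
    static String enc(String s) {
        return URLEncoder.encode(s, StandardCharsets.UTF_8);
    }

    // Build the browser redirect URL for the authorization-code flow
    static String buildAuthUrl() {
        return "http://localhost:8080/realms/spring-petclinic/protocol/openid-connect/auth"
                + "?client_id=" + enc("spring-petclinic")
                + "&response_type=code"
                + "&scope=" + enc("openid profile email")
                + "&redirect_uri=" + enc("http://localhost:8080/login/oauth2/code/keycloak");
    }

    public static void main(String[] args) {
        System.out.println(buildAuthUrl());
    }
}
</code></pre>
<p>In the running application, Spring Security constructs this URL for you; building it by hand is only useful for debugging the flow.</p>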
<h3 id="heading-prerequisites">Prerequisites:</h3>
<ul>
<li><p>Java 11 or later</p>
</li>
<li><p>Maven</p>
</li>
<li><p>A running Keycloak instance (you can run it locally or in Docker)</p>
</li>
<li><p>The Spring PetClinic application source code (av<a target="_blank" href="https://github.com/spring-projects/spring-petclinic">ailable</a> on <a target="_blank" href="https://github.com/spring-projects/spring-petclinic">GitHub</a>)</p>
</li>
</ul>
<h3 id="heading-stehttpsgithubcomspring-projectsspring-petclinicp-1-set-up-keycloakhttpsgithubcomspring-projectsspring-petclinic">Step 1: Set Up Keycloak</h3>
<p>Install Keycloak:</p>
<p>Run the shell script below on an Ubuntu machine to install Keycloak:</p>
<pre><code class="lang-plaintext">#!/bin/bash

# Objective: To install Keycloak in Ubuntu Machine
# Date: 24 SEP 2024
# Author: PRAFUL PATEL
# Web: https://www.praful.cloud
# ---------------------------------------------------------------

# Variables (modify these as per your requirements)
KEYCLOAK_VERSION="25.0.6"  # Latest stable version
KEYCLOAK_USER="keycloak"
KEYCLOAK_HOME="/opt/keycloak"
DB_USER="keycloak"
DB_PASSWORD="admin"
DB_NAME="keycloak"
ADMIN_USERNAME="admin"
ADMIN_PASSWORD="admin"

# Step 1: Update system and install required packages
echo "Updating system and installing required packages..."
sudo apt update -y
sudo apt upgrade -y
sudo apt install -y openjdk-17-jdk curl wget unzip

# Step 2: Download Keycloak
echo "Downloading Keycloak..."
wget https://github.com/keycloak/keycloak/releases/download/${KEYCLOAK_VERSION}/keycloak-${KEYCLOAK_VERSION}.zip -P /tmp

# Step 3: Install Keycloak
echo "Installing Keycloak..."
sudo unzip /tmp/keycloak-${KEYCLOAK_VERSION}.zip -d /tmp/
sudo mkdir -p ${KEYCLOAK_HOME}
sudo cp -r /tmp/keycloak-${KEYCLOAK_VERSION}/* ${KEYCLOAK_HOME}/


# Step 4: Create Keycloak user and set permissions
echo "Creating Keycloak user and setting permissions..."
sudo useradd -r -d ${KEYCLOAK_HOME} -s /bin/false ${KEYCLOAK_USER}
sudo chown -R ${KEYCLOAK_USER}:${KEYCLOAK_USER} ${KEYCLOAK_HOME}
sudo chmod +x ${KEYCLOAK_HOME}/bin/kc.sh
sudo chmod -R 755 ${KEYCLOAK_HOME}

# Step 5: Set up database (Optional)
echo "Setting up PostgreSQL database..."
sudo apt install -y postgresql postgresql-contrib

sudo -u postgres psql -c "CREATE USER ${DB_USER} WITH PASSWORD '${DB_PASSWORD}';"
sudo -u postgres psql -c "CREATE DATABASE ${DB_NAME};"
sudo -u postgres psql -c "GRANT ALL PRIVILEGES ON DATABASE ${DB_NAME} TO ${DB_USER};"
sudo -u postgres psql -c "ALTER DATABASE ${DB_NAME} OWNER TO ${DB_USER};"


# Step 6: Configure Keycloak to use PostgreSQL
echo "Configuring Keycloak to use PostgreSQL..."
sudo tee ${KEYCLOAK_HOME}/conf/keycloak.conf &lt;&lt;EOF
db=postgres
db-url=jdbc:postgresql://localhost:5432/${DB_NAME}
db-username=${DB_USER}
db-password=${DB_PASSWORD}
EOF

# Step 7: Configure Keycloak as a service
echo "Configuring Keycloak as a service..."
sudo tee /etc/systemd/system/keycloak.service &lt;&lt;EOF
[Unit]
Description=Keycloak Server
After=network.target

[Service]
User=keycloak
Group=keycloak
Environment="JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64"
Environment="KEYCLOAK_ADMIN=admin"
Environment="KEYCLOAK_ADMIN_PASSWORD=admin"
ExecStart=/opt/keycloak/bin/kc.sh start --http-port=8081 
WorkingDirectory=/opt/keycloak
Restart=on-failure
LimitNOFILE=102642
TimeoutStopSec=120
StandardOutput=journal
StandardError=journal

[Install]
WantedBy=multi-user.target

EOF

# Step 8: Reload systemd, start, and enable Keycloak service
echo "Enabling and starting Keycloak service..."
sudo systemctl daemon-reload
sudo systemctl enable keycloak
sudo systemctl start keycloak

# Alternatively, run Keycloak in dev mode in the foreground (stop the systemd service first):
# sudo /opt/keycloak/bin/kc.sh start-dev --http-port=8081 --verbose


# Step 9: Display Keycloak Admin Console Information
echo "Installation complete!"
echo "Access Keycloak Admin Console at: http://localhost:8081"
echo "Admin Username: ${ADMIN_USERNAME}"
echo "Admin Password: ${ADMIN_PASSWORD}"
</code></pre>
<h4 id="heading-11-inshttpsgithubcomspring-projectsspring-petclinictall-keycloak-if-not-already-installed">1.1 Install Keycloak (if not already installed)</h4>
<p>You can run Keycloak locally or using Docker:</p>
<pre><code class="lang-plaintext">docker run -p 8080:8080 -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=admin quay.io/keycloak/keycloak:latest start-dev
</code></pre>
<p>This will start Keycloak in development mode. Access Keycloak at <a target="_blank" href="http://localhost:8080"><code>http://localhost:8080</code></a>.</p>
<h4 id="heading-12-create-a-realm">1.2 Create a Realm</h4>
<ol>
<li><p>Log in to the Keycloak Admin Console (default credentials are <code>admin</code>/<code>admin</code>).</p>
</li>
<li><p>Click <strong>Add Realm</strong> and create a realm called <code>spring-petclinic</code>.</p>
</li>
</ol>
<h4 id="heading-13-create-a-clienthttpsgithubcomspring-projectsspring-petclinic-for-spring-petclinichttpsgithubcomspring-projectsspring-petclinic">1.3 Create a Client for Spring PetClinic</h4>
<ol>
<li><p>In your <code>spring-petclinic</code> realm, navigate to <strong>Clients</strong> and click <strong>Create</strong>.</p>
</li>
<li><p>Set the <strong>Client ID</strong> to <code>spring-petclinic</code> and <strong>Access Type</strong> to <code>confidential</code>.</p>
</li>
<li><p>Set the <strong>Root URL</strong> to your Spring PetClinic application’s URL, e.g., <a target="_blank" href="http://localhost:8080"><code>http://localhost:8080</code></a>.</p>
</li>
<li><p>After creating the client, go to the <strong>Credentials</strong> tab and copy the <strong>Client Secret</strong>. You will need this for the Spring Boot application configuration.</p>
</li>
</ol>
<h4 id="heading-14-create-roleshttpsgithubcomspring-projectsspring-petclinic">1.4 Create Roles</h4>
<ol>
<li><p>Navigate to <strong>Roles</strong> and create the following roles:</p>
<ul>
<li><p><code>admin</code></p>
</li>
<li><p><code>vet</code></p>
</li>
<li><p><code>owner</code></p>
</li>
</ul>
</li>
</ol>
<h4 id="heading-15-create-users">1.5 Create Users</h4>
<ol>
<li><p>Go to <strong>Users</strong> and create users for each role. For example:</p>
<ul>
<li><p><code>admin</code> user with the <code>admin</code> role.</p>
</li>
<li><p><code>vet</code> user with the <code>vet</code> role.</p>
</li>
<li><p><code>owner</code> user with the <code>owner</code> role.</p>
</li>
</ul>
</li>
<li><p>Assign the corresponding role to each user in the <strong>Role Mappings</strong> tab.</p>
</li>
</ol>
<h3 id="heading-step-2-configure-spring-petclihttpsgithubcomspring-projectsspring-petclinicnic-to-use-keycloak">Step 2: Configure Spring PetClinic to Use Keycloak</h3>
<p>You need to modify the Spring PetClinic application to authenticate with Keycloak using the <strong>Spring Security Keycloak Adapter</strong>.</p>
<h4 id="heading-21-add-keycloak-dependencies-to-pomxmlhttpsgithubcomspring-projectsspring-petclinic">2.1 Add Keycloak Dependencies to <code>pom.xml</code></h4>
<p>In the <code>pom.xml</code> file, add the necessary Keycloak dependencies:</p>
<pre><code class="lang-plaintext">&lt;dependency&gt;
    &lt;groupId&gt;org.keycloak&lt;/groupId&gt;
    &lt;artifactId&gt;keycloak-spring-boot-starter&lt;/artifactId&gt;
    &lt;version&gt;21.0.1&lt;/version&gt; &lt;!-- Use the latest Keycloak version --&gt;
&lt;/dependency&gt;

&lt;dependency&gt;
    &lt;groupId&gt;org.keycloak&lt;/groupId&gt;
    &lt;artifactId&gt;keycloak-spring-security-adapter&lt;/artifactId&gt;
    &lt;version&gt;21.0.1&lt;/version&gt;
&lt;/dependency&gt;
</code></pre>
<h4 id="heading-22-configure-keycloak-in-applicationpropertieshttpapplicationproperties-or-applicationyml">2.2 Configure Keycloak in <a target="_blank" href="http://application.properties"><code>application.properties</code></a> or <code>application.yml</code></h4>
<p>If you use <a target="_blank" href="http://application.properties"><code>application.properties</code></a>, add the following:</p>
<pre><code class="lang-plaintext">keycloak.realm=spring-petclinic
keycloak.auth-server-url=http://localhost:8080
keycloak.resource=spring-petclinic
keycloak.credentials.secret=YOUR_CLIENT_SECRET
keycloak.ssl-required=none
keycloak.public-client=false
keycloak.principal-attribute=preferred_username
keycloak.bearer-only=false
</code></pre>
<p>If you use <code>application.yml</code>, add the following:</p>
<pre><code class="lang-plaintext">keycloak:
  realm: spring-petclinic
  auth-server-url: http://localhost:8080
  resource: spring-petclinic
  credentials:
    secret: YOUR_CLIENT_SECRET
  ssl-required: none
  public-client: false
  principal-attribute: preferred_username
  bearer-only: false
</code></pre>
<p>Replace <code>YOUR_CLIENT_SECRET</code> with the client secret you copied from the Keycloak client settings earlier.</p>
<h4 id="heading-23-configure-spring-security">2.3 Configure Spring Security</h4>
<p>Create a security configuration class that extends <code>KeycloakWebSecurityConfigurerAdapter</code> to secure the endpoints and map roles:</p>
<pre><code class="lang-plaintext">import org.keycloak.adapters.springsecurity.KeycloakConfiguration;
import org.keycloak.adapters.springsecurity.config.KeycloakWebSecurityConfigurerAdapter;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.security.config.annotation.authentication.builders.AuthenticationManagerBuilder;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.core.session.SessionRegistryImpl;
import org.springframework.security.web.authentication.session.RegisterSessionAuthenticationStrategy;
import org.springframework.security.web.authentication.session.SessionAuthenticationStrategy;

@KeycloakConfiguration
public class SecurityConfig extends KeycloakWebSecurityConfigurerAdapter {

    @Autowired
    public void configureGlobal(AuthenticationManagerBuilder auth) {
        auth.authenticationProvider(keycloakAuthenticationProvider());
    }

    @Bean
    @Override
    protected SessionAuthenticationStrategy sessionAuthenticationStrategy() {
        return new RegisterSessionAuthenticationStrategy(new SessionRegistryImpl());
    }

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        super.configure(http);
        http
            .authorizeRequests()
            .antMatchers("/vets/**").hasRole("vet")
            .antMatchers("/owners/**").hasRole("owner")
            .antMatchers("/admin/**").hasRole("admin")
            .anyRequest().permitAll();
    }
}
</code></pre>
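<p>Conceptually, the <code>antMatchers</code> rules above form a prefix-to-role table: the first matching URL prefix decides which role is required, and unmatched requests are permitted. A minimal stand-alone sketch of that logic (plain Java for illustration, not Spring Security code):</p>
<pre><code class="lang-plaintext">public class RoleCheck {
    // Hypothetical mirror of the antMatcher rules above: URL prefix, required role
    static final String[][] RULES = {
            {"/vets/", "vet"},
            {"/owners/", "owner"},
            {"/admin/", "admin"}
    };

    static boolean isAllowed(String path, String... roles) {
        for (String[] rule : RULES) {
            if (path.startsWith(rule[0])) {
                // The route matched: the user needs the mapped role
                for (String r : roles) {
                    if (r.equals(rule[1])) return true;
                }
                return false;
            }
        }
        return true; // anyRequest().permitAll()
    }

    public static void main(String[] args) {
        System.out.println(isAllowed("/vets/1", "vet"));    // true
        System.out.println(isAllowed("/admin/x", "owner")); // false
        System.out.println(isAllowed("/"));                 // true
    }
}
</code></pre>
<p>Spring Security evaluates its matchers in declaration order in the same way, which is why broader patterns should come after more specific ones.</p>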
<h3 id="heading-step-3-test-the-application">Step 3: Test the Application</h3>
<ol>
<li><p><strong>Start Keycloak</strong> if it’s not already running.</p>
</li>
<li><p><strong>Run the Spring PetClinic Application</strong>:</p>
<pre><code class="lang-plaintext"> ./mvnw spring-boot:run
</code></pre>
</li>
<li><p><strong>Access the PetClinic App</strong>: Open a browser and navigate to <a target="_blank" href="http://localhost:8080"><code>http://localhost:8080</code></a>.</p>
<ul>
<li><p>You will be redirected to the Keycloak login page when you try to access protected routes like <code>/vets</code>, <code>/owners</code>, or <code>/admin</code>.</p>
</li>
<li><p>Log in using the corresponding role-based users you created in Keycloak.</p>
</li>
</ul>
</li>
</ol>
<h3 id="heading-step-4-test-authorization">Step 4: Test Authorization</h3>
<ul>
<li><p><strong>Log in as Admin</strong>:</p>
<ul>
<li>Navigate to <code>/admin</code> and log in using the <code>admin</code> user.</li>
</ul>
</li>
<li><p><strong>Log in as Vet</strong>:</p>
<ul>
<li>Navigate to <code>/vets</code> and log in using the <code>vet</code> user.</li>
</ul>
</li>
<li><p><strong>Log in as Owner</strong>:</p>
<ul>
<li>Navigate to <code>/owners</code> and log in using the <code>owner</code> user.</li>
</ul>
</li>
</ul>
<h3 id="heading-conclusion">Conclusion</h3>
<p>By following these steps, you have integrated Keycloak as the authentication and authorization provider for the Spring PetClinic application. The application now authenticates users via Keycloak and restricts access based on user roles.</p>
<p>Create Realm</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727295486621/29f41667-3975-4525-88a1-47cca651d3a7.png" alt class="image--center mx-auto" /></p>
<p>Create Client</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727295776547/ae67df3f-6dd2-4258-a3b8-543f04203764.png" alt class="image--center mx-auto" /></p>
<p>Create Credentials</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727295131271/248caf48-fd6b-4f0b-b66b-c293ec323f1b.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727294976324/466a8bcb-1399-4219-accb-0cbfed64e9d0.png" alt class="image--center mx-auto" /></p>
<p>Copy Client Secret</p>
<p>J4Hl5BSJYfbOAqTYmaFHL6T0MEX</p>
<p>Create Role</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727296024615/90e11361-5b5d-4f0d-95cd-e7930b98e277.png" alt class="image--center mx-auto" /></p>
<h4 id="heading-create-roles">Create Roles</h4>
<ol>
<li><p>Navigate to <strong>Roles</strong> and create the following roles:</p>
<ul>
<li><p><code>admin</code></p>
</li>
<li><p><code>vet</code></p>
</li>
<li><p><code>owner</code></p>
</li>
</ul>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727296136256/89f2a14c-cb28-4778-bb62-3cadc146b85f.png" alt class="image--center mx-auto" /></p>
<h4 id="heading-create-users">Create Users</h4>
<ol>
<li><p>Go to <strong>Users</strong> and create users for each role. For example:</p>
<ul>
<li><p><code>admin</code> user with the <code>admin</code> role.</p>
</li>
<li><p><code>vet</code> user with the <code>vet</code> role.</p>
</li>
<li><p><code>owner</code> user with the <code>owner</code> role.</p>
</li>
</ul>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727296189961/d218c01b-ddf5-41e9-95c5-e759d65a0629.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727296251479/3966a8a8-07c1-4472-a613-e692a3fb4faa.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727296371528/4039647f-bc1a-40ad-9182-c39848a7f843.png" alt class="image--center mx-auto" /></p>
<p>Role mapping</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727296418702/87f0af8b-c43b-4e44-b032-8eeda55267d3.png" alt class="image--center mx-auto" /></p>
<p>Login</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727374745724/822d3d80-1c08-4367-a40e-e12cc281a8b3.png" alt class="image--center mx-auto" /></p>
<p>Error</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727374777240/8765accd-29b0-4f33-a5b0-194200101851.png" alt class="image--center mx-auto" /></p>
<p>To fix this error, we will have to integrate a token controller into the Spring PetClinic app.</p>
<p>To handle the <strong>Keycloak token</strong> in your <strong>Spring PetClinic</strong> application, you need to add the token management code in the appropriate parts of the Spring Boot application based on how you want to retrieve and use the token.</p>
<p>Here’s where and how you should add the necessary code in the <strong>Spring PetClinic</strong> application:</p>
<h3 id="heading-configure-security-in-spring-boot"><strong>Configure Security in Spring Boot</strong></h3>
<ol>
<li><p><strong>Create a New Package</strong>:</p>
<ul>
<li>Inside <code>src/main/java/org/springframework/samples/petclinic</code>, create a new package called <code>security</code>.</li>
</ul>
</li>
<li><p><strong>Add the</strong> <code>SecurityConfig.java</code> Class:</p>
<ul>
<li>Inside the <code>security</code> package, create a file <code>SecurityConfig.java</code>:</li>
</ul>
</li>
</ol>
<h3 id="heading-application-level-changes-to-sprint-petclinic-application-code">Application-level changes to the Spring PetClinic application code</h3>
<p>Configure Security in Spring Boot</p>
<p>Add a new package <code>security</code> under <code>java/org/springframework/samples/petclinic</code><br />file: <code>SecurityConfig.java</code></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727384569442/876834c5-48a0-46ff-bb2e-d8ad624fd633.png" alt class="image--center mx-auto" /></p>
<p>Create a New Controller to Retrieve the Token</p>
<p>You can create a new controller class in your Spring PetClinic application that retrieves the Keycloak token from the current session using <strong>Spring Security’s</strong> <code>SecurityContextHolder</code>.</p>
<h4 id="heading-steps">Steps:</h4>
<ul>
<li>Create a new controller file in the appropriate package (usually under <code>org.springframework.samples.petclinic.web</code>).</li>
</ul>
<h4 id="heading-example-tokencontrollerjavahttptokencontrollerjava">Example: <code>TokenController.java</code></h4>
<pre><code class="lang-plaintext">package org.springframework.samples.petclinic.web;

import org.springframework.security.core.Authentication;
import org.springframework.security.core.context.SecurityContextHolder;
import org.springframework.security.oauth2.core.oidc.user.OidcUser;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class TokenController {

    // This endpoint returns the Keycloak access token for the logged-in user
    @GetMapping("/token")
    public String getToken() {
        Authentication authentication = SecurityContextHolder.getContext().getAuthentication();

        // Check if the authenticated user is an OIDC user (Keycloak)
        if (authentication.getPrincipal() instanceof OidcUser) {
            OidcUser oidcUser = (OidcUser) authentication.getPrincipal();
            String token = oidcUser.getIdToken().getTokenValue(); // Retrieve the token
            return "Token: " + token;
        }

        return "No token available";
    }
}
</code></pre>
<ul>
<li><p><strong>Where to put this file:</strong></p>
<ul>
<li>This file should be placed under your <strong>web</strong> package, for example, at <code>src/main/java/org/springframework/samples/petclinic/web/TokenController.java</code>.</li>
</ul>
</li>
</ul>
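<p>The value returned by the <code>/token</code> endpoint above is a JWT: three Base64URL-encoded segments, with the claims (including <code>preferred_username</code> and the realm roles) in the middle segment. A minimal sketch of inspecting that payload without signature verification; the sample payload here is fabricated for illustration and a real Keycloak token carries many more claims:</p>
<pre><code class="lang-plaintext">import java.util.Base64;

public class JwtPeek {
    // Decode the (unverified) payload segment of a JWT -- for debugging only
    static String decodePayload(String token) {
        String payload = token.split("\\.")[1];
        return new String(Base64.getUrlDecoder().decode(payload));
    }

    public static void main(String[] args) {
        // Fabricated sample payload resembling a Keycloak token
        String body = "{\"preferred_username\":\"vet1\",\"realm_access\":{\"roles\":[\"vet\"]}}";
        String middle = Base64.getUrlEncoder().withoutPadding().encodeToString(body.getBytes());
        String token = "header." + middle + ".signature";
        System.out.println(decodePayload(token)); // prints the JSON payload above
    }
}
</code></pre>
<p>Never base authorization decisions on an unverified payload; in the application, signature verification is handled by the Keycloak/Spring Security adapter.</p>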
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727383079424/0177caa7-5255-43a1-8511-bdbedd292850.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-update-application-properties-for-keycloak-and-oauth2">Update application properties for Keycloak and OAuth2</h3>
<pre><code class="lang-plaintext"># database init, supports mysql too
database=h2
spring.sql.init.schema-locations=classpath*:db/${database}/schema.sql
spring.sql.init.data-locations=classpath*:db/${database}/data.sql

# Web
spring.thymeleaf.mode=HTML

# JPA
spring.jpa.hibernate.ddl-auto=none
spring.jpa.open-in-view=false

# Internationalization
spring.messages.basename=messages/messages

# Actuator
management.endpoints.web.exposure.include=*

# Logging
logging.level.org.springframework=INFO
# logging.level.org.springframework.web=DEBUG
# logging.level.org.springframework.context.annotation=TRACE

# Maximum time static resources should be cached
spring.web.resources.cache.cachecontrol.max-age=12h


# Keycloak config
keycloak.realm=spring-petclinic
keycloak.auth-server-url=http://localhost:8081
keycloak.resource=spring-petclinic
keycloak.credentials.secret=6HGIBPgV4yCtIvNDJFaWx110ddNNZwEg
keycloak.ssl-required=none
keycloak.public-client=false
keycloak.principal-attribute=preferred_username
keycloak.bearer-only=false
logging.level.org.springframework.security=DEBUG
logging.level.org.keycloak=DEBUG

# OAuth2 settings for Spring Security (OpenID Connect)
spring.security.oauth2.client.provider.keycloak.issuer-uri=http://localhost:8081/realms/spring-petclinic
spring.security.oauth2.client.registration.keycloak.client-id=spring-petclinic
spring.security.oauth2.client.registration.keycloak.client-secret=6HGIBPgV4yCtIvNDJFaWx110ddNNZwEg
spring.security.oauth2.client.registration.keycloak.scope=openid,profile,email
spring.security.oauth2.client.registration.keycloak.authorization-grant-type=authorization_code
</code></pre>
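<p>With the OAuth2 settings above, Spring Security discovers the Keycloak endpoints at startup by appending the standard well-known path to the <code>issuer-uri</code>. A minimal sketch of that derivation, assuming the realm configured above:</p>

```python
# Sketch: derive the OIDC discovery URL that Spring Security fetches
# from the configured issuer-uri. Values mirror the properties above.
issuer_uri = "http://localhost:8081/realms/spring-petclinic"

# Spring Security appends the standard well-known suffix at startup.
discovery_url = issuer_uri.rstrip("/") + "/.well-known/openid-configuration"

print(discovery_url)
```

<p>Fetching this URL with <code>curl</code> is a quick way to confirm Keycloak is reachable before starting the application.</p>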
<p><strong>Add Keycloak Dependencies to</strong> <code>pom.xml</code>:</p>
<ul>
<li><p>Open the <code>pom.xml</code> file and add the following dependencies:</p>
<pre><code class="lang-plaintext">&lt;dependencies&gt;
  &lt;!-- Keycloak Integration --&gt;
  &lt;dependency&gt;
    &lt;groupId&gt;org.keycloak&lt;/groupId&gt;
    &lt;artifactId&gt;keycloak-spring-boot-starter&lt;/artifactId&gt;
    &lt;version&gt;21.1.1&lt;/version&gt;
  &lt;/dependency&gt;

  &lt;!-- Spring Security --&gt;
  &lt;dependency&gt;
    &lt;groupId&gt;org.springframework.boot&lt;/groupId&gt;
    &lt;artifactId&gt;spring-boot-starter-security&lt;/artifactId&gt;
  &lt;/dependency&gt;

  &lt;dependency&gt;
    &lt;groupId&gt;org.springframework.security&lt;/groupId&gt;
    &lt;artifactId&gt;spring-security-config&lt;/artifactId&gt;
  &lt;/dependency&gt;

  &lt;dependency&gt;
    &lt;groupId&gt;org.springframework.security&lt;/groupId&gt;
    &lt;artifactId&gt;spring-security-web&lt;/artifactId&gt;
  &lt;/dependency&gt;
&lt;/dependencies&gt;
</code></pre>
</li>
</ul>
<hr />
<h3 id="heading-add-keycloak-configuration-to-applicationyml"><strong>Add Keycloak Configuration to</strong> <code>application.yml</code></h3>
<p>Create or update the <code>application.yml</code> file to include Keycloak integration settings:</p>
<pre><code class="lang-plaintext">keycloak:
  realm: spring-petclinic
  auth-server-url: http://localhost:8080/auth
  ssl-required: external
  resource: spring-petclinic
  credentials:
    secret: YOUR_CLIENT_SECRET_HERE
  principal-attribute: preferred_username
  use-resource-role-mappings: true

spring:
  security:
    oauth2:
      client:
        registration:
          keycloak:
            client-id: spring-petclinic
            client-secret: YOUR_CLIENT_SECRET_HERE
            authorization-grant-type: authorization_code
            scope: openid, profile, email
        provider:
          keycloak:
            issuer-uri: http://localhost:8080/auth/realms/spring-petclinic
</code></pre>
<hr />
<h3 id="heading-run-the-application"><strong>Run the Application</strong></h3>
<ol>
<li><p>Run the Spring PetClinic application:</p>
<pre><code class="lang-plaintext"> ./mvnw spring-boot:run
</code></pre>
</li>
<li><p>Open the application in a browser at <a target="_blank" href="http://localhost:8080"><code>http://localhost:8080</code></a>.</p>
</li>
<li><p>Attempt to access a protected route, such as <code>/vets</code> or <code>/owners</code>. You will be redirected to the Keycloak login page.</p>
</li>
<li><p>Log in with the corresponding Keycloak user, and you will be able to access the route based on the role assigned.</p>
</li>
</ol>
<hr />
<h3 id="heading-test-token-retrieval"><strong>Test Token Retrieval</strong></h3>
<p>Access the <code>/token</code> endpoint to retrieve the Keycloak token:</p>
<pre><code class="lang-plaintext">curl http://localhost:8080/token
</code></pre>
<p>If authenticated, the token should be returned in the response.</p>
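<p>The access token Keycloak issues is a JWT: three base64url-encoded segments joined by dots, whose middle segment holds the claims. The claims can be inspected locally without verifying the signature. A minimal sketch (the sample token below is fabricated for illustration, not a real Keycloak token):</p>

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode the (unverified) payload segment of a JWT.

    This only inspects the claims; it does NOT verify the signature.
    """
    payload_b64 = token.split(".")[1]
    # base64url decoding requires padding to a multiple of 4
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Illustrative token built from a sample payload.
sample_payload = base64.urlsafe_b64encode(json.dumps({
    "preferred_username": "vet1",
    "iss": "http://localhost:8081/realms/spring-petclinic",
}).encode()).rstrip(b"=").decode()
sample_token = f"header.{sample_payload}.signature"

claims = decode_jwt_payload(sample_token)
print(claims["preferred_username"])  # vet1
```

<p>Running the real token from <code>/token</code> through this decoder shows the <code>preferred_username</code> attribute that the Keycloak configuration maps to the principal.</p>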
<h3 id="heading-test-the-spring-petclinic-application-with-keycloak">Test the Spring Petclinic Application with Keycloak</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727399371800/7e4682e2-bba3-4fa6-9b76-24dceb44970e.png" alt class="image--center mx-auto" /></p>
<p>Login Successful with Keycloak OAuth2</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727399263855/9a1f7f54-5c1c-4e73-ad9d-7dfa6353990b.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727399315188/6f9c5970-f07d-4107-8557-f62850a02f11.png" alt class="image--center mx-auto" /></p>
<p>Verify token</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727399483900/bdb0e449-d168-4c29-8744-b9c7f58dde03.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-conclusion-1"><strong>Conclusion</strong></h3>
<p>Integrating <strong>Keycloak</strong> with the <strong>Spring PetClinic</strong> application successfully addressed the need for secure, role-based access control and centralized identity management. By implementing token-based authentication with role mapping, we ensured that sensitive veterinary records are protected, while allowing different user groups (vets, owners, admins) to access only the data they are authorized to manage.</p>
<p>This solution provides the clinic with a scalable, secure, and centralized approach to identity and access management, supporting its long-term growth and security needs.</p>
<p>For further details on integrating Keycloak with Spring PetClinic, visit <a target="_blank" href="https://www.praful.cloud"><strong>praful.cloud</strong></a>.</p>
]]></content:encoded></item><item><title><![CDATA[AWS ECS Project 🌩️]]></title><description><![CDATA[Overview
This document provides a comprehensive guide for deploying a Docker application to AWS ECS (Elastic Container Service) and ECR (Elastic Container Registry) using Jenkins. The deployment process involves building a Docker image, pushing it to...]]></description><link>https://praful.cloud/aws-ecs-project</link><guid isPermaLink="true">https://praful.cloud/aws-ecs-project</guid><category><![CDATA[AWS]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Jenkins]]></category><category><![CDATA[#AWS #ECS #Jenkins #CI/CD #Docker #CloudWatch #DevOps #LoadTesting #AmazonSNS #CloudComputing #DockerContainers #CloudInfrastructure #WebAppDeployment #Monitoring #CloudSecurity #Automation]]></category><dc:creator><![CDATA[Praful Patel]]></dc:creator><pubDate>Thu, 08 Aug 2024 03:30:13 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1723087619241/aa97c5e7-d8ad-407c-9055-33fdc098733c.gif" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-overview">Overview</h3>
<p>This document provides a comprehensive guide for deploying a Docker application to AWS ECS (Elastic Container Service) and ECR (Elastic Container Registry) using Jenkins. The deployment process involves building a Docker image, pushing it to ECR, updating the ECS task definition, and deploying the updated task definition to an ECS service.</p>
<h3 id="heading-project-description-deploying-a-simple-html-web-application-on-aws-ecs-using-cicd-and-load-testing">Project Description: Deploying a Simple HTML Web Application on AWS ECS Using CI/CD and Load Testing</h3>
<h4 id="heading-introduction">Introduction</h4>
<p>In this project, we explore the end-to-end process of deploying a simple HTML web application using Amazon Web Services (AWS) Elastic Container Service (ECS). The deployment is facilitated by a Continuous Integration and Continuous Deployment (CI/CD) pipeline, and the application’s performance is validated through load testing. This detailed guide covers everything from setting up the necessary infrastructure to monitoring application performance, offering insights into best practices for modern cloud-native application deployment and management.</p>
<h4 id="heading-objectives">Objectives</h4>
<ol>
<li><p><strong>Set Up Jenkins and CI/CD Pipeline</strong>:</p>
<ul>
<li><p>Configure Jenkins on AWS EC2.</p>
</li>
<li><p>Set up Jenkins plugins for Docker, Amazon ECR, and pipeline management.</p>
</li>
<li><p>Automate the build, test, and deployment process using Jenkins.</p>
</li>
</ul>
</li>
<li><p><strong>Dockerize the Application</strong>:</p>
<ul>
<li><p>Create a Dockerfile for the HTML web application.</p>
</li>
<li><p>Build and push Docker images to Amazon ECR.</p>
</li>
</ul>
</li>
<li><p><strong>Deploy on AWS ECS</strong>:</p>
<ul>
<li><p>Configure the necessary AWS infrastructure (VPC, subnets, security groups).</p>
</li>
<li><p>Deploy the Dockerized application to ECS with appropriate task definitions and service configurations.</p>
</li>
</ul>
</li>
<li><p><strong>Load Testing and Monitoring</strong>:</p>
<ul>
<li><p>Perform load testing using tools like Apache Benchmark (ab) and wrk.</p>
</li>
<li><p>Monitor application performance using Amazon CloudWatch.</p>
</li>
<li><p>Set up CloudWatch alarms to alert on performance issues.</p>
</li>
</ul>
</li>
</ol>
<h4 id="heading-key-steps">Key Steps</h4>
<ol>
<li><p><strong>Jenkins Setup and CI/CD Pipeline</strong>:</p>
<ul>
<li><p>Installation and configuration of Jenkins.</p>
</li>
<li><p>Setting up Jenkins plugins for a seamless CI/CD process.</p>
</li>
<li><p>Writing and configuring pipeline scripts for automated deployment.</p>
</li>
</ul>
</li>
<li><p><strong>AWS Infrastructure Configuration</strong>:</p>
<ul>
<li><p>Setting up VPC, subnets, and security groups to ensure proper network configuration for ECS tasks.</p>
</li>
<li><p>Creating and attaching IAM roles with necessary permissions for ECS services.</p>
</li>
</ul>
</li>
<li><p><strong>Dockerization and Deployment</strong>:</p>
<ul>
<li><p>Writing a Dockerfile for the HTML web application.</p>
</li>
<li><p>Building Docker images and pushing them to Amazon ECR.</p>
</li>
<li><p>Configuring and deploying ECS tasks and services.</p>
</li>
</ul>
</li>
<li><p><strong>Performance Testing and Monitoring</strong>:</p>
<ul>
<li><p>Conducting load testing to evaluate application performance under stress.</p>
</li>
<li><p>Using Amazon CloudWatch for real-time monitoring and setting up alerts for proactive issue resolution.</p>
</li>
</ul>
</li>
</ol>
<h4 id="heading-outcomes">Outcomes</h4>
<ol>
<li><p><strong>Automated Deployment</strong>:</p>
<ul>
<li><p>Successful setup of a CI/CD pipeline that automates the build and deployment process.</p>
</li>
<li><p>Reduced manual intervention and errors, speeding up the deployment cycle.</p>
</li>
</ul>
</li>
<li><p><strong>Scalable and Monitored Application</strong>:</p>
<ul>
<li><p>The HTML web application is containerized and deployed on a scalable ECS cluster.</p>
</li>
<li><p>Continuous performance monitoring ensures the application remains reliable and performant under load.</p>
</li>
</ul>
</li>
<li><p><strong>Insights from Load Testing</strong>:</p>
<ul>
<li><p>Valuable data on how the application handles increased traffic.</p>
</li>
<li><p>Identified and resolved potential performance bottlenecks.</p>
</li>
</ul>
</li>
</ol>
<h4 id="heading-conclusion">Conclusion</h4>
<p>This project provides a comprehensive guide for deploying a simple HTML web application on AWS ECS using a CI/CD pipeline. From initial setup to performance testing and monitoring, each step is detailed to ensure a thorough understanding of the deployment process. By following this guide, you will gain hands-on experience in cloud-native application deployment, continuous integration, continuous deployment, and performance monitoring, equipping you with the skills necessary for managing modern cloud-based applications effectively.</p>
<p>GitHub Project Repo: <a target="_blank" href="https://github.com/prafulpatel16/ecs-demo.git">https://github.com/prafulpatel16/ecs-demo.git</a></p>
<h3 id="heading-prerequisites">Prerequisites</h3>
<ul>
<li><p><strong>AWS Account</strong>: Ensure you have an active AWS account.</p></li>
<li><p><strong>Docker</strong>: Install Docker on your local machine.</p></li>
<li><p><strong>AWS CLI</strong>: Install and configure the AWS CLI.</p></li>
<li><p><strong>AWS IAM Role</strong>: Create an IAM role with permissions for ECS, ECR, and related services.</p></li>
<li><p><strong>Jenkins</strong>: Install Jenkins on your local machine or use a Jenkins server.</p></li>
<li><p><strong>Git Repository</strong>: Set up a Git repository (e.g., GitHub, GitLab).</p></li>
</ul>
<p><strong>Create AWS IAM Role</strong></p>
<p>Create an ECS task execution role named <code>ecsTaskExecutionRole</code>.</p>
<p>Attach a JSON policy equivalent to <code>EC2ContainerRegistryReadOnly</code>:</p>
<pre><code class="lang-plaintext">
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ecr:BatchCheckLayerAvailability",
                "ecr:BatchGetImage",
                "ecr:DescribeRepositories",
                "ecr:GetDownloadUrlForLayer",
                "ecr:ListImages",
                "ecr:DescribeImages",
                "ecr:GetRepositoryPolicy",
                "ecr:DescribeImageScanFindings",
                "ecr:ListTagsForResource",
                "ecr:DescribeRegistry",
                "ecr:GetAuthorizationToken"
            ],
            "Resource": "*"
        }
    ]
}
</code></pre>
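<p>The policy above grants only describe/get/list ECR actions plus <code>GetAuthorizationToken</code>, so the task role can pull images but never push or delete them. A small illustrative check of that read-only property (the write-action set here is a representative subset, not an exhaustive list):</p>

```python
# Action list copied from the policy above.
actions = [
    "ecr:BatchCheckLayerAvailability", "ecr:BatchGetImage",
    "ecr:DescribeRepositories", "ecr:GetDownloadUrlForLayer",
    "ecr:ListImages", "ecr:DescribeImages", "ecr:GetRepositoryPolicy",
    "ecr:DescribeImageScanFindings", "ecr:ListTagsForResource",
    "ecr:DescribeRegistry", "ecr:GetAuthorizationToken",
]

# Representative ECR write actions that a read-only role must NOT have.
WRITE_ACTIONS = {
    "ecr:PutImage", "ecr:InitiateLayerUpload", "ecr:UploadLayerPart",
    "ecr:CompleteLayerUpload", "ecr:BatchDeleteImage",
}

assert all(a.startswith("ecr:") for a in actions)   # stays in ECR namespace
assert not WRITE_ACTIONS.intersection(actions)      # no push/delete rights
print("policy is ECR read-only")
```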
<p><img src="https://files.gitbook.com/v0/b/gitbook-x-prod.appspot.com/o/spaces%2Fr5HNoMjtIvwcDjeHVkZS%2Fuploads%2FQ4OsKYPCeRrLnnrQAp5x%2Fimage.png?alt=media&amp;token=14ad6b6d-d787-4ef3-96b2-08cab04389dc" alt /></p>
<h3 id="heading-prerequisites-1">Prerequisites</h3>
<ol>
<li><p><strong>AWS Account</strong>: Ensure you have an active AWS account.</p>
</li>
<li><p><strong>Docker</strong>: Install Docker on your local machine.</p>
</li>
<li><p><strong>AWS CLI</strong>: Install and configure the AWS CLI.</p>
</li>
<li><p><strong>Code Repository</strong>: Set up a Git repository (e.g., GitHub, GitLab).</p>
</li>
<li><p><strong>CI/CD Tool</strong>: Use a CI/CD tool like GitHub Actions, GitLab CI, or AWS CodePipeline.</p>
</li>
</ol>
<h3 id="heading-tools-and-aws-services-used-for-the-project">Tools and AWS Services Used for the Project</h3>
<h4 id="heading-tools">Tools</h4>
<ol>
<li><p><strong>Jenkins</strong></p>
<ul>
<li><p><strong>Description</strong>: An open-source automation server used to automate the building, testing, and deployment of applications.</p>
</li>
<li><p><strong>Usage</strong>: Set up CI/CD pipelines to automate the build and deployment process.</p>
</li>
</ul>
</li>
<li><p><strong>Docker</strong></p>
<ul>
<li><p><strong>Description</strong>: A platform for developing, shipping, and running applications in containers.</p>
</li>
<li><p><strong>Usage</strong>: Containerize the HTML web application.</p>
</li>
</ul>
</li>
<li><p><strong>Apache Benchmark (ab)</strong></p>
<ul>
<li><p><strong>Description</strong>: A tool for benchmarking the performance of HTTP web servers.</p>
</li>
<li><p><strong>Usage</strong>: Perform load testing on the deployed application.</p>
</li>
</ul>
</li>
</ol>
<h4 id="heading-aws-services">AWS Services</h4>
<ol>
<li><p><strong>Amazon Elastic Container Service (ECS)</strong></p>
<ul>
<li><p><strong>Description</strong>: A fully managed container orchestration service.</p>
</li>
<li><p><strong>Usage</strong>: Deploy and manage the containerized HTML web application.</p>
</li>
</ul>
</li>
<li><p><strong>Amazon Elastic Container Registry (ECR)</strong></p>
<ul>
<li><p><strong>Description</strong>: A fully managed Docker container registry that makes it easy to store, manage, and deploy Docker container images.</p>
</li>
<li><p><strong>Usage</strong>: Store and manage Docker images for deployment on ECS.</p>
</li>
</ul>
</li>
<li><p><strong>Amazon Elastic Compute Cloud (EC2)</strong></p>
<ul>
<li><p><strong>Description</strong>: A web service that provides resizable compute capacity in the cloud.</p>
</li>
<li><p><strong>Usage</strong>: Host Jenkins for setting up the CI/CD pipeline.</p>
</li>
</ul>
</li>
<li><p><strong>Amazon Virtual Private Cloud (VPC)</strong></p>
<ul>
<li><p><strong>Description</strong>: A service that lets you launch AWS resources in a logically isolated virtual network.</p>
</li>
<li><p><strong>Usage</strong>: Create a secure and isolated network environment for ECS tasks and services.</p>
</li>
</ul>
</li>
<li><p><strong>Amazon CloudWatch</strong></p>
<ul>
<li><p><strong>Description</strong>: A monitoring and observability service built for DevOps engineers, developers, site reliability engineers (SREs), and IT managers.</p>
</li>
<li><p><strong>Usage</strong>: Monitor application performance, set up alarms, and gain insights into resource utilization.</p>
</li>
</ul>
</li>
<li><p><strong>AWS Identity and Access Management (IAM)</strong></p>
<ul>
<li><p><strong>Description</strong>: A web service that helps you securely control access to AWS services and resources.</p>
</li>
<li><p><strong>Usage</strong>: Manage permissions and roles for ECS tasks and services, ensuring secure access control.</p>
</li>
</ul>
</li>
<li><p><strong>AWS Secrets Manager</strong></p>
<ul>
<li><p><strong>Description</strong>: A service to help you protect access to your applications, services, and IT resources without the upfront cost and maintenance of hardware security modules (HSMs).</p>
</li>
<li><p><strong>Usage</strong>: Manage and retrieve secrets such as database credentials securely.</p>
</li>
</ul>
</li>
<li><p><strong>Amazon Route 53</strong></p>
<ul>
<li><p><strong>Description</strong>: A scalable and highly available Domain Name System (DNS) web service.</p>
</li>
<li><p><strong>Usage</strong>: Route traffic to the application deployed on ECS.</p>
</li>
</ul>
</li>
<li><p><strong>Amazon Simple Notification Service (SNS)</strong></p>
<ul>
<li><p><strong>Description</strong>: A fully managed messaging service for both application-to-application (A2A) and application-to-person (A2P) communication.</p>
</li>
<li><p><strong>Usage</strong>: Send email notifications about the status of the ECS tasks and alarms set in CloudWatch.</p>
</li>
</ul>
</li>
</ol>
<h4 id="heading-additional-features">Additional Features</h4>
<ol>
<li><p><strong>Email Notifications Using SNS</strong></p>
<ul>
<li><p><strong>Description</strong>: Set up Amazon SNS to send email notifications for important events or alarms related to the ECS deployment.</p>
</li>
<li><p><strong>Usage</strong>: Create an SNS topic, subscribe an email endpoint to the topic, and configure CloudWatch to send notifications to the SNS topic.</p>
</li>
</ul>
</li>
</ol>
<h3 id="heading-implementation-steps-for-sns-and-email-notifications">Implementation Steps for SNS and Email Notifications</h3>
<ol>
<li><p><strong>Set Up SNS Topic and Email Subscription</strong></p>
<ul>
<li><p><strong>Create SNS Topic</strong>:</p>
<pre><code class="lang-bash">  aws sns create-topic --name ecs-deployment-notifications
</code></pre>
</li>
<li><p><strong>Subscribe Email to SNS Topic</strong>:</p>
<pre><code class="lang-bash">  aws sns subscribe --topic-arn arn:aws:sns:us-east-1:123456789012:ecs-deployment-notifications --protocol email --notification-endpoint your-email@example.com
</code></pre>
</li>
</ul>
</li>
<li><p><strong>Configure CloudWatch Alarms to Send Notifications</strong></p>
<ul>
<li><p><strong>Create CloudWatch Alarm</strong>:</p>
<pre><code class="lang-bash">  aws cloudwatch put-metric-alarm --alarm-name <span class="hljs-string">"HighCPUUtilization"</span> --metric-name <span class="hljs-string">"CPUUtilization"</span> --namespace <span class="hljs-string">"AWS/ECS"</span> --statistic <span class="hljs-string">"Average"</span> --period 300 --threshold 80 --comparison-operator <span class="hljs-string">"GreaterThanThreshold"</span> --dimensions Name=ClusterName,Value=your-cluster-name --evaluation-periods 2 --alarm-actions arn:aws:sns:us-east-1:123456789012:ecs-deployment-notifications
</code></pre>
</li>
</ul>
</li>
</ol>
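<p>The alarm above transitions to ALARM only when the average CPU exceeds the 80% threshold for two consecutive 5-minute evaluation periods. A rough sketch of that evaluation logic (illustrative only, not the actual CloudWatch implementation):</p>

```python
def alarm_state(datapoints, threshold=80.0, evaluation_periods=2):
    """Return "ALARM" when the last `evaluation_periods` datapoints all
    breach the threshold, mirroring GreaterThanThreshold semantics."""
    if evaluation_periods > len(datapoints):
        return "INSUFFICIENT_DATA"
    recent = datapoints[-evaluation_periods:]
    return "ALARM" if all(v > threshold for v in recent) else "OK"

# Average CPUUtilization per 5-minute period:
print(alarm_state([40.0, 85.0]))   # only one breaching period -> OK
print(alarm_state([85.0, 91.2]))   # two consecutive breaches -> ALARM
```

<p>This is why a brief CPU spike does not page you: both evaluation periods must breach before the SNS notification fires.</p>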
<p>By integrating Amazon SNS and email notifications, you ensure that you receive timely updates on the status of your ECS deployments and any critical alerts, thereby enhancing the observability and reliability of your deployment process.</p>
<h3 id="heading-deployment-steps">Deployment Steps:</h3>
<p>1. <strong>Prepare the HTML Web App</strong></p>
<p>2. <strong>Dockerize the Application</strong></p>
<p>3. <strong>Create a Docker Repository on AWS ECR</strong></p>
<p>4. <strong>Push the Docker Image to ECR</strong></p>
<p>5. <strong>Create ECS Cluster and Task Definition</strong></p>
<p>6. <strong>Create ECS Service</strong></p>
<p>7. <strong>Set Up CI/CD Pipeline</strong></p>
<p>8. <strong>Monitor with CloudWatch</strong></p>
<p>9. <strong>Test and Validate</strong></p>
<h3 id="heading-implementation-in-action">Implementation in Action</h3>
<p>1. <strong>Prepare the HTML Web App</strong></p>
<p>2. <strong>Dockerize the Application</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722802900492/3af835c3-4499-4564-a49c-b24151aa14c4.png" alt class="image--center mx-auto" /></p>
<pre><code class="lang-plaintext"># Use an official nginx image as the base image
FROM nginx:alpine

# Copy the HTML file to the nginx directory
COPY index.html /usr/share/nginx/html

# Expose port 80
EXPOSE 80

# Start nginx when the container launches
CMD ["nginx", "-g", "daemon off;"]
</code></pre>
<p><strong>Build Docker Image</strong>: Build the Docker image locally:</p>
<pre><code class="lang-plaintext">docker build -t simple-html-web-app .
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722803465339/a96b7ce3-f0c2-4776-a12b-f210d6642e00.png" alt class="image--center mx-auto" /></p>
<p>Docker image build successful. List the images:</p>
<pre><code class="lang-plaintext">docker images
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722803544859/7e05c66b-a657-4edd-be65-9fe3216ab19a.png" alt class="image--center mx-auto" /></p>
<p>Let's run the Docker image locally.</p>
<p>Start a container from the <code>simple-html-web-app</code> image:</p>
<pre><code class="lang-plaintext">docker run -d -p 8080:80 simple-html-web-app:latest
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722804155466/d0713eae-8869-43d3-8d33-50a4fc7c48ff.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722804450532/2bdf8007-1aa7-42b7-8276-2f3197cdae85.png" alt class="image--center mx-auto" /></p>
<p>Verify the web app in the browser:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722804377251/50831df6-7213-4ea7-acdf-29ea09ef5521.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-migrate-the-docker-application-to-aws-ecs-container-platform">Migrate the Docker application to AWS ECS Container platform</h3>
<p>3. <strong>Create a Docker Repository on AWS ECR</strong></p>
<p><strong>Create ECR Repository</strong>:</p>
<pre><code class="lang-plaintext">aws ecr create-repository --repository-name simple-html-web-app
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722805004838/98f96e61-547e-41f9-8eaa-73ce05e835b4.png" alt class="image--center mx-auto" /></p>
<p><strong>Login to ECR</strong>:</p>
<pre><code class="lang-plaintext">aws ecr get-login-password --region &lt;your-region&gt; | docker login --username AWS --password-stdin &lt;your-aws-account-id&gt;.dkr.ecr.&lt;your-region&gt;.amazonaws.com
</code></pre>
<p>4. <strong>Push the Docker Image to ECR</strong></p>
<pre><code class="lang-plaintext">docker tag simple-html-web-app:latest &lt;your-aws-account-id&gt;.dkr.ecr.&lt;your-region&gt;.amazonaws.com/simple-html-web-app:latest
</code></pre>
<pre><code class="lang-plaintext">docker push &lt;your-aws-account-id&gt;.dkr.ecr.&lt;your-region&gt;.amazonaws.com/simple-html-web-app:latest
</code></pre>
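<p>The tag and push commands above all follow the same ECR registry naming scheme. A small helper that composes the fully qualified image URI (the account ID and region here are placeholders):</p>

```python
def ecr_image_uri(account_id: str, region: str, repo: str, tag: str = "latest") -> str:
    """Compose a fully qualified Amazon ECR image URI."""
    return f"{account_id}.dkr.ecr.{region}.amazonaws.com/{repo}:{tag}"

uri = ecr_image_uri("123456789012", "us-east-1", "simple-html-web-app")
print(uri)  # 123456789012.dkr.ecr.us-east-1.amazonaws.com/simple-html-web-app:latest
```

<p>The same URI appears again later in the ECS task definition's <code>image</code> field, so keeping it in one place avoids copy-paste drift between pipeline stages.</p>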
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722805230900/dfe4ea54-4092-4435-b156-03b4dde527d6.png" alt class="image--center mx-auto" /></p>
<p>5. <strong>Create ECS Cluster and Task Definition</strong></p>
<p><strong>Create ECS Cluster</strong>:</p>
<pre><code class="lang-plaintext">aws ecs create-cluster --cluster-name simple-html-web-app-cluster
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722805352807/7c7d37c8-77c5-4ea4-bb31-e05062ad340b.png" alt class="image--center mx-auto" /></p>
<p><strong>Create Task Definition</strong>:</p>
<p><code>taskdef.json</code>:</p>
<pre><code class="lang-plaintext">{
  "family": "simple-html-web-app-task",
  "networkMode": "awsvpc",
  "containerDefinitions": [
    {
      "name": "simple-html-web-app-container",
      "image": "&lt;your-aws-account-id&gt;.dkr.ecr.&lt;your-region&gt;.amazonaws.com/simple-html-web-app:latest",
      "essential": true,
      "portMappings": [
        {
          "containerPort": 80,
          "hostPort": 80
        }
      ],
      "memory": 512,
      "cpu": 256
    }
  ],
  "requiresCompatibilities": [
    "FARGATE"
  ],
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "arn:aws:iam::&lt;your-aws-account-id&gt;:role/ecsTaskExecutionRole"
}
</code></pre>
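<p>Fargate accepts only specific task-level CPU/memory combinations (256 CPU units pairs with 512 MiB, 1 GiB, or 2 GiB, and so on), and registration fails for an invalid pair. A quick pre-registration sanity check — the table below is a subset of the common tiers, not the full AWS list:</p>

```python
# Valid memory (MiB) values for common Fargate CPU tiers (subset).
FARGATE_COMBOS = {
    "256": ["512", "1024", "2048"],
    "512": [str(m) for m in range(1024, 4097, 1024)],
    "1024": [str(m) for m in range(2048, 8193, 1024)],
}

def validate_fargate_size(taskdef: dict) -> bool:
    """Check that the task-level cpu/memory pair is a valid Fargate combo."""
    return taskdef["memory"] in FARGATE_COMBOS.get(taskdef["cpu"], [])

taskdef = {"cpu": "256", "memory": "512"}  # matches the task definition above
print(validate_fargate_size(taskdef))  # True
```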
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722805565949/022fa0c6-f6d7-42c7-acbf-b14965722650.png" alt class="image--center mx-auto" /></p>
<p><strong>Register Task Definition</strong>:</p>
<pre><code class="lang-plaintext">aws ecs register-task-definition --cli-input-json file://taskdef.json
</code></pre>
<p>6. <strong>Create ECS Service</strong></p>
<p><strong>Create Service</strong>:</p>
<p>From the terminal using the AWS CLI:</p>
<pre><code class="lang-plaintext">aws ecs create-service \
  --cluster simple-html-web-app-cluster \
  --service-name simple-html-web-app-service \
  --task-definition simple-html-web-app-task \
  --desired-count 1 \
  --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[&lt;your-subnet-id&gt;],securityGroups=[&lt;your-security-group-id&gt;],assignPublicIp=ENABLED}"
</code></pre>
<p>From the AWS Console:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722805818305/4961a6d5-47e7-44f7-81fe-ffde1ff7a4bc.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722805875209/ce75ab94-9b8c-4701-89d1-fa7f89a04382.png" alt class="image--center mx-auto" /></p>
<p>Networking: it will use the default VPC and its default security group for now.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722805970467/5f044018-2632-44d8-9bdc-9c83478e4745.png" alt class="image--center mx-auto" /></p>
<p>Create new Application Load Balancer</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722806703739/92239e85-ea55-41e5-a695-3b4cb2d04117.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722806734900/13c86a0c-1306-4e99-ac16-8c37d97df3b8.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722806784259/0c5ab991-2eb2-4a96-9154-0493056fa65f.png" alt class="image--center mx-auto" /></p>
<p>New ECS Service created</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722806862083/f0449b67-e019-4406-85c0-d000e40623f3.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-set-up-jenkins-cicd-pipeline">Set Up Jenkins CI/CD Pipeline</h2>
<p><strong>Jenkins installation</strong></p>
<p>Create an EC2 instance for the Jenkins server and install Jenkins:</p>
<p><a target="_blank" href="https://www.jenkins.io/doc/tutorials/tutorial-for-installing-jenkins-on-AWS/">https://www.jenkins.io/doc/tutorials/tutorial-for-installing-jenkins-on-AWS/</a></p>
<p><strong>Part 1: Set Up Jenkins on AWS</strong></p>
<h4 id="heading-step-1-launch-an-ec2-instance-for-jenkins">Step 1: Launch an EC2 Instance for Jenkins</h4>
<ol>
<li><p><strong>Sign in to AWS Management Console</strong>.</p>
</li>
<li><p><strong>Launch EC2 Instance</strong>:</p>
<ul>
<li><p>Go to <strong>Services</strong> and select <strong>EC2</strong>.</p>
</li>
<li><p>Click on <strong>Launch Instance</strong>.</p>
</li>
</ul>
</li>
<li><p><strong>Choose AMI</strong>:</p>
<ul>
<li>Select an Amazon Linux 2 AMI (or Ubuntu, if preferred).</li>
</ul>
</li>
<li><p><strong>Choose Instance Type</strong>:</p>
<ul>
<li>Select <code>t2.medium</code> (or another type based on your needs).</li>
</ul>
</li>
<li><p><strong>Configure Instance</strong>:</p>
<ul>
<li><p>Configure VPC and Subnet.</p>
</li>
<li><p>Enable Auto-assign Public IP.</p>
</li>
</ul>
</li>
<li><p><strong>Add Storage</strong>:</p>
<ul>
<li>Configure storage as needed (8GB or more).</li>
</ul>
</li>
<li><p><strong>Add Tags</strong>:</p>
<ul>
<li>Add tags for easy identification (e.g., Key: <code>Name</code>, Value: <code>Jenkins</code>).</li>
</ul>
</li>
<li><p><strong>Configure Security Group</strong>:</p>
<ul>
<li><p>Create a new security group with the following inbound rules:</p>
<ul>
<li><p>HTTP: Port 80, Source: 0.0.0.0/0</p>
</li>
<li><p>Custom TCP Rule: Port 8080, Source: 0.0.0.0/0</p>
</li>
<li><p>SSH: Port 22, Source: 0.0.0.0/0</p>
</li>
</ul>
</li>
</ul>
</li>
<li><p><strong>Review and Launch</strong>:</p>
<ul>
<li><p>Review your settings and launch the instance.</p>
</li>
<li><p>Select or create a key pair for SSH access.</p>
</li>
</ul>
</li>
</ol>
<ul>
<li><p><strong>Connect to the EC2 Instance</strong>:</p>
<ul>
<li>Use SSH to connect to your EC2 instance.</li>
</ul>
</li>
</ul>
<pre><code class="lang-plaintext">ssh -i "your-key-pair.pem" ec2-user@your-ec2-public-dns
</code></pre>
<ul>
<li><p><strong>Install Java</strong>:</p>
<pre><code class="lang-plaintext">sudo yum update -y
  sudo amazon-linux-extras install java-openjdk11 -y
</code></pre>
</li>
<li><p><strong>Add Jenkins Repository and Install Jenkins</strong>:</p>
<pre><code class="lang-plaintext">sudo wget -O /etc/yum.repos.d/jenkins.repo \
      https://pkg.jenkins.io/redhat-stable/jenkins.repo
  sudo rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io.key
  sudo yum install jenkins -y
</code></pre>
</li>
<li><p><strong>Start Jenkins</strong>:</p>
<pre><code class="lang-plaintext">sudo systemctl start jenkins
  sudo systemctl enable jenkins
</code></pre>
</li>
<li><p><strong>Open Jenkins in Browser</strong>:</p>
<ul>
<li><p>Navigate to <a target="_blank" href="http://your-ec2-public-dns:8080"><code>http://your-ec2-public-dns:8080</code></a> in your browser.</p>
</li>
<li><p>Retrieve the initial admin password:</p>
</li>
</ul>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723045974964/c3958398-654c-41e8-a12e-42245110de13.png" alt class="image--center mx-auto" /></p>
</li>
<li><pre><code class="lang-plaintext">sudo cat /var/lib/jenkins/secrets/initialAdminPassword
</code></pre>
</li>
<li><p><strong>Unlock Jenkins</strong>:</p>
<ul>
<li><p>Paste the retrieved password to unlock Jenkins.</p>
</li>
<li><p>Install suggested plugins during setup.</p>
</li>
</ul>
</li>
</ul>
<p>Configure Jenkins Credentials for AWS integration</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723046468681/c96b7be8-b3fd-45bb-b698-9f3c172ecfa1.png" alt class="image--center mx-auto" /></p>
<p>Global</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723046544337/1b17eeae-250e-48d4-ad25-f7b8259519a5.png" alt class="image--center mx-auto" /></p>
<p>Add Credentials</p>
<p>Create a Jenkins user in AWS IAM with an access key ID and secret access key, then provide those credentials in Jenkins.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723046678444/6390b7be-2f63-4703-9af4-a96fb2918ec9.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723046742974/2f8d3300-df9d-46d3-806c-c849cbd873ad.png" alt class="image--center mx-auto" /></p>
<p><strong>Part 2: Install Required Jenkins Plugins</strong></p>
<ol>
<li><p><strong>Install Plugins</strong>:</p>
<ul>
<li><p>Go to <code>Manage Jenkins</code> &gt; <code>Manage Plugins</code> &gt; <code>Available</code>.</p>
</li>
<li><p>Search and install the following plugins:</p>
<ul>
<li><p>Docker Pipeline</p>
</li>
<li><p>Amazon ECR</p>
</li>
<li><p>Pipeline</p>
</li>
<li><p>Git</p>
</li>
</ul>
</li>
</ul>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723046241984/7abd0d9c-7f4b-48ed-afa2-756f17260a97.png" alt class="image--center mx-auto" /></p>
<p><strong>Part 3: Configure Jenkins Pipeline for ECS</strong></p>
<h4 id="heading-step-1-set-up-aws-credentials-in-jenkins">Step 1: Set Up AWS Credentials in Jenkins</h4>
<ol>
<li><p><strong>Create AWS IAM User for Jenkins</strong>:</p>
<ul>
<li><p>Go to <strong>IAM</strong> &gt; <strong>Users</strong> &gt; <strong>Add user</strong>.</p>
</li>
<li><p>Username: <code>jenkins-user</code>, Access type: <code>Programmatic access</code>.</p>
</li>
<li><p>Attach existing policies directly: <code>AmazonEC2ContainerRegistryFullAccess</code>, <code>AmazonECS_FullAccess</code>, <code>AmazonS3FullAccess</code>.</p>
</li>
<li><p>Download the Access key ID and Secret access key.</p>
</li>
</ul>
</li>
<li><p><strong>Configure Credentials in Jenkins</strong>:</p>
<ul>
<li><p>Go to <code>Manage Jenkins</code> &gt; <code>Manage Credentials</code>.</p>
</li>
<li><p>Under the appropriate domain (e.g., <code>(global)</code>), add a new <strong>AWS Credentials</strong>:</p>
<ul>
<li><p>Kind: AWS Credentials</p>
</li>
<li><p>Access Key ID and Secret Access Key from the IAM user.</p>
</li>
</ul>
</li>
</ul>
</li>
</ol>
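<p>The IAM setup above can also be scripted with the AWS CLI. A sketch, assuming it is run with admin credentials; the policy ARNs are the managed policies named in the steps:</p>
<pre><code class="lang-bash"># Create the IAM user Jenkins will authenticate as
aws iam create-user --user-name jenkins-user

# Attach the managed policies listed above
for policy in AmazonEC2ContainerRegistryFullAccess AmazonECS_FullAccess AmazonS3FullAccess; do
  aws iam attach-user-policy --user-name jenkins-user \
    --policy-arn "arn:aws:iam::aws:policy/${policy}"
done

# Generate the access key pair to paste into the Jenkins AWS credential
aws iam create-access-key --user-name jenkins-user
</code></pre>
<p>For production, prefer a tightly scoped custom policy over the FullAccess managed policies.</p>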
<h4 id="heading-step-2-create-jenkins-pipeline">Step 2: Create Jenkins Pipeline</h4>
<ol>
<li><p><strong>Create a New Pipeline Job</strong>:</p>
<ul>
<li><p>Go to <code>Jenkins</code> &gt; <code>New Item</code>.</p>
</li>
<li><p>Enter a name (e.g., <code>ECS-Deploy-Pipeline</code>), select <strong>Pipeline</strong>, and click <strong>OK</strong>.</p>
</li>
</ul>
</li>
<li><p><strong>Configure Pipeline Script</strong>:</p>
<ul>
<li><p>In the <strong>Pipeline</strong> section, choose <strong>Pipeline script</strong>.</p>
</li>
<li><p>Enter the following example script:</p>
</li>
</ul>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723046901057/8aeaf29c-a278-47c5-a0e4-fc14aaebc7f9.png" alt class="image--center mx-auto" /></p>
<p>Create a CI/CD Pipeline</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723047018821/bac88c1e-1122-44b4-bf20-c0388f07d3e1.png" alt class="image--center mx-auto" /></p>
<pre><code class="lang-groovy">pipeline {
    agent any

    parameters {
        string(name: 'IMAGE_TAG', defaultValue: 'latest', description: 'Docker image tag')
    }

    environment {
        AWS_ACCOUNT_ID = '913151559'
        AWS_REGION = 'us-east-1'
        ECR_REPO = 'simple-html-web-app'
        ECR_REPO_URI = '9313151559.dkr.ecr.us-east-1.amazonaws.com/simple-html-web-app'
        REGISTRY_CREDENTIAL = 'aws-credentials-id' // Ensure this matches the credentials ID in Jenkins
        IMAGE_TAG = "latest-${env.BUILD_NUMBER}" // Unique tag for each build
        ECS_CLUSTER = 'simple-html-web-app-cluster'
        ECS_SERVICE = 'ecs-demo-srv'
        TASK_DEFINITION_FAMILY = 'ecs-task-def'
    }

    stages {
        stage('Test Credentials') {
            steps {
                script {
                    withCredentials([[$class: 'AmazonWebServicesCredentialsBinding', credentialsId: "${REGISTRY_CREDENTIAL}"]]) {
                        sh 'aws sts get-caller-identity'
                    }
                }
            }
        }

        stage('Clone Git Repository') {
            steps {
                checkout([$class: 'GitSCM',
                    branches: [[name: '*/master']],
                    doGenerateSubmoduleConfigurations: false,
                    extensions: [],
                    submoduleCfg: [],
                    userRemoteConfigs: [[credentialsId: '', url: 'https://github.com/prafulpatel16/ecs-demo.git']]
                ])
            }
        }

        stage('Build and Push Docker Image') {
            steps {
                script {
                    // Build and tag Docker image with specified tag
                    sh "docker build -t ${ECR_REPO_URI}:${params.IMAGE_TAG} ."

                    // Login to AWS ECR
                    withCredentials([[$class: 'AmazonWebServicesCredentialsBinding', credentialsId: "${REGISTRY_CREDENTIAL}"]]) {
                        sh """
                        aws ecr get-login-password --region ${AWS_REGION} | docker login --username AWS --password-stdin ${ECR_REPO_URI}
                        """
                    }

                    // Push Docker image to ECR
                    sh "docker push ${ECR_REPO_URI}:${params.IMAGE_TAG}"
                }
            }
        }

        stage('Update ECS Task Definition') {
            steps {
                script {
                    // Load and update task definition JSON
                    def taskDefinition = readFile 'taskdef.json'
                    taskDefinition = taskDefinition.replace("REPLACE_WITH_IMAGE_TAG", "${ECR_REPO_URI}:${params.IMAGE_TAG}")

                    // Write updated task definition to file
                    writeFile file: 'taskdef.json', text: taskDefinition

                    // Register updated task definition and capture the revision number
                    withCredentials([[$class: 'AmazonWebServicesCredentialsBinding', credentialsId: "${REGISTRY_CREDENTIAL}"]]) {
                        def registerOutput = sh(script: "aws ecs register-task-definition --cli-input-json file://taskdef.json", returnStdout: true).trim()
                        echo "Register Output: ${registerOutput}"
                        def json = readJSON text: registerOutput
                        def taskDefinitionArn = json.taskDefinition.taskDefinitionArn
                        echo "Task Definition ARN: ${taskDefinitionArn}"
                        def taskDefinitionRevision = taskDefinitionArn.tokenize(':').last()  // Extract revision number
                        echo "Task Definition Revision: ${taskDefinitionRevision}"

                        // Save the new task definition revision for later use
                        env.TASK_DEFINITION_REVISION = taskDefinitionRevision
                    }
                }
            }
        }

        stage('Deploy ECS Service') {
            steps {
                script {
                    // Update ECS service with the new task definition revision
                    withCredentials([[$class: 'AmazonWebServicesCredentialsBinding', credentialsId: "${REGISTRY_CREDENTIAL}"]]) {
                        sh """
                        aws ecs update-service --cluster ${ECS_CLUSTER} --service ${ECS_SERVICE} --task-definition ${TASK_DEFINITION_FAMILY}:${env.TASK_DEFINITION_REVISION} --force-new-deployment
                        """
                    }
                }
            }
        }
    }
}
</code></pre>
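<p>The revision lookup in the "Update ECS Task Definition" stage simply takes the last colon-separated token of the task definition ARN. The same logic can be sanity-checked in plain shell (the ARN below is a made-up example):</p>
<pre><code class="lang-bash"># Extract the revision number from a task definition ARN
ARN="arn:aws:ecs:us-east-1:123456789012:task-definition/ecs-task-def:29"
REVISION="${ARN##*:}"   # strip everything through the last ':'
echo "$REVISION"        # prints 29
</code></pre>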
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723047128642/5079e81b-76dc-4f2c-aac2-ab58a066dda1.png" alt class="image--center mx-auto" /></p>
<p>CICD Pipeline is ready to build and deploy</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723047271561/c211a262-b63e-4da5-9bc3-d99e77515bb0.png" alt class="image--center mx-auto" /></p>
<p>Before deployment, verify the ECR repo and the ECS task definition</p>
<p>ECR</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723047455081/27e3e72d-cb52-46ff-bc65-54c52f9e0713.png" alt class="image--center mx-auto" /></p>
<p>ECS Task Definition</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723047495832/00dfe421-9a80-49b4-a9b5-5a8cd8fb3a18.png" alt class="image--center mx-auto" /></p>
<p>Let's build the code and deploy</p>
<p>Build Manual</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723047557235/fd06ff2d-ac34-41ab-8cfd-5b3624c14cee.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723047895895/f4a683d8-a969-40b8-a0d6-0ed33550a526.png" alt class="image--center mx-auto" /></p>
<p>Build and deploy successful</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723048270460/ba9f8595-1386-4885-8fcb-eb03daa1b640.png" alt class="image--center mx-auto" /></p>
<p>Access Web application</p>
<p>Hit the ALB url: <a target="_blank" href="http://ecs-alb-611614342.us-east-1.elb.amazonaws.com/">http://ecs-alb-611614342.us-east-1.elb.amazonaws.com/</a></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723048304001/7e568c83-a830-4862-938d-2f54443c6133.png" alt class="image--center mx-auto" /></p>
<p>Verify post deployment that the new task definition has been created</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723048390352/2851820e-497a-4793-95fd-fb74b9ff934b.png" alt class="image--center mx-auto" /></p>
<p>Let's change the code to verify that new changes are applied successfully through the CI/CD pipeline.</p>
<p>Observe that the web application displays the text "Demo ECS"; this text needs to be removed from the code, and the new code deployed through CI/CD.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723048528989/63d1fef8-0fd0-4616-8456-9a9b56ab1e81.png" alt class="image--center mx-auto" /></p>
<p>Go to local VS Code and remove the text</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723048596176/f08a0e39-26c0-4ec1-bae3-d6a939e0d002.png" alt class="image--center mx-auto" /></p>
<p>After removing the text</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723048636032/f5222a8a-bb74-4f08-a539-386781c24f37.png" alt class="image--center mx-auto" /></p>
<p>Commit the changes and push to the GitHub repo</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723048705946/97ad599c-6a5f-420e-b06e-d6af9cc68a41.png" alt class="image--center mx-auto" /></p>
<p>git push</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723048750854/3082490c-acb5-4e94-a1de-44c30cff05ea.png" alt class="image--center mx-auto" /></p>
<p>Expectation: New changes should be updated successfully on the web application</p>
<p>Looking at the ECS cluster, task definition 'ecs-task-def:28' is currently running the old web application; once the new change is deployed, a new revision 'ecs-task-def:29' should be created with the change.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723048991846/d3506e9f-9889-4990-919c-f8f91b830eec.png" alt class="image--center mx-auto" /></p>
<p>Now let's build the code and push the changes through Jenkins CICD</p>
<p>Build#18</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723049137737/257ca719-19d8-4af0-b375-9bb4b6c89442.png" alt class="image--center mx-auto" /></p>
<p>Build Successful</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723049194693/7612e7ea-b12d-42c5-8d2e-cbd546aa08c5.png" alt class="image--center mx-auto" /></p>
<p>new 'ecs-task-def:29' deploying</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723049247314/a3586024-812d-4d77-9dd0-a6f934773703.png" alt class="image--center mx-auto" /></p>
<p>deployed</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723049276388/ff9f77ce-59f3-4f69-b315-2996e4360b09.png" alt class="image--center mx-auto" /></p>
<p>Let's verify the web application has updated the changes successfully</p>
<p>removed text "demo-ecs"</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723049343365/430fd670-cce4-47e4-a27b-76e675bc4fff.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-monitoring-cloudwatch">Monitoring with CloudWatch</h3>
<p>Here's a comprehensive guide to performing a CPU load test on an ECS cluster web app and monitoring CPU utilization in CloudWatch using Apache Benchmark (ab) and wrk from an Amazon Linux 2 AMI:</p>
<h2 id="heading-step-1-set-up-amazon-linux-2-instance">Step 1: Set Up Amazon Linux 2 Instance</h2>
<ol>
<li><p><strong>Launch an Amazon Linux 2 Instance</strong>:</p>
<ul>
<li><p>Open the EC2 Dashboard in the AWS Management Console.</p>
</li>
<li><p>Launch a new instance and select Amazon Linux 2 AMI.</p>
</li>
<li><p>Choose an instance type (e.g., t2.micro).</p>
</li>
<li><p>Configure instance details, add storage, and configure security groups to allow SSH access.</p>
</li>
<li><p>Launch the instance and connect to it via SSH.</p>
</li>
</ul>
</li>
</ol>
<h2 id="heading-step-2-install-load-testing-tools-on-amazon-linux-2">Step 2: Install Load Testing Tools on Amazon Linux 2</h2>
<h3 id="heading-apache-benchmark-ab">Apache Benchmark (ab)</h3>
<ol>
<li><p><strong>Install Apache Benchmark</strong>:</p>
<pre><code class="lang-bash"> sudo yum update -y
 sudo yum install httpd-tools -y
</code></pre>
</li>
</ol>
<h3 id="heading-wrk">wrk</h3>
<ol>
<li><p><strong>Install wrk</strong>:</p>
<pre><code class="lang-bash"> sudo yum install -y git gcc
 git <span class="hljs-built_in">clone</span> https://github.com/wg/wrk.git
 <span class="hljs-built_in">cd</span> wrk
 make
 sudo cp wrk /usr/<span class="hljs-built_in">local</span>/bin
</code></pre>
</li>
</ol>
<h2 id="heading-step-3-perform-load-testing">Step 3: Perform Load Testing</h2>
<h3 id="heading-using-apache-benchmark-ab">Using Apache Benchmark (ab)</h3>
<ol>
<li><p><strong>Run a Basic Load Test</strong>:</p>
<pre><code class="lang-bash"> ab -n 1000 -c 50 http://your-alb-url/
</code></pre>
<ul>
<li><p><code>-n 1000</code>: Number of requests to perform.</p>
</li>
<li><p><code>-c 50</code>: Number of concurrent requests to perform at a time.</p>
</li>
<li><p>Replace <a target="_blank" href="http://your-alb-url/"><code>http://your-alb-url/</code></a> with your actual ALB URL.</p>
</li>
</ul>
</li>
<li><p><strong>Increase the Load</strong>:</p>
<pre><code class="lang-bash"> ab -n 10000 -c 200 http://your-alb-url/
</code></pre>
</li>
</ol>
<h3 id="heading-using-wrk">Using wrk</h3>
<ol>
<li><p><strong>Run a Basic Load Test</strong>:</p>
<pre><code class="lang-bash"> wrk -t12 -c400 -d30s http://your-alb-url/
</code></pre>
<ul>
<li><p><code>-t12</code>: Number of threads to use.</p>
</li>
<li><p><code>-c400</code>: Number of connections to open.</p>
</li>
<li><p><code>-d30s</code>: Duration of the test (30 seconds).</p>
</li>
<li><p>Replace <a target="_blank" href="http://your-alb-url/"><code>http://your-alb-url/</code></a> with your actual ALB URL.</p>
</li>
</ul>
</li>
</ol>
<h2 id="heading-step-4-monitor-cpu-utilization-on-cloudwatch">Step 4: Monitor CPU Utilization on CloudWatch</h2>
<h3 id="heading-enable-cloudwatch-monitoring-for-ecs-cluster">Enable CloudWatch Monitoring for ECS Cluster</h3>
<ol>
<li><p><strong>Ensure CloudWatch Monitoring is Enabled</strong>:</p>
<ul>
<li><p>Navigate to the ECS Cluster in the AWS Management Console.</p>
</li>
<li><p>Go to the "Monitoring" tab and ensure CloudWatch metrics are enabled.</p>
</li>
</ul>
</li>
</ol>
<h3 id="heading-create-cloudwatch-alarms">Create CloudWatch Alarms</h3>
<ol>
<li><p><strong>Create an Alarm for CPU Utilization</strong>:</p>
<ul>
<li><p>Open the CloudWatch Dashboard in the AWS Management Console.</p>
</li>
<li><p>Click on "Alarms" &gt; "Create Alarm".</p>
</li>
<li><p>Select the ECS cluster's CPUUtilization metric.</p>
</li>
<li><p>Set the threshold (e.g., CPU utilization &gt; 80% for 5 minutes).</p>
</li>
<li><p>Configure actions (e.g., send an SNS notification).</p>
</li>
<li><p>Review and create the alarm.</p>
</li>
</ul>
</li>
</ol>
<h3 id="heading-view-metrics">View Metrics</h3>
<ol>
<li><p><strong>View CPU Utilization Metrics</strong>:</p>
<ul>
<li><p>In the CloudWatch Dashboard, navigate to "Metrics".</p>
</li>
<li><p>Select "ECS" and find the CPU utilization metrics for your cluster and services.</p>
</li>
</ul>
</li>
</ol>
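<p>The same metric can also be pulled from the AWS CLI. A sketch; the cluster and service names are the ones used earlier in this guide, and the <code>date</code> flags assume GNU/Linux:</p>
<pre><code class="lang-bash"># Fetch average CPU utilization for the ECS service over the last hour
aws cloudwatch get-metric-statistics \
  --namespace AWS/ECS \
  --metric-name CPUUtilization \
  --dimensions Name=ClusterName,Value=simple-html-web-app-cluster \
               Name=ServiceName,Value=ecs-demo-srv \
  --start-time "$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ)" \
  --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  --period 300 \
  --statistics Average
</code></pre>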
<h2 id="heading-step-5-verify-and-analyze">Step 5: Verify and Analyze</h2>
<ol>
<li><p><strong>Verify Load Test Results</strong>:</p>
<ul>
<li><p>Check the output of <code>ab</code> or <code>wrk</code> for request statistics, including requests per second, mean response time, and more.</p>
</li>
<li><p>Ensure your web application is handling the load as expected.</p>
</li>
</ul>
</li>
<li><p><strong>Analyze CloudWatch Metrics</strong>:</p>
<ul>
<li><p>Go to the CloudWatch Dashboard and check the CPU utilization metrics.</p>
</li>
<li><p>Ensure that the ECS tasks are scaling properly based on the load and the alarms are triggering as expected.</p>
</li>
</ul>
</li>
</ol>
<h2 id="heading-example-cloudwatch-alarm-json-optional">Example CloudWatch Alarm JSON (Optional)</h2>
<p>If you prefer to create the CloudWatch alarm using AWS CLI, here is an example JSON configuration:</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"AlarmName"</span>: <span class="hljs-string">"ECS-CPU-Utilization-High"</span>,
  <span class="hljs-attr">"AlarmDescription"</span>: <span class="hljs-string">"Alarm when ECS CPU utilization exceeds 80%"</span>,
  <span class="hljs-attr">"ActionsEnabled"</span>: <span class="hljs-literal">true</span>,
  <span class="hljs-attr">"OKActions"</span>: [],
  <span class="hljs-attr">"AlarmActions"</span>: [
    <span class="hljs-string">"arn:aws:sns:us-east-1:123456789012:MySNSTopic"</span>
  ],
  <span class="hljs-attr">"MetricName"</span>: <span class="hljs-string">"CPUUtilization"</span>,
  <span class="hljs-attr">"Namespace"</span>: <span class="hljs-string">"AWS/ECS"</span>,
  <span class="hljs-attr">"Statistic"</span>: <span class="hljs-string">"Average"</span>,
  <span class="hljs-attr">"Dimensions"</span>: [
    {
      <span class="hljs-attr">"Name"</span>: <span class="hljs-string">"ClusterName"</span>,
      <span class="hljs-attr">"Value"</span>: <span class="hljs-string">"your-ecs-cluster-name"</span>
    },
    {
      <span class="hljs-attr">"Name"</span>: <span class="hljs-string">"ServiceName"</span>,
      <span class="hljs-attr">"Value"</span>: <span class="hljs-string">"your-ecs-service-name"</span>
    }
  ],
  <span class="hljs-attr">"Period"</span>: <span class="hljs-number">300</span>,
  <span class="hljs-attr">"EvaluationPeriods"</span>: <span class="hljs-number">1</span>,
  <span class="hljs-attr">"Threshold"</span>: <span class="hljs-number">80.0</span>,
  <span class="hljs-attr">"ComparisonOperator"</span>: <span class="hljs-string">"GreaterThanThreshold"</span>
}
</code></pre>
<p>You can use the AWS CLI to create the alarm with the above configuration:</p>
<pre><code class="lang-bash">aws cloudwatch put-metric-alarm --cli-input-json file://alarm.json
</code></pre>
<p>Replace <code>file://alarm.json</code> with the path to your JSON file.</p>
<p>By following these steps, you can perform load testing on your ECS cluster web application and monitor its CPU utilization using CloudWatch.</p>
<p>Create Alarm</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723050815281/e0d414f9-7a32-4e59-b8dd-420bfda6373b.png" alt class="image--center mx-auto" /></p>
<p>Select Metric</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723050843843/c1876c07-54ac-49c8-a1c1-53b8d2545393.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723050861557/3050be7f-09a2-4944-b6cd-ba38ae6daa0b.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723050902248/c87a3a04-0ebd-45ab-879a-e6df8f31f992.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723050932718/01e7f04a-b905-4a6a-91ae-e19347c1cc48.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723050987007/6deda7b1-e785-450d-bc0d-4f27f2afe789.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723051016835/04dee403-23a4-46ab-ae8a-711a32143748.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723051093631/5c156439-efa0-46d6-bf22-12fa79ed0744.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723051113516/5dc9dd5c-909b-4c07-a305-c70bf4c433ee.png" alt class="image--center mx-auto" /></p>
<p><strong>Increase the Load</strong>:</p>
<p>SSH to the Load test machine</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723052022534/bfb48c3c-4a90-4d77-a50b-7bc6a7a5a5ce.png" alt class="image--center mx-auto" /></p>
<pre><code class="lang-bash">ab -n 10000 -c 200 http://your-alb-url/
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723052134366/3a540db6-de08-459e-a426-1d745343addf.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723052155188/c93162d5-4cc0-4c70-9232-b02be794a138.png" alt class="image--center mx-auto" /></p>
<p>Hit the ALB URL 10 to 20 times and observe the load</p>
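<p>A quick way to hit the URL repeatedly from the load-test machine (replace the placeholder with your ALB URL):</p>
<pre><code class="lang-bash"># Fire 20 sequential requests and print each HTTP status code
for i in $(seq 1 20); do
  curl -s -o /dev/null -w "%{http_code}\n" http://your-alb-url/
done
</code></pre>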
<p>Observe the CPU load</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723052203512/947137fd-d716-4eec-8118-4d49ff77107d.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723052236260/557135a2-eae5-4cbc-95f4-b33bcbd062ea.png" alt class="image--center mx-auto" /></p>
<p>Alarm triggered</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723052521427/6d1b21c4-b767-450d-9af2-e2c07ed9fcd0.png" alt class="image--center mx-auto" /></p>
<p>Email Notification triggered</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723052670142/8e2f6975-93b3-43c6-9175-f375862247ac.png" alt class="image--center mx-auto" /></p>
<p>High CPU utilization</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723052551624/6a752d2b-c3a7-4e61-b4b7-c91c7cc8f34c.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723052593141/0ee66e25-7077-43cf-9b6f-4eb772cd0d85.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-project-conclusion">Project Conclusion</h3>
<h4 id="heading-overview-1">Overview</h4>
<p>The objective of this project was to deploy a simple HTML web application on AWS ECS using a CI/CD pipeline and to ensure the application’s performance under load through load testing and monitoring. This comprehensive exercise demonstrated the full lifecycle of a cloud-native application, from development and deployment to performance monitoring and scaling.</p>
<h4 id="heading-key-steps-and-achievements">Key Steps and Achievements</h4>
<ol>
<li><p><strong>Setting Up Jenkins and CI/CD Pipeline</strong>:</p>
<ul>
<li><p>Installed and configured Jenkins on an AWS EC2 instance.</p>
</li>
<li><p>Set up necessary Jenkins plugins for Docker, Amazon ECR, and pipeline management.</p>
</li>
<li><p>Created a Jenkins pipeline script to build, push Docker images to ECR, and deploy to ECS.</p>
</li>
</ul>
</li>
<li><p><strong>Dockerization and Deployment to AWS ECS</strong>:</p>
<ul>
<li><p>Created a Dockerfile for the simple HTML web application.</p>
</li>
<li><p>Built Docker images and pushed them to Amazon ECR.</p>
</li>
<li><p>Deployed the application to AWS ECS with appropriate task definitions and service configurations.</p>
</li>
</ul>
</li>
<li><p><strong>Infrastructure Setup on AWS</strong>:</p>
<ul>
<li><p>Configured necessary VPC, subnets, and security groups to ensure the ECS tasks had proper networking configurations.</p>
</li>
<li><p>Created IAM roles and policies to grant necessary permissions for ECS tasks and services.</p>
</li>
</ul>
</li>
<li><p><strong>Load Testing and Monitoring</strong>:</p>
<ul>
<li><p>Performed load testing using Apache Benchmark (ab) and wrk to simulate user traffic and evaluate the application's performance.</p>
</li>
<li><p>Monitored ECS cluster and service performance using Amazon CloudWatch.</p>
</li>
<li><p>Configured CloudWatch alarms to alert when CPU utilization thresholds were exceeded.</p>
</li>
</ul>
</li>
<li><p><strong>Troubleshooting and Optimization</strong>:</p>
<ul>
<li><p>Resolved common issues such as task network configuration errors and IAM role permissions.</p>
</li>
<li><p>Tuned ECS service and task definitions for optimal performance under load.</p>
</li>
</ul>
</li>
</ol>
<h4 id="heading-project-outcomes">Project Outcomes</h4>
<ol>
<li><p><strong>Successful Deployment</strong>:</p>
<ul>
<li><p>The HTML web application was successfully containerized and deployed on AWS ECS.</p>
</li>
<li><p>The CI/CD pipeline automated the process of building, testing, and deploying the application, ensuring efficient and reliable deployments.</p>
</li>
</ul>
</li>
<li><p><strong>Effective Load Testing</strong>:</p>
<ul>
<li><p>Load tests provided valuable insights into the application's performance and scalability.</p>
</li>
<li><p>Identified and mitigated potential bottlenecks, ensuring the application could handle increased traffic.</p>
</li>
</ul>
</li>
<li><p><strong>Robust Monitoring and Alerts</strong>:</p>
<ul>
<li><p>CloudWatch metrics and alarms enabled proactive monitoring of the application's performance.</p>
</li>
<li><p>Ensured timely alerts and response to any performance degradation or failures.</p>
</li>
</ul>
</li>
</ol>
<h4 id="heading-lessons-learned">Lessons Learned</h4>
<ol>
<li><p><strong>Automation is Key</strong>:</p>
<ul>
<li>Automating the deployment process using Jenkins and ECS significantly reduces manual errors and speeds up the deployment process.</li>
</ul>
</li>
<li><p><strong>Importance of Monitoring</strong>:</p>
<ul>
<li>Continuous monitoring and proactive alerting are crucial for maintaining application performance and availability.</li>
</ul>
</li>
<li><p><strong>Thorough Testing</strong>:</p>
<ul>
<li>Load testing is essential to understand how the application performs under various traffic conditions and to ensure it can scale appropriately.</li>
</ul>
</li>
</ol>
<h4 id="heading-future-work">Future Work</h4>
<ol>
<li><p><strong>Advanced CI/CD Features</strong>:</p>
<ul>
<li>Implementing advanced CI/CD features such as blue-green deployments or canary releases to minimize downtime and reduce risk during updates.</li>
</ul>
</li>
<li><p><strong>Enhanced Monitoring</strong>:</p>
<ul>
<li>Integrating more sophisticated monitoring and logging tools such as AWS X-Ray for distributed tracing and deeper insights into application performance.</li>
</ul>
</li>
<li><p><strong>Security Enhancements</strong>:</p>
<ul>
<li>Implementing more robust security measures, including using AWS Secrets Manager for managing sensitive information and enhancing IAM policies.</li>
</ul>
</li>
</ol>
<h4 id="heading-conclusion-1">Conclusion</h4>
<p>This project demonstrated the end-to-end process of deploying a web application on AWS ECS, from setting up the CI/CD pipeline to ensuring the application’s performance under load. Through this exercise, key skills in cloud infrastructure management, continuous integration, continuous deployment, and performance monitoring were reinforced, providing a solid foundation for managing and scaling cloud-native applications effectively.</p>
<h3 id="heading-key-takeaways-from-the-project">Key Takeaways from the Project</h3>
<ol>
<li><p><strong>Hands-On Experience with AWS Services</strong>:</p>
<ul>
<li><p><strong>ECS</strong>: Learned to deploy and manage containerized applications using Amazon Elastic Container Service.</p>
</li>
<li><p><strong>ECR</strong>: Managed Docker images using Amazon Elastic Container Registry.</p>
</li>
<li><p><strong>EC2</strong>: Utilized Amazon EC2 instances for running Jenkins and load testing tools.</p>
</li>
<li><p><strong>CloudWatch</strong>: Monitored application performance and set up alerts using Amazon CloudWatch.</p>
</li>
<li><p><strong>IAM</strong>: Configured and managed IAM roles and policies for secure access and permissions.</p>
</li>
<li><p><strong>SNS</strong>: Implemented Amazon SNS for email notifications on deployment status and performance alerts.</p>
</li>
</ul>
</li>
<li><p><strong>CI/CD Pipeline Implementation</strong>:</p>
<ul>
<li><p><strong>Jenkins</strong>: Set up a Jenkins server on AWS, installed necessary plugins, and created a Jenkins pipeline for automated deployments.</p>
</li>
<li><p><strong>Docker</strong>: Built, managed, and deployed Docker containers, ensuring consistent application environments.</p>
</li>
<li><p><strong>Automated Builds and Deployments</strong>: Configured Jenkins to automate the build and deployment process, reducing manual intervention and ensuring faster delivery.</p>
</li>
</ul>
</li>
<li><p><strong>Containerization Best Practices</strong>:</p>
<ul>
<li><p><strong>Dockerfile Creation</strong>: Created optimized Dockerfiles for the web application.</p>
</li>
<li><p><strong>Docker Compose</strong>: Used Docker Compose for local development and testing of multi-container applications.</p>
</li>
</ul>
</li>
<li><p><strong>Monitoring and Load Testing</strong>:</p>
<ul>
<li><p><strong>Apache Benchmark (ab) and wrk</strong>: Conducted load tests to measure the performance and resilience of the application under high traffic.</p>
</li>
<li><p><strong>CloudWatch Metrics and Alarms</strong>: Set up CloudWatch metrics and alarms to monitor CPU utilization, memory usage, and application logs.</p>
</li>
</ul>
</li>
<li><p><strong>Networking and Security</strong>:</p>
<ul>
<li><p><strong>VPC Configuration</strong>: Created and managed VPCs, subnets, and security groups to ensure secure and isolated network environments.</p>
</li>
<li><p><strong>Security Groups and IAM Roles</strong>: Configured security groups to allow necessary traffic and set up IAM roles with least privilege access.</p>
</li>
</ul>
</li>
<li><p><strong>Handling Real-World Scenarios</strong>:</p>
<ul>
<li><p><strong>Troubleshooting Deployment Issues</strong>: Resolved common deployment errors such as network configuration issues and IAM role permissions.</p>
</li>
<li><p><strong>Scaling and Performance Optimization</strong>: Learned to scale ECS services and optimize performance based on load testing results.</p>
</li>
<li><p><strong>Automated Notifications</strong>: Implemented SNS for real-time notifications on deployment status and performance issues.</p>
</li>
</ul>
</li>
</ol>
<h3 id="heading-scenarios-and-experiences">Scenarios and Experiences</h3>
<ol>
<li><p><strong>Scenario: Automated CI/CD Pipeline with Jenkins</strong>:</p>
<ul>
<li><strong>Experience</strong>: Set up a Jenkins server on an EC2 instance, configured it with necessary plugins, and created a pipeline that automated the build, test, and deployment of a Dockerized web application to ECS. This demonstrated an understanding of continuous integration and continuous deployment best practices.</li>
</ul>
</li>
<li><p><strong>Scenario: Containerization and Deployment on ECS</strong>:</p>
<ul>
<li><strong>Experience</strong>: Containerized a simple HTML web application using Docker, pushed the image to Amazon ECR, and deployed it on ECS. This involved creating and configuring task definitions, clusters, and services in ECS, showcasing knowledge of container orchestration.</li>
</ul>
</li>
<li><p><strong>Scenario: Load Testing and Monitoring</strong>:</p>
<ul>
<li><strong>Experience</strong>: Conducted load testing using Apache Benchmark (ab) and wrk to simulate high traffic on the web application. Monitored application performance with CloudWatch and set up alarms to trigger notifications via SNS in case of performance degradation. This highlighted skills in performance testing and monitoring.</li>
</ul>
</li>
<li><p><strong>Scenario: Handling Deployment Errors</strong>:</p>
<ul>
<li><strong>Experience</strong>: Faced and resolved issues related to ECS service creation, IAM role permissions, and network configurations. This involved troubleshooting errors such as "ResourceInitializationError" and ensuring proper network settings for ECS tasks to access ECR. This scenario demonstrated problem-solving skills and the ability to debug deployment issues.</li>
</ul>
</li>
<li><p><strong>Scenario: Secure and Scalable Network Setup</strong>:</p>
<ul>
<li><strong>Experience</strong>: Configured VPCs, subnets, and security groups to create a secure and scalable network environment for the ECS cluster. This included setting up proper routing for ENIs and ensuring security group rules allowed necessary traffic. This showcased an understanding of AWS networking and security best practices.</li>
</ul>
</li>
</ol>
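<p>The CI/CD scenario above can be sketched as a declarative Jenkinsfile. This is a minimal sketch, not the exact pipeline from the project: the image name <code>my-app</code>, the <code>ECR_REGISTRY</code> variable, the test entrypoint, and the cluster/service names are illustrative assumptions.</p>
<pre><code class="lang-plaintext">pipeline {
    agent any
    environment {
        IMAGE = "my-app:${env.BUILD_NUMBER}"   // placeholder image name, tagged per build
    }
    stages {
        stage('Build') {
            steps { sh 'docker build -t $IMAGE .' }
        }
        stage('Test') {
            steps { sh 'docker run --rm $IMAGE ./run-tests.sh' }   // placeholder test entrypoint
        }
        stage('Push') {
            steps { sh 'docker push $ECR_REGISTRY/$IMAGE' }        // assumes a prior ECR login
        }
        stage('Deploy') {
            steps {
                // Force ECS to roll out the new image; cluster/service names are placeholders
                sh 'aws ecs update-service --cluster web-cluster --service web-service --force-new-deployment'
            }
        }
    }
}
</code></pre>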
<h3 id="heading-interview-discussion-points">Interview Discussion Points</h3>
<ul>
<li><p>Discuss the <strong>end-to-end deployment process</strong> using Jenkins and ECS, highlighting the automation of build and deployment stages.</p>
</li>
<li><p>Explain the <strong>benefits of containerization</strong> and how Docker was used to ensure consistent application environments.</p>
</li>
<li><p>Describe the <strong>monitoring setup with CloudWatch</strong> and how it helped in maintaining application performance and uptime.</p>
</li>
<li><p>Share insights on <strong>load testing results</strong> and how they influenced performance optimization and scaling decisions.</p>
</li>
<li><p>Highlight the <strong>troubleshooting steps</strong> taken to resolve deployment issues and the importance of proper IAM and network configurations.</p>
</li>
<li><p>Emphasize the <strong>importance of security</strong> in cloud deployments, mentioning the setup of VPCs, subnets, and security groups.</p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Multi-Region Deployment of Java Web Application in Azure DevOps]]></title><description><![CDATA[Project Title: Multi-Region Deployment of Java Web Application in Azure DevOps
Project Description:
Our project is dedicated to crafting a resilient deployment strategy for a Java web application using Azure DevOps. We prioritize high availability an...]]></description><link>https://praful.cloud/multi-region-deployment-of-java-web-application-in-azure-devops</link><guid isPermaLink="true">https://praful.cloud/multi-region-deployment-of-java-web-application-in-azure-devops</guid><category><![CDATA[azure-devops]]></category><dc:creator><![CDATA[Praful Patel]]></dc:creator><pubDate>Sat, 10 Feb 2024 21:43:33 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1707601019600/f03f92f7-e7d5-4409-a76c-60d4c33de577.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>Project Title: Multi-Region Deployment of Java Web Application in Azure DevOps</strong></p>
<p><strong>Project Description:</strong></p>
<p>Our project is dedicated to crafting a resilient deployment strategy for a Java web application using Azure DevOps. We prioritize high availability and reliability by deploying the application across two distinct Azure regions: WestUS and EastUS. Additionally, we implement a primary-secondary deployment model within the development environment, bolstering fault tolerance and fortifying disaster recovery capabilities.</p>
<p><strong>For more insights, visit my project blog:</strong> <a target="_blank" href="https://praful-cloud.gitbook.io/azure-devops/">https://praful-cloud.gitbook.io/azure-devops/</a></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1707600989904/7d5f35ad-10ac-4041-9390-fa301289f694.png" alt class="image--center mx-auto" /></p>
<p><strong>Key Features and Components:</strong></p>
<ol>
<li><p><strong>Azure DevOps Pipelines:</strong> We utilize Azure DevOps Pipelines to automate the build and deployment processes of our Java web application. Through pipelines, we define tasks for building the application, running tests, and packaging the artifacts for deployment.</p>
</li>
<li><p><strong>Azure App Service:</strong> The Java web application is hosted on Azure App Service, a fully managed platform for building, deploying, and scaling web apps. We configure separate instances of Azure App Service in both WestUS and EastUS regions to ensure redundancy and minimize downtime.</p>
</li>
<li><p><strong>Release Pipelines:</strong> We establish two distinct release pipelines for deploying the application in the WestUS and EastUS regions, respectively. Each release pipeline orchestrates the deployment process, ensuring seamless delivery of updates to the application.</p>
</li>
<li><p><strong>Primary-Secondary Deployment:</strong> Within the development environment, we implement a primary-secondary deployment pattern to enhance resilience. This involves deploying the application to two separate environments: Dev-Primary and Dev-Secondary. In case of failures or issues in the primary environment, traffic can be automatically rerouted to the secondary environment, minimizing disruptions to development workflows.</p>
</li>
<li><p><strong>Service Connections:</strong> To facilitate secure integration between Azure DevOps and Azure resources, we configure service connections that provide the necessary permissions for deploying to Azure App Service instances in multiple regions.</p>
</li>
</ol>
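<p>The multi-region deployment flow above can be sketched in pipeline YAML. This is a minimal illustration, assuming an existing service connection and two App Service instances; the names <code>myapp-westus</code>, <code>myapp-eastus</code>, and <code>my-azure-sc</code> are hypothetical placeholders:</p>
<pre><code class="lang-plaintext">trigger:
  - main

pool:
  vmImage: ubuntu-latest

steps:
  - task: Maven@3            # build and package the Java web app
    inputs:
      mavenPomFile: pom.xml
      goals: package

  - task: AzureWebApp@1      # deploy to the WestUS App Service instance
    inputs:
      azureSubscription: my-azure-sc
      appName: myapp-westus
      package: $(System.DefaultWorkingDirectory)/**/*.war

  - task: AzureWebApp@1      # deploy to the EastUS App Service instance
    inputs:
      azureSubscription: my-azure-sc
      appName: myapp-eastus
      package: $(System.DefaultWorkingDirectory)/**/*.war
</code></pre>
<p>In practice each region would typically get its own stage or release pipeline, as described above, so a failed WestUS deployment does not block EastUS.</p>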
<p><img src="https://files.gitbook.com/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FIvLv5SCu33o8tAyuV93R%2Fuploads%2FPpx3ysG8srEhF7Rqdr6P%2Fimage.png?alt=media&amp;token=6cb21d0c-5651-4b31-ab04-171db243d418" alt /></p>
<p>Web App1</p>
<p><img src="https://files.gitbook.com/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FIvLv5SCu33o8tAyuV93R%2Fuploads%2FwHxPB57UqLnFmOV2my6L%2Fimage.png?alt=media&amp;token=6cb33dc3-995b-4707-87fe-ec43b6c7d110" alt /></p>
<p>Web App2</p>
<p><img src="https://files.gitbook.com/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FIvLv5SCu33o8tAyuV93R%2Fuploads%2FNRT1hwqXXIINzukYjzGk%2Fimage.png?alt=media&amp;token=c271018f-03a7-497b-bdf7-df8f477c7cf7" alt /></p>
<p><strong>Project Goals:</strong></p>
<ul>
<li><p><strong>High Availability:</strong> By deploying the application in two different Azure regions, we aim to ensure high availability and minimize downtime caused by region-specific failures or disruptions.</p>
</li>
<li><p><strong>Fault Tolerance:</strong> The primary-secondary deployment pattern enhances fault tolerance within the development environment, enabling rapid failover and recovery in case of issues.</p>
</li>
<li><p><strong>Automated Deployment:</strong> Through the use of Azure DevOps pipelines and release pipelines, we achieve automated deployment of the Java web application, reducing manual intervention and improving deployment efficiency.</p>
</li>
<li><p><strong>Resilient Development Workflow:</strong> By implementing redundancy and failover mechanisms, we create a resilient development workflow that allows developers to continue working uninterrupted, even in the event of infrastructure failures.</p>
</li>
</ul>
<p><strong>Conclusion:</strong></p>
<p>Our project leverages Azure DevOps and Azure services to implement a multi-region deployment strategy for a Java web application, ensuring high availability, fault tolerance, and resilience. By deploying the application in two distinct regions and establishing primary-secondary deployment patterns, we enhance the reliability of the application deployment process, providing a robust foundation for development and operations teams.</p>
]]></content:encoded></item><item><title><![CDATA[🚀 Seamless Network Connectivity - AWS VPC Peering Deployment]]></title><description><![CDATA[🌐 Introduction: In the ever-expanding landscape of cloud computing, Virtual Private Cloud (VPC) Peering stands as a key architectural element, facilitating secure communication between distinct VPCs. This blog delves into the nuances of VPC Peering,...]]></description><link>https://praful.cloud/seamless-network-connectivity-aws-vpc-peering-deployment</link><guid isPermaLink="true">https://praful.cloud/seamless-network-connectivity-aws-vpc-peering-deployment</guid><category><![CDATA[AWS #CloudEngineering #CloudComputing #AmazonWebServices #AWSArchitecture #DevOps #CloudSolutions #CloudSecurity #InfrastructureAsCode #AWSCertification #Serverless #AWSCommunity #TechBlogs #CloudExperts #CloudMigration #CloudOps #AWSJobs #TechIndustry #CareerInTech #InnovationInCloud #devops #cloudengineerjobs #devopsjobs #azure #gcp #oci #cloudjobs, #kubernetes]]></category><dc:creator><![CDATA[Praful Patel]]></dc:creator><pubDate>Fri, 01 Dec 2023 06:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1702942663371/6e1f4ee1-9e4b-4878-a0dc-90010a08a311.gif" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>🌐 Introduction: In the ever-expanding landscape of cloud computing, Virtual Private Cloud (VPC) Peering stands as a key architectural element, facilitating secure communication between distinct VPCs. This blog delves into the nuances of VPC Peering, shedding light on its significance and practical applications. A virtual private cloud (VPC) is a virtual network dedicated to your AWS account. It is logically isolated from other virtual networks in the AWS Cloud. You can launch AWS resources, such as Amazon EC2 instances, into your VPC.</p>
<p>A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IPv4 addresses or IPv6 addresses. Instances in either VPC can communicate with each other as if they are within the same network. You can create a VPC peering connection between your own VPCs, or with a VPC in another AWS account. The VPCs can be in different Regions (also known as an inter-Region VPC peering connection).</p>
<p><strong>VPC Peering Lifecycle:</strong> A VPC peering connection goes through various stages, starting when the request is initiated. At each stage there may be actions you can take, and at the end of its lifecycle the VPC peering connection remains visible in the Amazon VPC console and in API or command-line output for a period of time.</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0738qmhl535nri38pjyv.png" alt="Image description" /></p>
<p><strong>VPC Peering Connection Lifecycle:</strong></p>
<p><strong>Initiating-request:</strong> A request for a VPC peering connection is initiated, moving to either pending-acceptance or failed state.</p>
<p><strong>Failed:</strong> The VPC peering connection request has failed. It remains visible for 2 hours and cannot be accepted, rejected, or deleted during this period.</p>
<p><strong>Pending-acceptance:</strong> The request awaits acceptance from the accepter VPC owner. It can be accepted, rejected, or deleted within 7 days. If no action is taken, it expires after 7 days.</p>
<p><strong>Expired:</strong> The VPC peering connection request has expired. No action can be taken, and it remains visible for 2 days to both VPC owners.</p>
<p><strong>Rejected:</strong> The accepter VPC owner rejects a pending-acceptance request. It remains visible to the requester for 2 days and to the accepter for 2 hours.</p>
<p><strong>Provisioning:</strong> The request has been accepted and is in the process of becoming active.</p>
<p><strong>Active:</strong> The VPC peering connection is active, allowing traffic flow. It can be deleted by either VPC owner but cannot be rejected.</p>
<p><strong>Deleting:</strong> Applies to an inter-Region VPC peering connection being deleted. A deletion request is submitted, and the connection transitions to deleted.</p>
<p><strong>Deleted:</strong> An active connection is deleted by either owner, or a pending-acceptance request is deleted by the requester. It remains visible for a specified duration to both parties.</p>
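<p>The lifecycle stages above can be modeled as a small state machine. The snippet below is an illustrative model of the transitions described in the text, not an AWS API:</p>
<pre><code class="lang-python"># Allowed transitions between VPC peering connection states, per the stages above.
TRANSITIONS = {
    "initiating-request": {"pending-acceptance", "failed"},
    "pending-acceptance": {"provisioning", "rejected", "deleted", "expired"},
    "provisioning": {"active"},
    "active": {"deleting", "deleted"},
    "deleting": {"deleted"},
    # Terminal states: the connection stays visible for a while, then disappears.
    "failed": set(), "expired": set(), "rejected": set(), "deleted": set(),
}

def can_transition(current, target):
    """Return True if a peering connection may move from current to target."""
    return target in TRANSITIONS.get(current, set())

print(can_transition("pending-acceptance", "provisioning"))  # True
print(can_transition("expired", "active"))                   # False
</code></pre>
<p>For example, an <code>active</code> connection can be deleted by either owner, but an <code>expired</code> or <code>rejected</code> request cannot be revived.</p>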
<p>🚀 Features:</p>
<p><strong>Inter-VPC Connectivity:</strong> VPC Peering establishes a connection between VPCs, enabling seamless communication using private IP addresses.</p>
<p><strong>Cross-Account Connectivity:</strong> It supports secure connections between VPCs in different AWS accounts, fostering collaboration and resource sharing.</p>
<p><strong>Non-Transitive by Design:</strong> VPC Peering does not support transitive routing; a peering connection joins only the two VPCs it connects. To interconnect more than two VPCs, you must peer each pair (a full mesh) or use AWS Transit Gateway.</p>
<p>🎯 Objective: This blog aims to provide a comprehensive understanding of VPC Peering, unraveling its features and showcasing a real-time use case to highlight its practical utility.</p>
<p>🚀 Use Case - Real-time Web and DB VPC in Two Different Regions: Consider a scenario where a web application hosted in one AWS region necessitates real-time access to a database residing in another region. VPC Peering plays a pivotal role in establishing a secure and efficient connection between the web and database VPCs, facilitating seamless data exchange.</p>
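<p>One prerequisite worth checking before creating the peering connection: the two VPCs' CIDR blocks must not overlap, or AWS will refuse to route between them. A quick check with Python's standard <code>ipaddress</code> module, using the same CIDRs as the Terraform example in this post:</p>
<pre><code class="lang-python">import ipaddress

def cidrs_overlap(a, b):
    """Return True if two CIDR blocks overlap (peering requires that they do not)."""
    return ipaddress.ip_network(a).overlaps(ipaddress.ip_network(b))

print(cidrs_overlap("10.0.0.0/16", "192.168.0.0/16"))  # False -- safe to peer
print(cidrs_overlap("10.0.0.0/16", "10.0.1.0/24"))     # True -- peering would fail
</code></pre>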
<p>🔗 Solution Diagram:</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2qcc8srnsyqgng4lbn66.png" alt="Image description" /></p>
<pre><code class="lang-plaintext">provider "aws" {
  profile = var.profile
  region  = var.region_web
  alias   = "region-web"
}

provider "aws" {
  profile = var.profile
  region  = var.region_db
  alias   = "region-db"
}


#Create VPC in the web region (var.region_web)
resource "aws_vpc" "vpc_useast" {
  provider             = aws.region-web
  cidr_block           = "10.0.0.0/16"
  enable_dns_support   = true
  enable_dns_hostnames = true
  tags = {
    Name = "master-vpc-jenkins"
  }

}

#Create VPC in the db region (var.region_db)
resource "aws_vpc" "vpc_uswest" {
  provider             = aws.region-db
  cidr_block           = "192.168.0.0/16"
  enable_dns_support   = true
  enable_dns_hostnames = true
  tags = {
    Name = "worker-vpc-jenkins"
  }

}

#Initiate peering connection request from the web region
resource "aws_vpc_peering_connection" "useast1-uswest-2" {
  provider    = aws.region-web
  peer_vpc_id = aws_vpc.vpc_uswest.id
  vpc_id      = aws_vpc.vpc_useast.id
  #auto_accept = true
  peer_region = var.region_db

}

#Create IGW in the web region
resource "aws_internet_gateway" "igw" {
  provider = aws.region-web
  vpc_id   = aws_vpc.vpc_useast.id
}

#Create IGW in the db region
resource "aws_internet_gateway" "igw-oregon" {
  provider = aws.region-db
  vpc_id   = aws_vpc.vpc_uswest.id
}

#Accept the VPC peering request in the db region
resource "aws_vpc_peering_connection_accepter" "accept_peering" {
  provider                  = aws.region-db
  vpc_peering_connection_id = aws_vpc_peering_connection.useast1-uswest-2.id
  auto_accept               = true
}

#Create route table in the web region
resource "aws_route_table" "internet_route" {
  provider = aws.region-web
  vpc_id   = aws_vpc.vpc_useast.id
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw.id
  }
  route {
    cidr_block                = "192.168.1.0/24"
    vpc_peering_connection_id = aws_vpc_peering_connection.useast1-uswest-2.id
  }
  lifecycle {
    ignore_changes = all
  }
  tags = {
    Name = "Master-Region-RT"
  }
}

#Overwrite default route table of VPC(Master) with our route table entries
resource "aws_main_route_table_association" "set-master-default-rt-assoc" {
  provider       = aws.region-web
  vpc_id         = aws_vpc.vpc_useast.id
  route_table_id = aws_route_table.internet_route.id
}
#Get all available AZ's in VPC for master region
data "aws_availability_zones" "azs" {
  provider = aws.region-web
  state    = "available"
}

#Create subnet #1 in the web region
resource "aws_subnet" "subnet_1" {
  provider          = aws.region-web
  availability_zone = element(data.aws_availability_zones.azs.names, 0)
  vpc_id            = aws_vpc.vpc_useast.id
  cidr_block        = "10.0.1.0/24"
}

#Create subnet #2 in the web region
resource "aws_subnet" "subnet_2" {
  provider          = aws.region-web
  vpc_id            = aws_vpc.vpc_useast.id
  availability_zone = element(data.aws_availability_zones.azs.names, 1)
  cidr_block        = "10.0.2.0/24"
}


#Create subnet in the db region
resource "aws_subnet" "subnet_1_oregon" {
  provider   = aws.region-db
  vpc_id     = aws_vpc.vpc_uswest.id
  cidr_block = "192.168.1.0/24"
}

#Create route table in the db region
resource "aws_route_table" "internet_route_oregon" {
  provider = aws.region-db
  vpc_id   = aws_vpc.vpc_uswest.id
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw-oregon.id
  }
  route {
    cidr_block                = "10.0.1.0/24"
    vpc_peering_connection_id = aws_vpc_peering_connection.useast1-uswest-2.id
  }
  lifecycle {
    ignore_changes = all
  }
  tags = {
    Name = "Worker-Region-RT"
  }
}

#Overwrite default route table of VPC(Worker) with our route table entries
resource "aws_main_route_table_association" "set-worker-default-rt-assoc" {
  provider       = aws.region-db
  vpc_id         = aws_vpc.vpc_uswest.id
  route_table_id = aws_route_table.internet_route_oregon.id
}


#Create SG allowing TCP/8080 from anywhere and TCP/22 from your IP in the web region
resource "aws_security_group" "jenkins-sg" {
  provider    = aws.region-web
  name        = "jenkins-sg"
  description = "Allow TCP/8080 &amp; TCP/22"
  vpc_id      = aws_vpc.vpc_useast.id
  ingress {
    description = "Allow 22 from our public IP"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = [var.external_ip]
  }
  ingress {
    description = "allow anyone on port 8080"
    from_port   = 8080
    to_port     = 8080
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    description = "allow traffic from us-west-2"
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["192.168.1.0/24"]
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

#Create SG for LB, only TCP/80,TCP/443 and access to jenkins-sg
resource "aws_security_group" "lb-sg" {
  provider    = aws.region-web
  name        = "lb-sg"
  description = "Allow 443 and traffic to Jenkins SG"
  vpc_id      = aws_vpc.vpc_useast.id
  ingress {
    description = "Allow 443 from anywhere"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    description = "Allow 80 from anywhere for redirection"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    description     = "Allow traffic to jenkins-sg"
    from_port       = 0
    to_port         = 0
    protocol        = "tcp"
    security_groups = [aws_security_group.jenkins-sg.id]
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

#Create SG allowing TCP/22 from your IP and db-VPC traffic in the db region
resource "aws_security_group" "jenkins-sg-oregon" {
  provider = aws.region-db

  name        = "jenkins-sg-oregon"
  description = "Allow TCP/8080 &amp; TCP/22"
  vpc_id      = aws_vpc.vpc_uswest.id
  ingress {
    description = "Allow 22 from our public IP"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = [var.external_ip]
  }
  ingress {
    description = "Allow traffic from us-east-1"
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["10.0.1.0/24"]
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
</code></pre>
<pre><code class="lang-plaintext">variable "external_ip" {
  type    = string
  default = "0.0.0.0/0"
}

variable "profile" {
  type    = string
  default = "default"
}

variable "region_web" {
  type    = string
  default = "us-west-1"
}

variable "region_db" {
  type    = string
  default = "ca-central-1"
}

# Variables
variable "ami_web" {
  description = "AMI ID for EC2 instances"
  default = "ami-0cbd40f694b804622"
}

variable "ami_db" {
  description = "AMI ID for EC2 instances"
  default = "ami-06873c81b882339ac"
}

variable "instance_type" {
  description = "EC2 instance type"
  default     = "t2.micro"
}
# AWS EC2 Instance Key Pair
variable "instance_keypair" {
  description = "AWS EC2 key pair to associate with the EC2 instance"
  type        = string
  default     = "vpc-key.pem"
}
</code></pre>
<pre><code class="lang-plaintext">output "VPC-ID-US-EAST-1" {
  value = aws_vpc.vpc_useast.id
}

output "VPC-ID-US-WEST-2" {
  value = aws_vpc.vpc_uswest.id
}

output "PEERING-CONNECTION-ID" {
  value = aws_vpc_peering_connection.useast1-uswest-2.id
}
# Output IPs of Web and DB Instances
output "web_instance_ip" {
  value = aws_instance.web_instance.private_ip
}

output "db_instance_ip" {
  value = aws_instance.db_instance.private_ip
}
</code></pre>
<p>VPC Peering in action:</p>
<p>terraform init</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9dkv2lgqg0f4hlllhevd.png" alt="Image description" /></p>
<p>terraform plan</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4h0vi2jqj669t5wc1l7m.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mxgcfzjih99xjya9dlgt.png" alt="Image description" /></p>
<p>terraform apply -auto-approve</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1rwgnt4gog9juur300g0.png" alt="Image description" /></p>
<p>Let's verify from the AWS console.</p>
<p>us-west-1</p>
<p>web-instance</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2l1tjsmiuz8v2j8hj3li.png" alt="Image description" /></p>
<p>ca-central-1 db-instance</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bqz6gptf9d5yy8xy3f8u.png" alt="Image description" /></p>
<p>Verify that the peering connection is Active in us-west-1</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cmpstmig3uhtebg76aya.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xei91ukfkykrj0en46pm.png" alt="Image description" /></p>
<p>Verify the VPC peering connection in ca-central-1</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/73rm6vsgq753q4yt71wj.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sg94q6c8bnmchszfgv79.png" alt="Image description" /></p>
<p>Let's verify the connection from web-instance to db-instance on private connection</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5a9y1xlbmkqbacds3lfy.png" alt="Image description" /></p>
<p>db-instance private IP: 192.168.1.245</p>
<p>Ping from web-instance to db-instance over the private connection is successful</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9rso7dztxxbdbeplhnv4.png" alt="Image description" /></p>
<p>Ping from db-instance to web-instance on private IP 10.0.1.25</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wb91hm0n03wqexfa7gsv.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sahns6hkaqhiqokhr2r8.png" alt="Image description" /></p>
<p>terraform destroy -auto-approve</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kzne6co5c0wdgl77yf2u.png" alt="Image description" /></p>
<p>Conclusion: In conclusion, VPC Peering emerges as a foundational element for crafting intricate and interconnected AWS architectures. Its ability to simplify network communication, support cross-account connectivity, and enable transitive peering positions it as a versatile solution for a myriad of scenarios. As organizations navigate the complexities of modern cloud environments, VPC Peering stands as a reliable ally in building robust and scalable architectures.</p>
<p>Explore the power of VPC Peering to enhance the connectivity and collaboration between your AWS resources, creating a network infrastructure that aligns with the demands of today's dynamic cloud landscape.</p>
<p>#AWS #CloudEngineering #CloudComputing #AmazonWebServices #AWSArchitecture #DevOps #CloudSolutions #CloudSecurity #InfrastructureAsCode #AWSCertification #Serverless #AWSCommunity #TechBlogs #CloudExperts #CloudMigration #CloudOps #AWSJobs #TechIndustry #CareerInTech #InnovationInCloud #devops #cloudengineerjobs #devopsjobs #azure #gcp #oci #cloudjobs, #kubernetes</p>
<p>Connect with me on these platforms and stay updated with the latest in technology and development! 🚀🔗😊</p>
<p>🌐 <strong>Website:</strong> <a target="_blank" href="http://praful.cloud">praful.cloud</a> 🚀<br />🔗 <strong>LinkedIn:</strong> <a target="_blank" href="https://linkedin.com/in/prafulpatel16">Connect with me on LinkedIn</a> 🤝<br />💻 <strong>GitHub:</strong> <a target="_blank" href="https://github.com/prafulpatel16/prafulpatel16">Explore my projects on GitHub</a> 📂<br />🎥 <strong>YouTube:</strong> <a target="_blank" href="https://www.youtube.com/@prafulpatel16">Check out my tech tutorials on YouTube</a> 🎬<br />📝 <strong>Medium:</strong> <a target="_blank" href="https://medium.com/@prafulpatel16">Read my tech articles on Medium</a> 📚<br />🔗 <strong>Dev:</strong> <a target="_blank" href="https://dev.to/prafulpatel16">Follow me on Dev for developer-centric content</a> 🖥️</p>
<p>PRAFUL PATEL</p>
]]></content:encoded></item><item><title><![CDATA[🚀  AWS - Exporting Data from Amazon RDS to Amazon S3 Using AWS DMS]]></title><description><![CDATA[🚀 Introduction: The world of cloud computing demands streamlined and efficient data movement. Our exploration begins with understanding how to optimize this data transfer process, focusing on the synergy between Amazon RDS, a managed relational data...]]></description><link>https://praful.cloud/aws-exporting-data-from-amazon-rds-to-amazon-s3-using-aws-dms</link><guid isPermaLink="true">https://praful.cloud/aws-exporting-data-from-amazon-rds-to-amazon-s3-using-aws-dms</guid><category><![CDATA[AWS #CloudEngineering #CloudComputing #AmazonWebServices #AWSArchitecture #DevOps #CloudSolutions #CloudSecurity #InfrastructureAsCode #AWSCertification #Serverless #AWSCommunity #TechBlogs #CloudExperts #CloudMigration #CloudOps #AWSJobs #TechIndustry #CareerInTech #InnovationInCloud #devops #cloudengineerjobs #devopsjobs #azure #gcp #oci #cloudjobs, #kubernetes]]></category><dc:creator><![CDATA[Praful Patel]]></dc:creator><pubDate>Wed, 29 Nov 2023 23:02:30 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1701297230731/ef0f1cda-28ee-4cd4-80ec-983f64f92237.gif" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>🚀 Introduction:</strong> The world of cloud computing demands streamlined and efficient data movement. Our exploration begins with understanding how to optimize this data transfer process, focusing on the synergy between Amazon RDS, a managed relational database service, and Amazon S3, a scalable and secure object storage solution. The star of our show is AWS Database Migration Service (DMS), which simplifies the complex task of migrating and replicating databases.</p>
<p><strong>Features:</strong></p>
<p><strong>Data Replication:</strong></p>
<ul>
<li><p><strong>Description:</strong> AWS DMS enables real-time data replication between various supported databases. Changes made in the source database are automatically reflected in the target, ensuring data consistency.</p>
</li>
<li><p><strong>Use Cases:</strong> Continuous data synchronization, real-time reporting, and analytics.</p>
</li>
</ul>
<p><strong>Database Migration:</strong></p>
<ul>
<li><p><strong>Description:</strong> DMS simplifies the migration of databases to and from the AWS Cloud. It supports homogeneous and heterogeneous migrations, allowing seamless transitions between different database engines.</p>
</li>
<li><p><strong>Use Cases:</strong> Cloud migration, database version upgrades, and platform changes.</p>
</li>
</ul>
<p><strong>Change Data Capture (CDC):</strong></p>
<ul>
<li><p><strong>Description:</strong> DMS captures and tracks changes in the source database, enabling incremental updates in the target. This feature is crucial for minimizing downtime during migrations.</p>
</li>
<li><p><strong>Use Cases:</strong> Minimizing downtime during migrations, supporting ongoing application operations.</p>
</li>
</ul>
<p><strong>Schema Conversion:</strong></p>
<ul>
<li><p><strong>Description:</strong> DMS assists in converting the source database schema to match the target database, ensuring compatibility during migrations between different database engines.</p>
</li>
<li><p><strong>Use Cases:</strong> Migrating to a different database engine, ensuring seamless data structure transitions.</p>
</li>
</ul>
<p><strong>Data Filtering:</strong></p>
<ul>
<li><p><strong>Description:</strong> DMS allows the selective migration of data based on specific criteria, reducing the need to migrate entire databases.</p>
</li>
<li><p><strong>Use Cases:</strong> Migrating specific subsets of data, optimizing migration bandwidth.</p>
</li>
</ul>
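<p>Data filtering is expressed through DMS table mappings. The snippet below builds a minimal selection rule that includes every table in a single schema; the schema name <code>sales</code> is a hypothetical example, and a real task would receive this JSON as its table-mappings setting:</p>
<pre><code class="lang-python">import json

# Minimal DMS table-mapping document with one selection rule.
table_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-sales",
            "object-locator": {"schema-name": "sales", "table-name": "%"},  # % matches all tables
            "rule-action": "include",
        }
    ]
}

print(json.dumps(table_mappings))
</code></pre>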
<p><strong>Task Scheduling:</strong></p>
<ul>
<li><p><strong>Description:</strong> DMS supports the scheduling of migration and replication tasks, allowing users to plan and automate data transfer activities.</p>
</li>
<li><p><strong>Use Cases:</strong> Automating routine data transfer tasks, optimizing resource utilization.</p>
</li>
</ul>
<p><strong>Security and Encryption:</strong></p>
<ul>
<li><p><strong>Description:</strong> DMS ensures the security of data during transit by supporting encryption options for data in motion.</p>
</li>
<li><p><strong>Use Cases:</strong> Meeting security compliance standards, protecting sensitive data during migrations.</p>
</li>
</ul>
<p><strong>Pre-requisite:</strong></p>
<p>Before embarking on this journey, ensure you have all the necessary prerequisites in place. This includes AWS credentials, appropriate access to your Amazon RDS instance, a configured S3 bucket, and the required permissions for AWS DMS.</p>
<p><strong>🎯 Objective:</strong> The primary goal of this blog post is to provide a step-by-step guide to exporting data from an Amazon RDS instance (specifically MySQL) to Amazon S3. We aim to empower users with the knowledge to automate and simplify this data export process, enabling them to harness the full potential of AWS services for their data management needs.</p>
<p><strong>🚀 Use Case:</strong> Consider a scenario where organizations frequently require the export of data from their Amazon RDS MySQL database to Amazon S3. This could be for various reasons such as running analytics, creating secure backups, or facilitating seamless collaboration across different services. Our use case centers around addressing the challenges and intricacies of this data export process, offering a practical and efficient solution.</p>
<p><strong>Cloud Adoption:</strong></p>
<ul>
<li><strong>Scenario:</strong> An organization is transitioning from an on-premises database to the AWS Cloud. DMS is employed to migrate the existing database to Amazon RDS with minimal downtime.</li>
</ul>
<p><strong>Database Version Upgrade:</strong></p>
<ul>
<li><strong>Scenario:</strong> A company is upgrading its database engine version to leverage new features and improvements. DMS is utilized to perform the upgrade seamlessly, ensuring data integrity.</li>
</ul>
<p><strong>Continuous Data Synchronization:</strong></p>
<ul>
<li><strong>Scenario:</strong> In a scenario where real-time data updates are critical, such as in e-commerce applications, DMS is used to replicate changes from the transactional database to a reporting database.</li>
</ul>
<p><strong>Data Warehousing:</strong></p>
<ul>
<li><strong>Scenario:</strong> An organization wants to consolidate data from multiple databases into a central data warehouse on Amazon Redshift. DMS facilitates the ongoing data replication for analytics purposes.</li>
</ul>
<p><strong>Scenario Usage:</strong></p>
<p><strong>Scenario: Migrating from an On-Premises Oracle Database to Amazon Aurora MySQL Database</strong></p>
<ol>
<li><p><strong>Setup:</strong></p>
<ul>
<li><p>Set up source and target endpoints in AWS DMS for the Oracle database and Aurora MySQL database.</p>
</li>
<li><p>Configure the necessary connection details, security settings, and migration task settings.</p>
</li>
</ul>
</li>
<li><p><strong>Data Replication:</strong></p>
<ul>
<li><p>Initiate a full load of existing data from Oracle to Aurora using DMS.</p>
</li>
<li><p>Activate Change Data Capture (CDC) to capture ongoing changes in the Oracle database.</p>
</li>
</ul>
</li>
<li><p><strong>Continuous Synchronization:</strong></p>
<ul>
<li><p>Monitor the ongoing replication process to ensure real-time updates from Oracle to Aurora.</p>
</li>
<li><p>Test the synchronization by making changes in the Oracle database and verifying their timely reflection in Aurora.</p>
</li>
</ul>
</li>
<li><p><strong>Schema Conversion:</strong></p>
<ul>
<li><p>Utilize DMS schema conversion tools to handle any necessary schema transformations during the migration.</p>
</li>
<li><p>Ensure compatibility between Oracle and Aurora MySQL data structures.</p>
</li>
</ul>
</li>
<li><p><strong>Completion:</strong></p>
<ul>
<li><p>Once satisfied with the synchronization and migration, complete the DMS task.</p>
</li>
<li><p>Redirect applications to use the Aurora MySQL database as the new primary data source.</p>
</li>
</ul>
</li>
</ol>
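<p>When the migration task is created in step 2, a table-mapping document tells DMS which schemas and tables to replicate. A minimal sketch of such a mapping, assuming a hypothetical <code>HR</code> schema on the Oracle source (rule names and schema are illustrative):</p>

```python
import json

# Hypothetical DMS table-mapping document: include every table in an
# example "HR" schema for the full load + CDC replication task.
table_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-hr-schema",
            "object-locator": {"schema-name": "HR", "table-name": "%"},
            "rule-action": "include",
        }
    ]
}

# DMS expects the mapping as a JSON string when the task is created.
print(json.dumps(table_mappings, indent=2))
```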
<p><strong>🌐 Solution Diagram:</strong></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/izrritilttul9dt40c6d.png" alt="Image description" /></p>
<p><strong>Tools &amp; Technologies Covered:</strong></p>
<ul>
<li><p><strong>AWS Cloud ☁️:</strong> Foundation for the solution, providing limitless possibilities for building and deploying applications.</p>
</li>
<li><p><strong>Networking, VPC, Security Group, VPC Endpoint 🏞️:</strong> Ensures secure and efficient communication between services.</p>
</li>
<li><p><strong>RDS (MySQL) 🗄️:</strong> Manages and maintains the MySQL database.</p>
</li>
<li><p><strong>S3 📤:</strong> Scalable object storage for securely storing and retrieving data.</p>
</li>
<li><p><strong>AWS Secret Manager 🔐:</strong> Safely stores and manages sensitive information.</p>
</li>
<li><p><strong>AWS DMS (Database Migration Service) 🔄:</strong> Facilitates seamless data migration between databases.</p>
</li>
</ul>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mye4u6ftqlnw4gulnmgw.png" alt="Image description" /></p>
<p><strong>Create VPC Endpoint:</strong></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d4khxv02ky6z5q4sm68y.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fx391q5ww0tn29anjh5e.png" alt="Image description" /></p>
<p><strong>Select VPC, Subnets and Security Group:</strong></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qpkp7ss5tlmgnf658gx6.png" alt="Image description" /></p>
<p><strong>Click to create endpoint:</strong></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bn9ngbb56oyrv946p7xe.png" alt="Image description" /></p>
<p><strong>Copy the S3 VPC Endpoint ID:</strong></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hwhldmkc1ptz14lo3z09.png" alt="Image description" /></p>
<p><strong>Endpoint ID: vpce-08ec6969fd89be2fc</strong></p>
<p><strong>Go to S3 Service:</strong></p>
<p><strong>Select the Bucket created earlier:</strong></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jtv5cy2fteuawgpqzhbv.png" alt="Image description" /></p>
<p><strong>Click to Permission - Edit:</strong></p>
<p>![Image description](<a target="_blank" href="https://dev-to-uploads.s3.amazonaws">https://dev-to-uploads.s3.amazonaws</a></p>
<p>.com/uploads/articles/3x5rvrg346oijvs1a308.png)</p>
<p><strong>Enter the policy to policy section:</strong></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ythaxlihjk8o1i1ptk5y.png" alt="Image description" /></p>
<pre><code class="lang-json">{
    <span class="hljs-attr">"Version"</span>: <span class="hljs-string">"2012-10-17"</span>,
    <span class="hljs-attr">"Id"</span>: <span class="hljs-string">"Access-to-bucket-using-specific-endpoint"</span>,
    <span class="hljs-attr">"Statement"</span>: [
        {
            <span class="hljs-attr">"Sid"</span>: <span class="hljs-string">"Access-to-specific-VPCE"</span>,
            <span class="hljs-attr">"Effect"</span>: <span class="hljs-string">"Allow"</span>,
            <span class="hljs-attr">"Principal"</span>: <span class="hljs-string">"*"</span>,
            <span class="hljs-attr">"Action"</span>: [<span class="hljs-string">"s3:List*"</span>, <span class="hljs-string">"s3:Put*"</span>, <span class="hljs-string">"s3:Get*"</span>],
            <span class="hljs-attr">"Resource"</span>: [
                <span class="hljs-string">"arn:aws:s3:::bucket-lab-rds-export-nbtwmnxdjwpsjtdi"</span>,
                <span class="hljs-string">"arn:aws:s3:::bucket-lab-rds-export-nbtwmnxdjwpsjtdi/*"</span>
            ],
            <span class="hljs-attr">"Condition"</span>: {
                <span class="hljs-attr">"StringEquals"</span>: {
                    <span class="hljs-attr">"aws:sourceVpce"</span>: <span class="hljs-string">"vpce-08ec6969fd89be2fc"</span>
                }
            }
        },
        {
            <span class="hljs-attr">"Sid"</span>: <span class="hljs-string">"Access-to-specific-iam-user"</span>,
            <span class="hljs-attr">"Effect"</span>: <span class="hljs-string">"Allow"</span>,
            <span class="hljs-attr">"Principal"</span>: {
                <span class="hljs-attr">"AWS"</span>: <span class="hljs-string">"arn:aws:iam::715900322913:user/username"</span>
            },
            <span class="hljs-attr">"Action"</span>: [<span class="hljs-string">"s3:List*"</span>, <span class="hljs-string">"s3:Get*"</span>],
            <span class="hljs-attr">"Resource"</span>: [
                <span class="hljs-string">"arn:aws:s3:::bucket-lab-rds-export-nbtwmnxdjwpsjtdi"</span>,
                <span class="hljs-string">"arn:aws:s3:::bucket-lab-rds-export-nbtwmnxdjwpsjtdi/*"</span>
            ]
        }
    ]
}
</code></pre>
<p><strong>S3 Bucket policy successfully updated.</strong></p>
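<p>If you keep this policy in code rather than pasting it into the console, it can be templated with a short Python snippet. The ARNs and endpoint ID below are the ones from this lab; substitute your own values:</p>

```python
import json

# Values from this lab - replace with your own bucket, endpoint, and user.
bucket_arn = "arn:aws:s3:::bucket-lab-rds-export-nbtwmnxdjwpsjtdi"
vpce_id = "vpce-08ec6969fd89be2fc"
user_arn = "arn:aws:iam::715900322913:user/username"

# Rebuild the bucket policy shown above: one statement keyed to the S3
# VPC endpoint, one granting read access to a specific IAM user.
policy = {
    "Version": "2012-10-17",
    "Id": "Access-to-bucket-using-specific-endpoint",
    "Statement": [
        {
            "Sid": "Access-to-specific-VPCE",
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:List*", "s3:Put*", "s3:Get*"],
            "Resource": [bucket_arn, bucket_arn + "/*"],
            "Condition": {"StringEquals": {"aws:sourceVpce": vpce_id}},
        },
        {
            "Sid": "Access-to-specific-iam-user",
            "Effect": "Allow",
            "Principal": {"AWS": user_arn},
            "Action": ["s3:List*", "s3:Get*"],
            "Resource": [bucket_arn, bucket_arn + "/*"],
        },
    ],
}
print(json.dumps(policy, indent=4))
```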
<p><strong>Go to Secret Manager:</strong></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9usm6lcyron5bjwte4xt.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gp3ej8jlg7t621yxmvxi.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p1kd9kilmheuibal9fya.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rp6cu9d9hut1ysi6qb3l.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/12xf80aet6gnqutt0rdm.png" alt="Image description" /></p>
<p><strong>Copy ARN of the stored secret:</strong></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t4k1b63m57wwubsyrdcx.png" alt="Image description" /></p>
<p><strong>Go to Database Migration Service:</strong></p>
<p><strong>Ensure that the replication instance is available:</strong></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sg6xozx92vh8u2x7ibiy.png" alt="Image description" /></p>
<p><strong>Go to Migrate data &gt; Endpoints &gt; Create endpoint:</strong></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1b6huezjmgkdw6amkf6f.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gbnw0xmw3yngaihr0mkx.png" alt="Image description" /></p>
<p><strong>Endpoint created:</strong></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/67jv51nywt5pwna0vfsq.png" alt="Image description" /></p>
<p><strong>Create Target Endpoint:</strong></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nhkvmnb5x18azyqmbyk1.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0ii72h472zwte2st3x1i.png" alt="Image description" /></p>
<p><strong>Endpoint created:</strong></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/61p1hmtgxqgmvj41sbpm.png" alt="Image description" /></p>
<p><strong>Test Source Endpoint Connection:</strong></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uwk3zvryfed51z60ig60.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j9kipq8ywqpqvm4n6tw8.png" alt="Image description" /></p>
<p><strong>Status: Testing:</strong></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gkik35clncun47blq1pr.png" alt="Image description" /></p>
<p><strong>Status: Successful:</strong></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jgwnldmqlh360f444f30.png" alt="Image description" /></p>
<p><strong>Go to Database migration tasks &gt; Create task:</strong></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/92zat41ad8m120zb5hnr.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gzup6pchnqh42ba8fek2.png" alt="Image description" /></p>
<p><strong>Task is in progress:</strong></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/urna0v2m26elnhijosro.png" alt="Image description" /></p>
<p><strong>Task: Started:</strong></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pt7byebhfsys2h6p7bw5.png" alt="Image description" /></p>
<p><strong>Status: Load Complete:</strong></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2jl924wy9cgbtb2lhx0l.png" alt="Image description" /></p>
<p><strong>Verify that the "export" folder is created in the AWS S3 bucket:</strong></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2mfu8fbnt9qn9xpn3lbx.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/74qn5zjmqtdd3ivb6nnw.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2aa50gwz7fhnft4xsk8l.png" alt="Image description" /></p>
<p><strong>Verify that the .csv file is loaded into the S3 folder:</strong></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/roc34qwh6x2yywzsweb7.png" alt="Image description" /></p>
<p><strong>Open the .csv file and verify that the sample data is loaded:</strong></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oxnyex2sc9dlj6nvogc4.png" alt="Image description" /></p>
<p><strong>Conclusion:</strong> In conclusion, the efficiency of exporting data from Amazon RDS to Amazon S3 plays a pivotal role in optimizing data workflows. Leveraging the capabilities of AWS DMS, we aim to simplify and automate this process, ensuring data integrity and accessibility. Join us on this journey as we unlock the power of AWS services in enhancing your data management strategies. 🌐📈🚀</p>
<p>AWS Ref: <a target="_blank" href="https://aws.amazon.com/dms/">https://aws.amazon.com/dms/</a></p>
<p>🌐 <strong>Website:</strong> <a target="_blank" href="http://praful.cloud">praful.cloud</a> 🚀<br />🔗 <strong>LinkedIn:</strong> <a target="_blank" href="https://linkedin.com/in/prafulpatel16">Connect with me on LinkedIn</a> 🤝<br />💻 <strong>GitHub:</strong> <a target="_blank" href="https://github.com/prafulpatel16/prafulpatel16">Explore my projects on GitHub</a> 📂<br />🎥 <strong>YouTube:</strong> <a target="_blank" href="https://www.youtube.com/@prafulpatel16">Check out my tech tutorials on YouTube</a> 🎬<br />📝 <strong>Medium:</strong> <a target="_blank" href="https://medium.com/@prafulpatel16">Read my tech articles on Medium</a> 📚<br />🔗 <strong>Dev:</strong> <a target="_blank" href="https://dev.to/prafulpatel16">Follow me on Dev for developer-centric content</a> 🖥️</p>
<p>Connect with me on these platforms and stay updated with the latest in Cloud/DevOps technology 🚀🔗😊</p>
<p><a target="_blank" href="https://ca.linkedin.com/in/prafulpatel16?trk=profile-badge">PRAFUL PATEL</a></p>
<p>#AWS #CloudEngineering #CloudComputing #AmazonWebServices #AWSArchitecture #DevOps #CloudSolutions #CloudSecurity #InfrastructureAsCode #AWSCertification #Serverless #AWSCommunity #TechBlogs #CloudExperts #CloudMigration #CloudOps #AWSJobs #TechIndustry #CareerInTech #InnovationInCloud #devops #cloudengineerjobs #devopsjobs #azure #gcp #oci #cloudjobs #kubernetes</p>
]]></content:encoded></item><item><title><![CDATA[🌐AWS-Seamless Network Connectivity - AWS Transit Gateway Deployment]]></title><description><![CDATA[🚀 Introduction In the dynamic realm of cloud infrastructure, the deployment of AWS Transit Gateway stands out as a revolutionary solution. This robust networking service enables seamless connectivity and efficient management of resources across mult...]]></description><link>https://praful.cloud/seamless-network-connectivity-aws-transit-gateway-deployment</link><guid isPermaLink="true">https://praful.cloud/seamless-network-connectivity-aws-transit-gateway-deployment</guid><category><![CDATA[#AWSCommunityBuilders #CloudEngineering #CloudComputing #AmazonWebServices #AWSArchitecture #DevOps #CloudSolutions #CloudSecurity #InfrastructureAsCode #AWSCertification #Serverless #AWSCommunity #TechBlogs #CloudExperts #CloudMigration #CloudOps #AWSJobs #TechIndustry #CareerInTech #InnovationInCloud #devops #cloudengineerjobs #devopsjobs #azure #gcp #oci #cloudjobs #kubernetes]]></category><dc:creator><![CDATA[Praful Patel]]></dc:creator><pubDate>Thu, 23 Nov 2023 05:04:49 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1700715733962/b04e89c3-563c-4ed4-8539-3ce7efa11fab.gif" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>🚀 Introduction In the dynamic realm of cloud infrastructure, the deployment of AWS Transit Gateway stands out as a revolutionary solution. This robust networking service enables seamless connectivity and efficient management of resources across multiple Amazon Virtual Private Clouds (VPCs). Join us on a journey where we unravel the intricacies of AWS Transit Gateway deployment, unlocking new possibilities for your organization's network architecture.</p>
<p>🎯 Objective</p>
<p>Our primary objective is to demystify the deployment process of AWS Transit Gateway, empowering organizations to build a flexible and scalable network infrastructure. From enhanced connectivity to simplified management, our exploration aims to equip you with the knowledge to optimize your network resources and elevate your cloud experience.</p>
<p>🚀 Use Case</p>
<p>Let's dive into a real-world scenario that demonstrates the tangible benefits of AWS Transit Gateway deployment for different departments within your organization.</p>
<p>Sales Department: Imagine a scenario where the Sales team operates across various regions, each with its dedicated VPC. With AWS Transit Gateway, seamless communication is established between these VPCs, ensuring that sales applications and data are easily accessible. This not only enhances collaboration but also accelerates sales processes, contributing to increased efficiency and faster decision-making.</p>
<p>Marketing Department: For the Marketing team, running campaigns often involves diverse applications and services distributed across different VPCs. AWS Transit Gateway streamlines the connectivity, allowing marketing resources to interact effortlessly. Whether it's accessing analytics data or coordinating marketing tools, the deployment ensures a unified and efficient network, boosting overall productivity.</p>
<p>HR Department: In the HR domain, privacy and secure communication are paramount. AWS Transit Gateway facilitates a secure network environment, ensuring that HR-related applications and databases are interconnected with enhanced security measures. This ensures confidential employee data is transmitted securely, aligning with compliance standards and bolstering data integrity.</p>
<p>This use case exemplifies how AWS Transit Gateway deployment caters to the distinct needs of various departments within your organization, fostering collaboration, security, and efficiency.</p>
<p><strong>Transit Gateway Features:</strong></p>
<ul>
<li><strong>Hub and Spoke Architecture:</strong> AWS Transit Gateway follows a hub-and-spoke architecture, allowing centralized connectivity and management.</li>
<li><strong>Global Reach:</strong> Transit Gateway supports a global network that spans multiple AWS Regions, providing a unified and scalable solution.</li>
<li><strong>Inter-Region Peering:</strong> Facilitates peering between Transit Gateways in different regions, enabling seamless communication across regions.</li>
<li><strong>VPN and Direct Connect Attachment:</strong> Connects remote networks using VPN and AWS Direct Connect, providing secure and reliable communication.</li>
<li><strong>Routing Control:</strong> Offers flexible and granular routing control, allowing customization of the traffic flow within the network.</li>
<li><strong>Network Manager Integration:</strong> Integrates with AWS Network Manager, simplifying the management of global networks and providing a centralized view.</li>
<li><strong>Scale Out:</strong> Scales horizontally to accommodate a growing number of VPCs and on-premises networks, ensuring scalability.</li>
<li><strong>Security Integration:</strong> Seamlessly integrates with AWS security features, allowing the enforcement of security policies across the network.</li>
<li><strong>CloudWatch Metrics and Monitoring:</strong> Provides CloudWatch metrics for monitoring network performance and enabling proactive management.</li>
<li><strong>Resource Group Tagging:</strong> Supports resource group tagging, making it easier to organize and manage resources within the Transit Gateway.</li>
<li><strong>Multicast Support:</strong> Enables multicast traffic support, allowing the transmission of data to multiple recipients simultaneously.</li>
<li><strong>Centralized Network Inspection:</strong> Facilitates centralized network inspection and monitoring for enhanced visibility and control.</li>
<li><strong>Transit Gateway Connect:</strong> Introduces Transit Gateway Connect for simplified VPN connectivity and better traffic engineering capabilities.</li>
<li><strong>VPC Ingress Routing:</strong> Offers VPC Ingress Routing, allowing for more granular control over the egress path of traffic leaving a VPC.</li>
</ul>
<p>🛠️ Solution Diagram</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p1jjgaorrm0l5c5gnyn7.png" alt="Image description" /></p>
<p>Tools &amp; Technologies Covered:</p>
<ol>
<li><p>AWS cloud</p>
</li>
<li><p>AWS vpc</p>
</li>
<li><p>AWS subnets</p>
</li>
<li><p>AWS internet gateway</p>
</li>
<li><p>AWS route tables</p>
</li>
<li><p>AWS security groups</p>
</li>
<li><p>AWS EC2 machine</p>
</li>
<li><p>Transit Gateway</p>
</li>
</ol>
<hr />
<h2 id="heading-transit-gateway-deployment">TRANSIT GATEWAY DEPLOYMENT</h2>
<ol>
<li><p>Create VPCs</p>
</li>
<li><p>Create Subnets</p>
</li>
<li><p>Create internet gateway</p>
</li>
<li><p>Attach internet gateway to VPCs</p>
</li>
<li><p>Create route tables</p>
<ul>
<li>Subnet association</li>
<li>Add route entries</li>
</ul>
</li>
<li><p>Create security groups</p>
</li>
<li><p>Transit gateway</p>
<ul>
<li>Create transit gateway</li>
<li>Attach VPCs to transit gateway</li>
<li>Add routes between transit gateway and VPCs</li>
<li>Launch EC2 webservers in each VPC</li>
<li>Test the transit gateway connectivity</li>
</ul>
</li>
</ol>
<p>Create VPCs</p>
<table>
<thead><tr><th>VPC</th><th>VPC CIDR Block</th><th>Availability Zone</th><th>Subnet CIDR Block</th></tr></thead>
<tbody>
<tr><td>VPC-01-HR</td><td>20.0.0.0/16</td><td>us-east-1a</td><td>20.0.0.0/24</td></tr>
<tr><td>VPC-02-SALES</td><td>192.168.0.0/16</td><td>us-east-1a</td><td>192.168.0.0/24</td></tr>
<tr><td>VPC-03-MRKT</td><td>172.10.0.0/16</td><td>us-east-1a</td><td>172.10.0.0/24</td></tr>
</tbody>
</table>
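<p>Because the transit gateway will route between all three VPCs, their CIDR blocks must not overlap. A quick sanity check with Python's standard <code>ipaddress</code> module:</p>

```python
import ipaddress

# The three VPC CIDR blocks used in this lab.
vpc_cidrs = {
    "VPC-01-HR": "20.0.0.0/16",
    "VPC-02-SALES": "192.168.0.0/16",
    "VPC-03-MRKT": "172.10.0.0/16",
}

# Transit Gateway routing needs non-overlapping CIDRs, so verify that
# no pair of VPC networks overlaps before creating anything.
networks = {name: ipaddress.ip_network(cidr) for name, cidr in vpc_cidrs.items()}
names = list(networks)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        assert not networks[a].overlaps(networks[b]), f"{a} overlaps {b}"
print("No overlapping VPC CIDRs")
```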
<p>Create VPC-01-HR</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6d7q3osyaf11rx7nkfuj.png" alt="Image description" /></p>
<p>Create VPC-02-SALES</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h0hdcwxrzejkygqx1rq7.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d6q6unyy3dgo0lp0rxtb.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hyri4yrvu1yewo64sl77.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y2ef5xnmg0xxxh1m2yq4.png" alt="Image description" /></p>
<p>Create Subnets</p>
<table>
<thead><tr><th>VPC</th><th>Subnet</th><th>Availability Zone</th><th>Subnet CIDR Block</th></tr></thead>
<tbody>
<tr><td>VPC-01-HR (20.0.0.0/16)</td><td>Public-subnet01-HR</td><td>us-east-1a</td><td>20.0.1.0/24</td></tr>
<tr><td>VPC-01-HR (20.0.0.0/16)</td><td>Private-subnet02-HR</td><td>us-east-1b</td><td>20.0.2.0/24</td></tr>
<tr><td>VPC-02-SALES (192.168.0.0/16)</td><td>Public-subnet01-SALES</td><td>us-east-1a</td><td>192.168.1.0/24</td></tr>
<tr><td>VPC-02-SALES (192.168.0.0/16)</td><td>Private-subnet02-SALES</td><td>us-east-1b</td><td>192.168.2.0/24</td></tr>
<tr><td>VPC-03-MRKT (172.10.0.0/16)</td><td>Public-subnet01-MRKT</td><td>us-east-1a</td><td>172.10.1.0/24</td></tr>
<tr><td>VPC-03-MRKT (172.10.0.0/16)</td><td>Private-subnet02-MRKT</td><td>us-east-1b</td><td>172.10.2.0/24</td></tr>
</tbody>
</table>
<p><strong>VPC-01-HR (20.0.0.0/16):</strong> Public-subnet01-HR (us-east-1a, 20.0.1.0/24) and Private-subnet02-HR (us-east-1b, 20.0.2.0/24)</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2o7dqks8r34r39lzyu58.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rtg0e85lejkx8ril4jk3.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rp18380zkz5xhaf1787h.png" alt="Image description" /></p>
<p><strong>VPC-02-SALES (192.168.0.0/16):</strong> Public-subnet01-SALES (us-east-1a, 192.168.1.0/24) and Private-subnet02-SALES (us-east-1b, 192.168.2.0/24)</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5e7d3ueqr1yegj4pyy8k.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/spn2hn748y5okxfcry6d.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/036tt5x4d7jmblmsgo2q.png" alt="Image description" /></p>
<p><strong>VPC-03-MRKT (172.10.0.0/16):</strong> Public-subnet01-MRKT (us-east-1a, 172.10.1.0/24) and Private-subnet02-MRKT (us-east-1b, 172.10.2.0/24)</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0s0pjuiijx43dxi14kgq.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/imqanqlg4daf8zr2m256.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u9lo0361t3borqxt98nb.png" alt="Image description" /></p>
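<p>As a sanity check, the subnet plan above can also be validated programmatically: every subnet CIDR must fall inside its parent VPC's CIDR block.</p>

```python
import ipaddress

# Subnet plan from this lab: each subnet must fit inside its VPC CIDR.
plan = {
    "20.0.0.0/16": ["20.0.1.0/24", "20.0.2.0/24"],           # VPC-01-HR
    "192.168.0.0/16": ["192.168.1.0/24", "192.168.2.0/24"],  # VPC-02-SALES
    "172.10.0.0/16": ["172.10.1.0/24", "172.10.2.0/24"],     # VPC-03-MRKT
}

for vpc_cidr, subnets in plan.items():
    vpc_net = ipaddress.ip_network(vpc_cidr)
    for s in subnets:
        assert ipaddress.ip_network(s).subnet_of(vpc_net), f"{s} not in {vpc_cidr}"
print("All subnets fit inside their VPC CIDR blocks")
```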
<p>Create Internet gateway</p>
<table>
<thead><tr><th>VPC</th><th>Internet Gateway</th></tr></thead>
<tbody>
<tr><td>VPC-01-HR</td><td>IGW01-HR</td></tr>
<tr><td>VPC-02-SALES</td><td>IGW02-SALES</td></tr>
<tr><td>VPC-03-MRKT</td><td>IGW03-MRKT</td></tr>
</tbody>
</table>
<p>VPC-01-HR IGW01-HR</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wkuv6kvrdoxaszxw34pd.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dq5xo91hjksz4kfr68zr.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9rsbwuimj2yub8ecdcg2.png" alt="Image description" /></p>
<p>VPC-02-SALES IGW02-SALES</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vqkyzm7u4ftqv6vuwgkf.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8d0rpcmevexzglka4546.png" alt="Image description" /></p>
<p>Attach to VPC</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r8hrpcx990pcits1e8pk.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/laodcx72xdxme2j572pr.png" alt="Image description" /></p>
<p>VPC-03-MRKT IGW03-MRKT 172.10.2.0/24</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9amut4yosr1ki7vs5loz.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3ts5o13dtovg9eq1xg7m.png" alt="Image description" /></p>
<p>Attached to VPC</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h64bsjng88qnqv0r4nrh.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7qcfd06miuytkz3ress7.png" alt="Image description" /></p>
<p>Create Route tables</p>
<table>
<thead><tr><th>VPC</th><th>Route Table</th></tr></thead>
<tbody>
<tr><td>VPC-01-HR</td><td>Public-RT01-HR</td></tr>
<tr><td>VPC-02-SALES</td><td>Public-RT02-SALES</td></tr>
<tr><td>VPC-03-MRKT</td><td>Public-RT03-MRKT</td></tr>
</tbody>
</table>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/07qwii6thtqpnfp0i4mj.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7nv2y0gjsooxkzlm6z31.png" alt="Image description" /></p>
<p>Edit routes – Add IGW</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1p0dokpb01ntu1ub0swx.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h87u6jl9rae7t3e5hf7p.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/03i31ez5wnkkf5pt1ezi.png" alt="Image description" /></p>
<p>Subnet Association</p>
<p>Associate public HR subnet</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o9tt9uerlgiqxtb29h3i.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0h83lt516pzcnh4a6i70.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vibuzkcqc51bfgo7hvle.png" alt="Image description" /></p>
<p>Create Public-RT02-SALES</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wwqt0zsxt5hg7r758026.png" alt="Image description" /></p>
<p>Add IGW</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mmt982t0kqdq7ddhn478.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c8j5ofo4lzah2bpobi25.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/19hkjejlcgj89jix0meu.png" alt="Image description" /></p>
<p>Subnet Association:</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1oseli93ukfthtnqvfnv.png" alt="Image description" /></p>
<p>Associate public subnet</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5dsbo3c1n5xg4cjgvoae.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nntohfmp0fo9t55m9qd7.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/umtq5pec1edm4mi0uxyq.png" alt="Image description" /></p>
<p>Create Route Public-RT03-MRKT</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y9atyb6tzxi89wxsut7o.png" alt="Image description" /></p>
<p>Add igw</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ecpa2a4mo6d5hqfbmud5.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n40e1qpnnzs9p7yaoosf.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hi6vx7vhlb7fwsaw6m0k.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7udbwfzx3pfbee8w24vz.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qfb1q5cwd59hsrvzkbhl.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cycs5lavvstt1ox8i2v8.png" alt="Image description" /></p>
<p>Three route tables created</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3wdx8piqdxgw73mytf2h.png" alt="Image description" /></p>
<p>Create Transit gateway</p>
<p>Transit gateway: prafect-tgw</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/glhqxza92ssjjkgsk9jp.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rzucf33si4hpga3xx4ll.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m11a880iaminwbofhwbi.png" alt="Image description" /></p>
<p>Create Transit gateway attachments: Attachment01: VPC01-HR</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mipyy6oq12xe2b62iodp.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3s1nrgz4qw6on88kbinp.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/utebmhcguxpn509ujb7m.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bgsioap81r2ugve26lm2.png" alt="Image description" /></p>
<p>Attachment02: VPC02-SALES</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/csoyan5k651hfhpjrpjb.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tbgq18k8eo58xz27f96g.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/il7fzxfmsgmorzu53sb2.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kk60e16mhib3m19oqir0.png" alt="Image description" /></p>
<p>Attachment03: VPC03-MRKT</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yixeijqgp18kb29hckui.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/flo175765rrw51g0rltd.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t5msggfdnqxobj1017vq.png" alt="Image description" /></p>
<p>Association</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4jer6nhxygb2zoazh1pk.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8ao0mpx2q0zum9i2y5wa.png" alt="Image description" /></p>
<p>Propagations</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u2y7lgq120dgnzif6hnb.png" alt="Image description" /></p>
<p>Routes</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uh7hz3lkzvmo7ogvwi3s.png" alt="Image description" /></p>
<p><strong>Update Route Tables of VPCs</strong></p>
<table>
<thead><tr><th>VPC</th><th>VPC CIDR Block</th><th>Availability Zone</th><th>Subnet CIDR Block</th></tr></thead>
<tbody>
<tr><td>VPC-01-HR</td><td>20.0.0.0/16</td><td>us-east-1a</td><td>20.0.0.0/24</td></tr>
<tr><td>VPC-02-SALES</td><td>192.168.0.0/16</td><td>us-east-1a</td><td>192.168.0.0/24</td></tr>
<tr><td>VPC-03-MRKT</td><td>172.10.0.0/16</td><td>us-east-1a</td><td>172.10.0.0/24</td></tr>
</tbody>
</table>
<p>Add cross routes for VPC-01-HR: go to route table <strong>Public-RT01-HR</strong> and add TGW routes for VPC-02-SALES and VPC-03-MRKT.</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q3wktkv6ssipg3n383m0.png" alt="Image description" /></p>
<p>VPC-02-SALES 192.168.0.0/16</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p39fhvekyg20vni8e9lg.png" alt="Image description" /></p>
<p>VPC-03-MRKT 172.10.0.0/16</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wttgkgorcj2gx4ahe3tk.png" alt="Image description" /></p>
<p>Add Cross route for VPC02-SALES</p>
<p>Select route table <strong>Public-RT02-SALES</strong> and add TGW routes for VPC-01-HR (20.0.0.0/16) and VPC-03-MRKT (172.10.0.0/16).</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cyu957pl7ae8mqlzjd4y.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bekepjgymq9yiuv4ezl3.png" alt="Image description" /></p>
<p>Cross routes added</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hlwyvqzxiodqg6el0u4q.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n3x10ek9orhhsi0pa6hv.png" alt="Image description" /></p>
<p>Select route table <strong>Public-RT03-MRKT</strong> and add TGW routes for VPC-01-HR (20.0.0.0/16) and VPC-02-SALES (192.168.0.0/16).</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/egf381a3j1c03fw0u6in.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qu1f726yuvxfo7zg3igu.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f29d1zyc74ns22o0z94x.png" alt="Image description" /></p>
<p>Cross route added</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4grbpwnvz9uz1gx8qjan.png" alt="Image description" /></p>
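<p>The cross routes added above follow a simple pattern: each VPC route table gets a route to the other two VPC CIDRs, all pointing at the transit gateway. A sketch of that plan as data (the transit gateway ID below is a placeholder; use your own):</p>

```python
# Placeholder transit gateway ID - substitute the ID of your own TGW.
TGW_ID = "tgw-0123456789abcdef0"

# Each route table and the CIDR of the VPC it belongs to.
rt_cidrs = {
    "Public-RT01-HR": "20.0.0.0/16",
    "Public-RT02-SALES": "192.168.0.0/16",
    "Public-RT03-MRKT": "172.10.0.0/16",
}

# Build the cross-route plan: every route table gets a TGW route to
# each of the other two VPC CIDRs (never to its own).
routes = {}
for rt, own_cidr in rt_cidrs.items():
    routes[rt] = [
        {"DestinationCidrBlock": cidr, "TransitGatewayId": TGW_ID}
        for other_rt, cidr in rt_cidrs.items()
        if other_rt != rt
    ]

for rt, entries in routes.items():
    print(rt, "->", [e["DestinationCidrBlock"] for e in entries])
```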
<p>Create Security Groups:</p>
<p>VPC01-HR Security Group: Public-SG01-HR</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xg7o05a9pouq4wnw3ssx.png" alt="Image description" /></p>
<p>Add rule:</p>
<p>Rule 1: SSH, port 22, source: 0.0.0.0/0<br />Rule 2: All ICMP, all ports, source: 0.0.0.0/0</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p37b8a5h1o04fr5603qe.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0g7snjy7fd1ku71j3r5e.png" alt="Image description" /></p>
<p>VPC02-SALES Security Group: Public-SG02-SALES</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4fjttbjqshv1u9l2euo8.png" alt="Image description" /></p>
<p>Add rules:<br />Rule 1: SSH, port 22, source: 0.0.0.0/0<br />Rule 2: All ICMP, all ports, source: 0.0.0.0/0</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ringdahls0496qgaikee.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s7y7exp6ray6uah30rrv.png" alt="Image description" /></p>
<p>VPC03-MRKT Security Group: Public-SG03-MRKT</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yck9l6ukefwy0ec9s87u.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z5khkqpm3nptldtwids4.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mgod4vf060wfxsnael4o.png" alt="Image description" /></p>
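<p>An equivalent CLI sketch for one of the security groups (the VPC ID is a placeholder; opening SSH and ICMP to 0.0.0.0/0 is acceptable for this lab only, not for production):</p>

```shell
# Create the security group and capture its ID
SG_ID=$(aws ec2 create-security-group \
  --group-name Public-SG01-HR \
  --description "Public SG for VPC01-HR" \
  --vpc-id vpc-xxxxxxxxxxxxxxxxx \
  --query GroupId --output text)

# Rule 1: SSH (TCP 22) from anywhere
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
  --protocol tcp --port 22 --cidr 0.0.0.0/0

# Rule 2: All ICMP from anywhere (needed for the ping tests later)
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
  --protocol icmp --port -1 --cidr 0.0.0.0/0
```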
<p>Launch an EC2 instance in each VPC</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fe0jzcuyl5idmee9t1ob.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sg70n99khlb2kgnqbiyp.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j2j43dbm1564ticmd17q.png" alt="Image description" /></p>
<p>Repeat the same steps for VPC02-SALES &amp; VPC03-MRKT</p>
<p>Launch webserver02-SALES</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g21kbp5yd61fhu6fkkn1.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b604r022etlekwu8t9d0.png" alt="Image description" /></p>
<p>Launch webserver03-MRKT</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2h0bpw4tc4gquimxzb15.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4yh0g4vzokkds5ecii10.png" alt="Image description" /></p>
<p>All EC2 webservers launched</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sothtreaj13lp1540oqb.png" alt="Image description" /></p>
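<p>Launching one of the webservers from the CLI would look roughly like this (the AMI, key pair, subnet, and security group IDs are placeholders):</p>

```shell
# Launch webserver01-HR into VPC01-HR's public subnet
aws ec2 run-instances \
  --image-id ami-xxxxxxxxxxxxxxxxx \
  --instance-type t2.micro \
  --key-name my-key-pair \
  --subnet-id subnet-xxxxxxxxxxxxxxxxx \
  --security-group-ids sg-xxxxxxxxxxxxxxxxx \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=webserver01-HR}]'
```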
<p>Verify connectivity between the webservers in all VPCs through the Transit Gateway.</p>
<p>Connect to EC2 webserver01-HR in VPC01-HR, then ping the private IPs of webserver02-SALES (VPC02-SALES) and webserver03-MRKT (VPC03-MRKT).</p>
<p>Open webserver01-HR and connect</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w6qwkuqvbv3ppedfsm67.png" alt="Image description" /></p>
<p>Click Connect</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e00i8tn4rqldtfw65uey.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/arfs9j1c2nuqab8rni93.png" alt="Image description" /></p>
<p>Get the private IPs of both webservers.</p>
<p>Webserver02-SALES private IP: 192.168.1.173</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k6kapv9sja9cjjw99j6t.png" alt="Image description" /></p>
<p>Test the connection. Test 1 – from VPC01-HR to VPC02-SALES. Expected: the ping returns an ICMP response.</p>
<p><code>ping 192.168.1.173</code></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8veiwoh43znf9go3l64t.png" alt="Image description" /></p>
<p>Actual: the ping returned an ICMP response.</p>
<h2 id="heading-ping-successful-to-private-ip-from-webserver01-hr-to-webserver02-sales-which-are-on-two-different-departments-vpcs-and-connected-through-tgw">Ping successful to private ip from webserver01-HR to webserver02-SALES which are on two different departments’ VPC’s and connected through TGW</h2>
<p>Webserver03-MRKT private IP: 172.10.1.155</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/amp1ju55veb0n7tadimb.png" alt="Image description" /></p>
<p>Test the connection. Test 2 – from VPC01-HR to VPC03-MRKT. Expected: the ping returns an ICMP response.</p>
<p><code>ping 172.10.1.155</code></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hg2x6n0wx6yn6y7y7gik.png" alt="Image description" /></p>
<p>Actual: the ping returned an ICMP response.</p>
<h2 id="heading-ping-successful-to-private-ip-from-webserver01-hr-to-webserver03-mrkt-which-are-on-two-different-departments-vpcs-and-connected-through-tgw">Ping successful to private ip from webserver01-HR to webserver03-MRKT which are on two different departments’ VPC’s and connected through TGW</h2>
<p>Connect to EC2 webserver02-SALES in VPC02-SALES.</p>
<p>Ping the private IPs of webserver01-HR (VPC01-HR) and webserver03-MRKT (VPC03-MRKT).</p>
<p>Open webserver02-SALES and connect.</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0bbzponchwd19mi3h4sx.png" alt="Image description" /></p>
<p>Click Connect</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lluj282s2mlnzhwm74zw.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0ref5l9rml6akst0gg63.png" alt="Image description" /></p>
<p>Webserver01-HR private IP: 20.0.1.139</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p6ll5de1h3it789y9t2b.png" alt="Image description" /></p>
<p>Test the connection. Test 1 – from VPC02-SALES to VPC01-HR. Expected: the ping returns an ICMP response.</p>
<p><code>ping 20.0.1.139</code></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6iywv213m985tdlakhxa.png" alt="Image description" /></p>
<p>Actual: the ping returned an ICMP response.</p>
<h2 id="heading-ping-successful-to-private-ip-from-webserver02-sales-to-webserver01-hr-which-are-on-two-different-departments-vpcs-and-connected-through-tgw">Ping successful to private ip from webserver02-SALES to webserver01-HR which are on two different departments’ VPC’s and connected through TGW</h2>
<p>Webserver03-MRKT private IP: 172.10.1.155</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vo4txzmvbsn224ihdbbo.png" alt="Image description" /></p>
<p>Test the connection. Test 2 – from VPC02-SALES to VPC03-MRKT. Expected: the ping returns an ICMP response.</p>
<p><code>ping 172.10.1.155</code></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/32odvc048di5ba9ap1v7.png" alt="Image description" /></p>
<p>Actual: the ping returned an ICMP response.</p>
<p>Ping successful to the private IP from webserver02-SALES to webserver03-MRKT, which are in two different departments’ VPCs connected through the TGW.</p>
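<p>The manual ping tests above can be wrapped in a small script run from any one of the webservers. A sketch, using this lab's private IPs (yours will differ):</p>

```shell
#!/bin/bash
# Verify TGW connectivity: ping each peer's private IP (3 probes, 2s timeout each).
for ip in 20.0.1.139 192.168.1.173 172.10.1.155; do
  if ping -c 3 -W 2 "$ip" >/dev/null 2>&1; then
    echo "OK:   $ip reachable via TGW"
  else
    echo "FAIL: $ip unreachable"
  fi
done
```

A FAIL for any peer usually means a missing TGW route in that VPC's route table or a security group that does not allow ICMP.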
<p>Clean up the project resources to avoid ongoing charges.</p>
<p>In conclusion, the deployment of AWS Transit Gateway stands as a pivotal solution that seamlessly connects and manages the networking needs of various departments, including Sales, Marketing, and HR. This robust infrastructure allows for efficient communication, data transfer, and collaboration across the organization's different segments. By leveraging the power of AWS Transit Gateway, businesses can achieve enhanced scalability, simplified network management, and improved overall performance. As we navigate the evolving landscape of cloud technologies, embracing solutions like AWS Transit Gateway becomes integral to fostering a more connected, streamlined, and future-ready enterprise. Here's to unlocking the full potential of network architecture and paving the way for innovation in the digital era.</p>
<p>Connect with me on these platforms and stay updated with the latest in technology and development! 🚀🔗😊</p>
<p>🌐 <strong>Website:</strong> <a target="_blank" href="http://praful.cloud">praful.cloud</a> 🚀<br />🔗 <strong>LinkedIn:</strong> <a target="_blank" href="https://linkedin.com/in/prafulpatel16">Connect with me on LinkedIn</a> 🤝<br />💻 <strong>GitHub:</strong> <a target="_blank" href="https://github.com/prafulpatel16/prafulpatel16">Explore my projects on GitHub</a> 📂<br />🎥 <strong>YouTube:</strong> <a target="_blank" href="https://www.youtube.com/@prafulpatel16">Check out my tech tutorials on YouTube</a> 🎬<br />📝 <strong>Medium:</strong> <a target="_blank" href="https://medium.com/@prafulpatel16">Read my tech articles on Medium</a> 📚<br />🔗 <a target="_blank" href="http://Dev.to"><strong>Dev.to</strong></a><strong>:</strong> <a target="_blank" href="https://dev.to/prafulpatel16">Follow me on</a> <a target="_blank" href="http://Dev.to">Dev.to</a> <a target="_blank" href="https://dev.to/prafulpatel16">for cloud/devops-centric content</a> 🖥️</p>
<p>#AWSCommunityBuilders #CloudEngineering #CloudComputing #AmazonWebServices #AWSArchitecture #DevOps #CloudSolutions #CloudSecurity #InfrastructureAsCode #AWSCertification #Serverless #AWSCommunity #TechBlogs #CloudExperts #CloudMigration #CloudOps #AWSJobs #TechIndustry #CareerInTech #InnovationInCloud #devops #cloudengineerjobs #devopsjobs #azure #gcp #oci #cloudjobs #kubernetes</p>
<p><a target="_blank" href="https://ca.linkedin.com/in/prafulpatel16?trk=profile-badge">PRAFUL PATEL</a></p>
]]></content:encoded></item><item><title><![CDATA[🚀 AWS - Optimizing Web App Performance with AWS CloudFront, New Relic Monitoring, and Terraform Deployment]]></title><description><![CDATA[🚀 Introduction
In today's rapidly evolving digital landscape, deploying web applications efficiently and ensuring optimal performance is essential for delivering a seamless user experience. This project aims to address these critical aspects by leve...]]></description><link>https://praful.cloud/aws-optimizing-web-app-performance-with-aws-cloudfront-new-relic-monitoring-and-terraform-deployment</link><guid isPermaLink="true">https://praful.cloud/aws-optimizing-web-app-performance-with-aws-cloudfront-new-relic-monitoring-and-terraform-deployment</guid><category><![CDATA[AWS #CloudEngineering #CloudComputing #AmazonWebServices #AWSArchitecture #DevOps #CloudSolutions #CloudSecurity #InfrastructureAsCode #AWSCertification #Serverless #AWSCommunity #TechBlogs #CloudExperts #CloudMigration #CloudOps #AWSJobs #TechIndustry #CareerInTech #InnovationInCloud #devops #cloudengineerjobs #devopsjobs #azure #gcp #oci #cloudjobs, #kubernetes]]></category><dc:creator><![CDATA[Praful Patel]]></dc:creator><pubDate>Thu, 02 Nov 2023 21:43:32 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1698961015021/e7878021-3068-4c6f-b790-54f9e4878ec1.gif" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>🚀 Introduction</p>
<p>In today's rapidly evolving digital landscape, deploying web applications efficiently and ensuring optimal performance is essential for delivering a seamless user experience. This project aims to address these critical aspects by leveraging Infrastructure as Code (IAC) through Terraform, harnessing the content delivery power of AWS CloudFront, and implementing robust monitoring using New Relic.</p>
<p>🎯 Objective</p>
<p>The objective of this post is to share knowledge about #aws cloud services: how and where to use them to solve real-world business challenges.</p>
<p>🚀 Use Case:</p>
<p>Manual Deployment<br />Terraform (IaC) Deployment<br />New Relic Integration</p>
<p>Automated deployment of web applications is at the heart of this initiative, as it not only streamlines the provisioning of cloud infrastructure but also lays the foundation for future scalability across multiple cloud platforms, free from vendor lock-in constraints. By utilizing Terraform, a cloud-agnostic open-source tool, we ensure that our infrastructure can adapt to evolving requirements seamlessly.</p>
<p>The project also emphasizes the acceleration of web application performance through AWS CloudFront, a content delivery network (CDN) service. By strategically caching content at edge locations around the world, CloudFront enhances the user experience, minimizing latency and ensuring swift content delivery.</p>
<p>To maintain a vigilant eye on the performance and health of our web application, we integrate New Relic monitoring. This comprehensive tool provides us with invaluable insights into infrastructure visibility and response monitoring. With the aid of AWS Lambda functions, we securely send logs from our AWS S3 bucket to New Relic for real-time analysis.</p>
<p>This project not only underscores the importance of automated deployment and performance optimization but also provides a detailed, step-by-step guide for implementing these solutions. Whether you're new to these technologies or a seasoned professional, this project equips you with the knowledge and tools to harness the full potential of Terraform, CloudFront, and New Relic for your web applications.</p>
<p>🛠️ Solution Diagram:</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--mc9DPFHe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fzov23tyjy1632cpuiqq.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mc9DPFHe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fzov23tyjy1632cpuiqq.png" alt="Image description" /></a></p>
<p>The web application should be deployed with the following criteria and requirements:</p>
<ol>
<li><p><strong>Requirement 1:</strong></p>
<ul>
<li>The web application should be deployed within the AWS Cloud platform.</li>
</ul>
</li>
<li><p><strong>Requirement 2:</strong></p>
<ul>
<li>The web application should be accessed securely.</li>
</ul>
</li>
<li><p><strong>Requirement 3:</strong></p>
<ul>
<li>The viewer’s experience should be seamless without any delays while accessing the web content and pages.</li>
</ul>
</li>
<li><p><strong>Requirement 4:</strong></p>
<ul>
<li>The web application should be accessible from around the world with a custom domain: <a target="_blank" href="http://www.prafect.link">www.prafect.link</a>.</li>
</ul>
</li>
<li><p><strong>Requirement 5:</strong></p>
<ul>
<li>The web application logs should be stored in object storage and then sent to third-party monitoring solutions.</li>
</ul>
</li>
<li><p><strong>Requirement 6:</strong></p>
<ul>
<li>The web application should be deployed in an automated way without any vendor locking tools.</li>
</ul>
</li>
</ol>
<p><strong>Solution:</strong><br />Let’s discuss and analyze the solution for the above requirements:</p>
<ol>
<li><p><strong>Solution 1: AWS S3</strong></p>
<ul>
<li>From the AWS Cloud platform, there is a service called S3, which provides a service to deploy a static web application.</li>
</ul>
</li>
<li><p><strong>Solution 2: AWS ACM</strong></p>
<ul>
<li>AWS ACM is the solution that provides a wildcard certificate through which the web application can be secured with SSL/TLS communication.</li>
</ul>
</li>
<li><p><strong>Solution 3: AWS Cloudfront</strong></p>
<ul>
<li>AWS Cloudfront is the content delivery network service through which the web app content will be cached on edge locations nearby the viewers' proximity to improve the user's experience.</li>
</ul>
</li>
<li><p><strong>Solution 4: AWS Route 53</strong></p>
<ul>
<li>AWS Route 53 provides a DNS service through which the web application can have its own custom domain: <a target="_blank" href="http://www.prafect.link">www.prafect.link</a>.</li>
</ul>
</li>
<li><p><strong>Solution 5: AWS S3</strong></p>
<ul>
<li>AWS S3 provides a service to store the logs and push the logs to an external monitoring tool. New Relic can be used, where S3 logs will be pushed to New Relic using a Lambda function.</li>
</ul>
</li>
<li><p><strong>Solution 6: Terraform</strong></p>
<ul>
<li>Terraform is the solution to provision the resources and infrastructure in an Infrastructure as Code (IAC) way. It's a cloud-agnostic open-source tool that can further scale up based on the requirements for multi-cloud.</li>
</ul>
</li>
</ol>
<p><strong>Project Cost Estimation:</strong><br />(Note: This is not an actual cost, only an estimate based on high-level requirements; prices will vary as services are added or removed.)<br />Ref: <a target="_blank" href="https://calculator.aws/#/addService">AWS Pricing Calculator</a></p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--LcGMyoab--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u8pql5bnrzpjao3vo6c7.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LcGMyoab--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u8pql5bnrzpjao3vo6c7.png" alt="Image description" /></a></p>
<p><a target="_blank" href="https://aws.amazon.com/getting-started/projects/host-static-website/services-costs/">Host a Static Website on AWS: Services Costs</a></p>
<p><strong>Tools &amp; Technologies Covered:</strong></p>
<ul>
<li><p>AWS Cloud</p>
</li>
<li><p>AWS S3</p>
</li>
<li><p>AWS Certificate Manager</p>
</li>
<li><p>AWS Cloudfront</p>
</li>
<li><p>AWS Route 53</p>
</li>
<li><p>AWS Lambda</p>
</li>
<li><p>AWS CloudWatch</p>
</li>
<li><p>New Relic (Monitoring)</p>
</li>
<li><p>Terraform (Infrastructure as Code)</p>
</li>
<li><p>Visual Studio Code (IDE)</p>
</li>
<li><p>GitHub</p>
</li>
</ul>
<p>Ref: AWS Services Documentation:<br />AWS S3: <a target="_blank" href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-bucket-intro.html">https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-bucket-intro.html</a><br />AWS Cloudfront: <a target="_blank" href="https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/GettingStarted.SimpleDistribution.html">https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/GettingStarted.SimpleDistribution.html</a></p>
<p><strong>Pre-requisite:</strong></p>
<ol>
<li><p>AWS Free Tier Account</p>
</li>
<li><p>AWS IAM User created with programmatic access</p>
</li>
<li><p>AWS Route53 hosted domain</p>
</li>
<li><p>Visual Studio Code configured</p>
</li>
<li><p>Latest Terraform version installed</p>
</li>
<li><p>GitHub Account</p>
</li>
<li><p>Gitbash installed on desktop</p>
</li>
<li><p>New Relic account set up</p>
</li>
<li><p>Web application source code ready</p>
</li>
</ol>
<p><strong>AGENDA:</strong><br />The goal is to walk through configuring each service in the console, so the reader becomes familiar with the ins and outs of every service component, and then to show the automated path, in which all web application resources are provisioned within a few minutes using Terraform (IaC).</p>
<p>a. Manual way web application configuration and optimizing web speed to accelerate web performance.</p>
<p>b. Automated way using Terraform to deploy the complete web application.</p>
<hr />
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--xgJthUat--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/397o0tghk37fg8r4u07s.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xgJthUat--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/397o0tghk37fg8r4u07s.png" alt="Image description" /></a></p>
<p><strong>MANUAL DEPLOYMENT</strong></p>
<p><strong>1. Set up Web Application:</strong></p>
<ul>
<li><p>Step 1: Register a Custom Domain with Route 53</p>
</li>
<li><p>Step 2: Create Two Buckets</p>
</li>
<li><p>Step 3: Configure Your Root Domain Bucket for Website Hosting</p>
</li>
<li><p>Step 4: Configure Your Subdomain Bucket for Website Redirect</p>
</li>
<li><p>Step 5: Configure Logging for Website Traffic</p>
</li>
<li><p>Step 6: Upload Index and Website Content</p>
</li>
<li><p>Step 7: Upload an Error Document</p>
</li>
<li><p>Step 8: Edit S3 Block Public Access Settings</p>
</li>
<li><p>Step 9: Attach a Bucket Policy</p>
</li>
<li><p>Step 10: Test Your Domain Endpoint</p>
</li>
<li><p>Step 11: Add Alias Records for Your Domain and Subdomain</p>
</li>
<li><p>Step 12: Test the Website</p>
</li>
</ul>
<p><strong>2. Perform Latency Test without Cloudfront:</strong></p>
<ul>
<li><p>a. Test from Canada</p>
</li>
<li><p>b. Test from London</p>
</li>
<li><p>c. Test from Singapore</p>
</li>
</ul>
<p><strong>3. Configure Certificate Manager to Make the Website Secure:</strong></p>
<ul>
<li><p>a. Request a Certificate</p>
</li>
<li><p>b. Validate - Add CNAME Records to Route53</p>
</li>
<li><p>c. Redirection Bucket – Change the Request Server from HTTP to HTTPS</p>
</li>
</ul>
<p><strong>4. Accelerate Web Performance Using Cloudfront:</strong></p>
<ul>
<li><p>a. Create Cloudfront Distribution</p>
</li>
<li><p>b. Validate that Cloudfront Domain is Displayed on the Web</p>
</li>
<li><p>c. Point Cloudfront Endpoint with Route53 Custom Domain</p>
</li>
<li><p>d. Measure the Latency Test with Cloudfront</p>
<ul>
<li><p>i. Test from Canada</p>
</li>
<li><p>ii. Test from London</p>
</li>
<li><p>iii. Test from Singapore</p>
</li>
</ul>
</li>
</ul>
<p><strong>Implementation in Action:</strong></p>
<ol>
<li>Set up Web application</li>
</ol>
<p>Step 1: Register a custom domain with Route 53</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--xTtmcAan--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/skzlp3nl09115nibvr0w.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xTtmcAan--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/skzlp3nl09115nibvr0w.png" alt="Image description" /></a></p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--TSI2eiUA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s5s34isstt4b01qoxhpn.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TSI2eiUA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s5s34isstt4b01qoxhpn.png" alt="Image description" /></a></p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--cDQaG95L--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9e1zmd2bw2wiliy8enbn.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cDQaG95L--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9e1zmd2bw2wiliy8enbn.png" alt="Image description" /></a></p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--GqGSBSeE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4288m9ure4zg2fb9c2ee.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GqGSBSeE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4288m9ure4zg2fb9c2ee.png" alt="Image description" /></a></p>
<p>Step 2: Create two buckets</p>
<p>Main bucket: <a target="_blank" href="http://prafect.link">prafect.link</a><br />Second bucket: <a target="_blank" href="http://www.prafect.link">www.prafect.link</a></p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--vkZd4trW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6angcli7wqdjx986sqjx.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vkZd4trW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6angcli7wqdjx986sqjx.png" alt="Image description" /></a></p>
<p>Step 3: Upload index and website content<br />Step 4: Upload an error document</p>
<p>From a local terminal, run <code>aws configure</code> and provide credentials. Then go to the source code directory (<code>webfiles</code>) and run:<br /><code>aws s3 sync . s3://prafect.link</code></p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--tNd5UPU4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s6c72uaubrcyknagt9n3.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tNd5UPU4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s6c72uaubrcyknagt9n3.png" alt="Image description" /></a></p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--vJErgDQA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yzxfm5opxybig3gt6epp.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vJErgDQA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yzxfm5opxybig3gt6epp.png" alt="Image description" /></a></p>
<p>Step 5: Configure your root domain bucket for website hosting<br />Enable static website hosting</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--UPKHOOVM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cp6tdwo0lww6lwl8t3g7.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UPKHOOVM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cp6tdwo0lww6lwl8t3g7.png" alt="Image description" /></a></p>
<p>Step 6: Configure your subdomain bucket for website redirect<br />Bucket redirection: <a target="_blank" href="http://www.prafect.link">www.prafect.link</a></p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--4JQCEgPv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/esyu1gghoqt9idqu54ov.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4JQCEgPv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/esyu1gghoqt9idqu54ov.png" alt="Image description" /></a></p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--xN89ZFWW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ob87mqjjllmsywo0fxj6.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xN89ZFWW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ob87mqjjllmsywo0fxj6.png" alt="Image description" /></a></p>
<p>Step 7: Edit S3 Block Public Access settings<br />Step 8: Attach a bucket policy:</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--7A4uHzNQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lddwftfkwioiq9vd0ue2.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7A4uHzNQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lddwftfkwioiq9vd0ue2.png" alt="Image description" /></a></p>
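<p>The bucket policy here is the standard public-read policy for S3 static website hosting, and it can also be attached from the CLI. A sketch, assuming the bucket name <code>prafect.link</code>:</p>

```shell
# Standard public-read policy for a static-site bucket
cat > policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "PublicReadGetObject",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::prafect.link/*"
  }]
}
EOF
aws s3api put-bucket-policy --bucket prafect.link --policy file://policy.json
```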
<p>Step 9: Test your domain endpoint</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--RpJAPHg3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fkg1nh6ekvoq3locwu3m.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RpJAPHg3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fkg1nh6ekvoq3locwu3m.png" alt="Image description" /></a></p>
<p>Verify that “index.html” is displayed as the web page. Access the page using the S3 website endpoint:<br /><a target="_blank" href="http://prafect.link.s3-website-us-east-1.amazonaws.com">http://prafect.link.s3-website-us-east-1.amazonaws.com</a></p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--WRvzio3X--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9gwlp6lvppmnl9ir815e.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--WRvzio3X--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9gwlp6lvppmnl9ir815e.png" alt="Image description" /></a></p>
<p>Verify that “error.html” is displayed as the error page. Access “error.html”:</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--pCkaKP7p--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w8dmhujmk4avec7cvdfj.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pCkaKP7p--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w8dmhujmk4avec7cvdfj.png" alt="Image description" /></a></p>
<p>Step 10: Add alias records for your domain and subdomain<br />Add the custom domain with Route53 and verify that the hosted zone exists.</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--Ptn05UZa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n2p3lsmpygxse7e9c896.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Ptn05UZa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n2p3lsmpygxse7e9c896.png" alt="Image description" /></a></p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--1BUoUB73--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sh4e02xrb0mfvgd2iaqo.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1BUoUB73--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sh4e02xrb0mfvgd2iaqo.png" alt="Image description" /></a></p>
<p>Create an A record to attach the custom domain (prafect.link) to the web application, pointing the alias to the AWS S3 website endpoint.</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--RLEEuM_k--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vrullpszlpxsevg16iak.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RLEEuM_k--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vrullpszlpxsevg16iak.png" alt="Image description" /></a></p>
<p>Create a second alias record for <a target="_blank" href="http://www.prafect.link">www.prafect.link</a></p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--RzppZM0R--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j4m1bq72f0lwavfkejbl.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RzppZM0R--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j4m1bq72f0lwavfkejbl.png" alt="Image description" /></a></p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--SrZlbjvw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fk97g0nvqrmu9zrpn3ly.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SrZlbjvw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fk97g0nvqrmu9zrpn3ly.png" alt="Image description" /></a></p>
<p>Step 11: Test the website<br />Access the web application using the custom domain<br /><a target="_blank" href="http://www.prafect.link/">www.prafect.link</a></p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--VLGPjo3C--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/086l4yooyucnegmyk8m3.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VLGPjo3C--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/086l4yooyucnegmyk8m3.png" alt="Image description" /></a></p>
<ol>
<li>Perform a latency test without CloudFront:<br />a. Test from Canada<br />b. Test from London<br />c. Test from Singapore</li>
</ol>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--Xi5imISo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sk77hx9rglyivcnb6ufv.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Xi5imISo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sk77hx9rglyivcnb6ufv.png" alt="Image description" /></a></p>
<p>Measure the performance of the web application. The measurements are summarized in the table below:</p>
<div class="hn-table">
<table>
<thead>
<tr>
<th>Test Location</th><th>Origin Region</th><th>Page</th><th>DNS (ms)</th><th>Connection (ms)</th><th>Response (ms)</th></tr>
</thead>
<tbody>
<tr>
<td>Canada</td><td>US-east-1</td><td>Home Page</td><td>189.39</td><td>102.67</td><td></td></tr>
<tr>
<td></td><td></td><td>Contact Us</td><td>48.25</td><td>154.24</td><td></td></tr>
<tr>
<td></td><td></td><td>Portfolio</td><td>49.70</td><td>123.31</td><td></td></tr>
<tr>
<td>London</td><td></td><td>Home Page</td><td></td><td></td><td>322.29</td></tr>
<tr>
<td></td><td></td><td>Contact Us</td><td></td><td></td><td>347.94</td></tr>
<tr>
<td></td><td></td><td>Portfolio</td><td></td><td></td><td>533.79</td></tr>
<tr>
<td>Singapore</td><td></td><td>Home Page</td><td></td><td></td><td>789.50</td></tr>
<tr>
<td></td><td></td><td>Contact Us</td><td></td><td></td><td>762.48</td></tr>
<tr>
<td></td><td></td><td>Portfolio</td><td></td><td></td><td>792.65</td></tr>
</tbody>
</table>
</div><p>Validate from the response headers that the content is served directly from Amazon S3</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--M7gbqI6a--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ox4tzcy1mws83ek523h9.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--M7gbqI6a--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ox4tzcy1mws83ek523h9.png" alt="Image description" /></a></p>
<p>Test 1: Access from Canada<br />Home Page</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--Z-vQTtlC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kjjmw2mxxnzqso1xwm8i.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Z-vQTtlC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kjjmw2mxxnzqso1xwm8i.png" alt="Image description" /></a></p>
<p>About Us</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--5ryXyxqb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o249y3h2onr85pk9g738.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5ryXyxqb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o249y3h2onr85pk9g738.png" alt="Image description" /></a></p>
<p>Portfolio</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--K0rDBoIh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/aq7ffwkwvgnpv1bv0i6o.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--K0rDBoIh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/aq7ffwkwvgnpv1bv0i6o.png" alt="Image description" /></a></p>
<p>Switch to a location other than Canada using a VPN to test the latency</p>
<p>Country : London</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--uOTngLtK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hvzebxi2won93d1v7r3i.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uOTngLtK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hvzebxi2won93d1v7r3i.png" alt="Image description" /></a></p>
<p>Home Page</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--ITe62CGN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mk17qcjhmj0hihjjlxp2.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ITe62CGN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mk17qcjhmj0hihjjlxp2.png" alt="Image description" /></a></p>
<p>Contact Us</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--ibqKHpy---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vnvzxbl50dz2qlmpwqjy.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ibqKHpy---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vnvzxbl50dz2qlmpwqjy.png" alt="Image description" /></a></p>
<p>Portfolio</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--xXTfn5GL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7n1q07om1xgj11tkjrgj.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xXTfn5GL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7n1q07om1xgj11tkjrgj.png" alt="Image description" /></a></p>
<p>Country: Singapore<br />Home Page</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--UY3JNloe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ejjnixrwh3286p6c292g.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UY3JNloe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ejjnixrwh3286p6c292g.png" alt="Image description" /></a></p>
<p>About Us</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--clLEinJ2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s8icbbrp5hiyept3zbh5.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--clLEinJ2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s8icbbrp5hiyept3zbh5.png" alt="Image description" /></a></p>
<p>Portfolio</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--jIwBdCB8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/idns71hpqzslb7zbug17.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jIwBdCB8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/idns71hpqzslb7zbug17.png" alt="Image description" /></a></p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--w8RplUuo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r92sppq95n0myc91i73d.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--w8RplUuo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r92sppq95n0myc91i73d.png" alt="Image description" /></a></p>
<p>3. Configure the Certificate Manager<br />How do you secure the web application?<br />1. Request a certificate:<br />Certificate Manager<br />SSL Certificate<br />Request certificate</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--In1vKEL8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/94i58wrgd0ktoqyriue3.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--In1vKEL8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/94i58wrgd0ktoqyriue3.png" alt="Image description" /></a></p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--jVCyk3Zo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/56b1mdvb94nvx2zzlr5s.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jVCyk3Zo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/56b1mdvb94nvx2zzlr5s.png" alt="Image description" /></a></p>
<p>Status: Pending Validation</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--lYYB7-86--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qo5ugivse6t3ccdsv4vx.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lYYB7-86--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qo5ugivse6t3ccdsv4vx.png" alt="Image description" /></a></p>
<p>2. Create the validation records in Route53</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--s8opjjG---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e7ip7q24w3swyidc8fsn.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--s8opjjG---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e7ip7q24w3swyidc8fsn.png" alt="Image description" /></a></p>
<p>3. Go to Route53 and validate that the two CNAME records were created</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--Mc6m9W1K--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/grzv9yb21s4oj8yw7jx5.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Mc6m9W1K--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/grzv9yb21s4oj8yw7jx5.png" alt="Image description" /></a></p>
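<p>For context, DNS validation works by publishing a CNAME record that ACM generates per domain; the ACM console's "Create records in Route53" button adds it automatically. The record name and value below are purely hypothetical placeholders (the real token is shown in the certificate details), kept only to illustrate the shape of the record:</p>
<pre><code class="lang-json">{
  "Changes": [
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "_example-validation-token.prafect.link",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [
          { "Value": "_example-target.acm-validations.aws." }
        ]
      }
    }
  ]
}
</code></pre>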
<p>SSL certificate issued successfully</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--VADv5MY3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jcs41jstv2rr6z5p501l.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VADv5MY3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jcs41jstv2rr6z5p501l.png" alt="Image description" /></a></p>
<p>Verify that the web application is accessed over secure HTTPS<br /><a target="_blank" href="http://www.prafect.link">http://www.prafect.link</a> : not secure</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--2XWQwEqs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jqprrdceut2tueabjzbs.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2XWQwEqs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jqprrdceut2tueabjzbs.png" alt="Image description" /></a></p>
<ol>
<li>Update the redirection bucket to redirect using “https” instead of “http”</li>
</ol>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--mtBtK0TQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gu5i1jpngo1kl299yskz.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mtBtK0TQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gu5i1jpngo1kl299yskz.png" alt="Image description" /></a></p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--aXBKO6bd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d5didrtllsnxccmn99it.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--aXBKO6bd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d5didrtllsnxccmn99it.png" alt="Image description" /></a></p>
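<p>Under the hood, the redirection bucket is just a static-website configuration. A minimal sketch of the equivalent JSON (e.g. usable with <code>aws s3api put-bucket-website --website-configuration file://redirect.json</code>), assuming all requests are forwarded to the root domain:</p>
<pre><code class="lang-json">{
  "RedirectAllRequestsTo": {
    "HostName": "prafect.link",
    "Protocol": "https"
  }
}
</code></pre>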
<p>5. Accelerate web performance using CloudFront</p>
<p>a. Create a CloudFront distribution<br />b. Validate that the CloudFront domain serves the web content<br />c. Point the Route53 custom domain at the CloudFront endpoint<br />d. Measure latency with CloudFront:<br />i. Test from Canada<br />ii. Test from London<br />iii. Test from Singapore</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--YV3yVKUI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0q845k032f1u1f7gwodq.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YV3yVKUI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0q845k032f1u1f7gwodq.png" alt="Image description" /></a></p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--5vZ42lGi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/leqzw0oxxfe8kla5wqq1.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5vZ42lGi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/leqzw0oxxfe8kla5wqq1.png" alt="Image description" /></a></p>
<p>Create the CloudFront distribution<br />Origin domain: provide the AWS S3 website endpoint:<br /><a target="_blank" href="http://prafect.link.s3-website-us-east-1.amazonaws.com">http://prafect.link.s3-website-us-east-1.amazonaws.com</a></p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--dkzrWO3h--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iwc5kqvmq1rrhanq76i7.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dkzrWO3h--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iwc5kqvmq1rrhanq76i7.png" alt="Image description" /></a></p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--4xlNLR1C--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h7ewcfetqrdbouy8nnsr.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4xlNLR1C--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h7ewcfetqrdbouy8nnsr.png" alt="Image description" /></a></p>
<p>Price class: use only North America and Europe</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--0s0WWC9K--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/smmu9fl5ud2w85d7rnf4.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0s0WWC9K--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/smmu9fl5ud2w85d7rnf4.png" alt="Image description" /></a></p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--NB2sgLbN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yt0me0dyru15obaf5so0.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NB2sgLbN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yt0me0dyru15obaf5so0.png" alt="Image description" /></a></p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--oiIxAFW0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q8ffmpoiqmb19ppr4rge.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oiIxAFW0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q8ffmpoiqmb19ppr4rge.png" alt="Image description" /></a></p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--zO0G58fK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rm8e7bo3zlskgvms7gw2.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zO0G58fK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rm8e7bo3zlskgvms7gw2.png" alt="Image description" /></a></p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--wt6lGIn6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ui1zkpcb678gcgr7nfx3.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wt6lGIn6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ui1zkpcb678gcgr7nfx3.png" alt="Image description" /></a></p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--sqA1osnf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x7ngwh7ggzm46pan5v75.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--sqA1osnf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x7ngwh7ggzm46pan5v75.png" alt="Image description" /></a></p>
<p>Cloudfront deployment in progress</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--FP1HwRms--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/flkqv72sl1t4dno0axzc.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FP1HwRms--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/flkqv72sl1t4dno0axzc.png" alt="Image description" /></a></p>
<p>Cloudfront deployed</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--8Hqsqzed--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/91vt86zwl2gg80dij0bg.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8Hqsqzed--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/91vt86zwl2gg80dij0bg.png" alt="Image description" /></a></p>
<p>Custom origin</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--d6WeiNsP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mmoqeye57cjkgqfrmop9.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--d6WeiNsP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mmoqeye57cjkgqfrmop9.png" alt="Image description" /></a></p>
<p>Validate that the CloudFront domain is correctly deployed</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--sXwkzJiL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9jxrp4wwmzrselp2n91p.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--sXwkzJiL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9jxrp4wwmzrselp2n91p.png" alt="Image description" /></a></p>
<p>Access the web application using the CloudFront URL<br />URL: <a target="_blank" href="https://d3hffwkpocdftx.cloudfront.net">https://d3hffwkpocdftx.cloudfront.net</a></p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--cfBYVFuF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s5zq5f1pxpyyqt4ticu0.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cfBYVFuF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s5zq5f1pxpyyqt4ticu0.png" alt="Image description" /></a></p>
<h2 id="heading-security-checkpoint">Security Checkpoint:</h2>
<p>How do you secure the AWS S3 website endpoint so that the web application can be accessed only through the CloudFront endpoint?</p>
<p>Go to S3 Bucket policy<br />Go to Origin<br />Edit Origin</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--XFSx0O_m--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/quwgysbq720lgtpux97r.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XFSx0O_m--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/quwgysbq720lgtpux97r.png" alt="Image description" /></a></p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--ymsEjS2i--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5dcoyxzrwu6ew6k8up0d.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ymsEjS2i--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5dcoyxzrwu6ew6k8up0d.png" alt="Image description" /></a></p>
<p>Replace the public-read bucket policy:</p>
<pre><code class="lang-json">{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::prafect.link/*"
    }
  ]
}
</code></pre>
<p>with the CloudFront-only policy:</p>
<pre><code class="lang-json">{
  "Version": "2008-10-17",
  "Id": "PolicyForCloudFrontPrivateContent",
  "Statement": [
    {
      "Sid": "AllowCloudFrontServicePrincipal",
      "Effect": "Allow",
      "Principal": {
        "Service": "cloudfront.amazonaws.com"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::prafect.link/*",
      "Condition": {
        "StringEquals": {
          "AWS:SourceArn": "arn:aws:cloudfront::914141388779:distribution/EU0BLI2K9XWK3"
        }
      }
    }
  ]
}
</code></pre>
<p>Restrict access through the S3 website endpoint and allow access only through the CloudFront distribution endpoint</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--qhfk7nCS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rkkldymlvuke4gon0c45.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qhfk7nCS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rkkldymlvuke4gon0c45.png" alt="Image description" /></a></p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--uqE0w-sn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ej3jlzthy4mgt3e4kv91.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uqE0w-sn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ej3jlzthy4mgt3e4kv91.png" alt="Image description" /></a></p>
<p>Access the web URL<br />Expected: the request through the S3 endpoint is blocked : PASSED</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--B5FoEbJq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jfph3qcyqtzuyabee72e.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--B5FoEbJq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jfph3qcyqtzuyabee72e.png" alt="Image description" /></a></p>
<p>Access is allowed only through the CloudFront endpoint</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--3LS6bL8H--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/795qhumzl1iw685r5av7.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3LS6bL8H--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/795qhumzl1iw685r5av7.png" alt="Image description" /></a></p>
<p>Point the Route53 records at the CloudFront URL</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--yN8aP6Bq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9m4erdmgglv1ioqq7iaz.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yN8aP6Bq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9m4erdmgglv1ioqq7iaz.png" alt="Image description" /></a></p>
<p>Replace the S3 website endpoint with the CloudFront endpoint<br />Delete the existing record sets</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--dpAowenJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9z9zr2xo5lzatt6qwycx.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dpAowenJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9z9zr2xo5lzatt6qwycx.png" alt="Image description" /></a></p>
<p>Create a new record set for <a target="_blank" href="http://Prafect.link">prafect.link</a><br />Routing policy: Simple routing</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--jXMusDR8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/61xcerzuc5un0v55i6y3.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jXMusDR8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/61xcerzuc5un0v55i6y3.png" alt="Image description" /></a></p>
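<p>A sketch of the equivalent Route53 change batch for this alias record; Z2FDTNDATAQYW2 is the fixed hosted zone ID used for all CloudFront alias targets, and the distribution domain is the one deployed earlier:</p>
<pre><code class="lang-json">{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "prafect.link",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z2FDTNDATAQYW2",
          "DNSName": "d3hffwkpocdftx.cloudfront.net",
          "EvaluateTargetHealth": false
        }
      }
    }
  ]
}
</code></pre>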
<p>Create New record set for <a target="_blank" href="http://www.Prafect.link">www.Prafect.link</a></p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--jr0z3ga0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3udtlbp7q8mgodpzamit.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jr0z3ga0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3udtlbp7q8mgodpzamit.png" alt="Image description" /></a></p>
<p>Record sets created successfully</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--uauwm8k8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dx6vf7hznct3wh05xa4i.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uauwm8k8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dx6vf7hznct3wh05xa4i.png" alt="Image description" /></a></p>
<p>Measure the performance of the web application<br />Perform the latency test with CloudFront enabled</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--BeYUNITc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g9jc9oxm9i07lwsk7itw.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BeYUNITc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g9jc9oxm9i07lwsk7itw.png" alt="Image description" /></a></p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--BnBGGtAA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6p4r1uxy9cgv7pk8ipyl.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BnBGGtAA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6p4r1uxy9cgv7pk8ipyl.png" alt="Image description" /></a></p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--Kbtyce2y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mljn1snj37ao8n59f99h.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Kbtyce2y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mljn1snj37ao8n59f99h.png" alt="Image description" /></a></p>
<div class="hn-table">
<table>
<thead>
<tr>
<th>Origin</th><th>Test Location</th><th>CloudFront</th><th>Page</th><th>DNS</th><th>Connection</th><th>SSL</th><th>Response</th></tr>
</thead>
<tbody>
<tr><td></td><td>Canada</td><td>No CloudFront</td><td>Home Page</td><td>189.39 ms</td><td>102.67 ms</td><td></td><td></td></tr>
<tr><td></td><td></td><td>With CloudFront</td><td>Home Page</td><td>1.06 ms</td><td>22.08 ms</td><td></td><td></td></tr>
<tr><td></td><td></td><td>No CloudFront</td><td>Contact Us</td><td>48.25 ms</td><td>25.33 ms</td><td>22.59 ms</td><td></td></tr>
<tr><td></td><td></td><td>With CloudFront</td><td>Contact Us</td><td>46.67 ms</td><td></td><td></td><td></td></tr>
<tr><td></td><td></td><td>No CloudFront</td><td>Portfolio</td><td>49.70 ms</td><td>123.31 ms</td><td></td><td></td></tr>
<tr><td></td><td></td><td>With CloudFront</td><td>Portfolio</td><td>65.42 ms</td><td>25.43 ms</td><td>25.43 ms</td><td>23.75 ms</td></tr>
<tr><td></td><td>London</td><td>No CloudFront</td><td>Home Page</td><td></td><td>322.29 ms</td><td></td><td></td></tr>
<tr><td></td><td></td><td>With CloudFront</td><td>Home Page</td><td>121.18 ms</td><td>121.18 ms</td><td>121.18 ms</td><td>252.88 ms</td></tr>
<tr><td></td><td></td><td>No CloudFront</td><td>Contact Us</td><td></td><td>347.94 ms</td><td></td><td></td></tr>
<tr><td></td><td></td><td>With CloudFront</td><td>Contact Us</td><td>122.69 ms</td><td>122.69 ms</td><td>125.31 ms</td><td></td></tr>
<tr><td></td><td></td><td>No CloudFront</td><td>Portfolio</td><td></td><td>533.79 ms</td><td></td><td></td></tr>
<tr><td></td><td></td><td>With CloudFront</td><td>Portfolio</td><td>120.55 ms</td><td>120.55 ms</td><td>119.54 ms</td><td></td></tr>
<tr><td></td><td>Singapore</td><td>No CloudFront</td><td>Home Page</td><td></td><td>789.50 ms</td><td></td><td></td></tr>
<tr><td></td><td></td><td>With CloudFront</td><td>Home Page</td><td>375.48 ms</td><td>375.48 ms</td><td>409.51 ms</td><td></td></tr>
<tr><td></td><td></td><td>No CloudFront</td><td>Contact Us</td><td></td><td>762.48 ms</td><td></td><td></td></tr>
<tr><td></td><td></td><td>With CloudFront</td><td>Contact Us</td><td>294.09 ms</td><td>388.27 ms</td><td>390.91 ms</td><td></td></tr>
<tr><td></td><td></td><td>No CloudFront</td><td>Portfolio</td><td></td><td>792.65 ms</td><td></td><td></td></tr>
<tr><td></td><td></td><td>With CloudFront</td><td>Portfolio</td><td>382.05 ms</td><td>382.05 ms</td><td>378.65 ms</td><td></td></tr>
</tbody>
</table>
</div><p>Validate that the web application is secure</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--iIRNLXGo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dz9t27dxwqfmssrgydjj.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--iIRNLXGo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dz9t27dxwqfmssrgydjj.png" alt="Image description" /></a></p>
<p>Verify from the response headers that the page is being served from the CloudFront cache</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--bNqEyQgb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hja0mn9qd9w4cbwsr7ky.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bNqEyQgb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hja0mn9qd9w4cbwsr7ky.png" alt="Image description" /></a></p>
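<p>A quick way to confirm caching behavior without a browser is to inspect the response headers from the command line. A sketch (the headers below are a hypothetical captured response; against a live distribution you would run <code>curl -sI</code> on your own CloudFront URL):</p>

```shell
# Hypothetical headers captured with: curl -sI https://dxxxxxxxx.cloudfront.net/
cat > headers.txt <<'EOF'
HTTP/2 200
content-type: text/html
x-cache: Hit from cloudfront
via: 1.1 abc123def456.cloudfront.net (CloudFront)
x-amz-cf-pop: YTO50-C1
EOF

# "Hit from cloudfront" means the page came from an edge cache;
# "Miss from cloudfront" means CloudFront had to fetch it from the origin.
grep -i '^x-cache' headers.txt
```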
<p>Test from Canada<br />Home Page</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s---TJoM7X---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h4ely3pbilcdhebfm8to.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s---TJoM7X---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h4ely3pbilcdhebfm8to.png" alt="Image description" /></a></p>
<p>About Us</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--j-OBLbv8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pl5bby1hf67sbbsz410a.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--j-OBLbv8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pl5bby1hf67sbbsz410a.png" alt="Image description" /></a></p>
<p>Test from London</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--Es_JbhGI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nj9jz8m8xy0wp4uhgyxx.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Es_JbhGI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nj9jz8m8xy0wp4uhgyxx.png" alt="Image description" /></a></p>
<p>Home Page</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--GL16Gmea--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i4u66thgc49onn6tknet.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GL16Gmea--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i4u66thgc49onn6tknet.png" alt="Image description" /></a></p>
<p>Contact Us</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--PMix_BRA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kaqbotg98kdyj5bqf49i.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PMix_BRA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kaqbotg98kdyj5bqf49i.png" alt="Image description" /></a></p>
<p>Portfolio</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--nbptzq4h--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6su3czejdrnf7g5iw2vy.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nbptzq4h--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6su3czejdrnf7g5iw2vy.png" alt="Image description" /></a></p>
<p>Test from Singapore</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--bnGIPMCh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ccghstrr0qprnumtsn25.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bnGIPMCh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ccghstrr0qprnumtsn25.png" alt="Image description" /></a></p>
<p>Home Page</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--MBDiVIg3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nxiva3nhmmccyo1f15dz.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MBDiVIg3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nxiva3nhmmccyo1f15dz.png" alt="Image description" /></a></p>
<p>Contact Us</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--SPz1GtpB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3zpgc9mcki9rdy5556el.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SPz1GtpB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3zpgc9mcki9rdy5556el.png" alt="Image description" /></a></p>
<p>Portfolio</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--c73v36aO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ik48k5vhojfy9dg3lun1.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--c73v36aO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ik48k5vhojfy9dg3lun1.png" alt="Image description" /></a></p>
<p>Optimize the CloudFront performance<br />Point DNS to the United Kingdom<br />Change the routing policy: Geolocation<br />Location: United Kingdom</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--2r92lT2T--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ffp5s4uhkdt8h6pyiwom.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2r92lT2T--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ffp5s4uhkdt8h6pyiwom.png" alt="Image description" /></a></p>
<p>Keep-alive time</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--2Hcy5kYH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/44qw639pbhs7cpd8uupg.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2Hcy5kYH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/44qw639pbhs7cpd8uupg.png" alt="Image description" /></a></p>
<p>Change the keep-alive timeout to 60 seconds</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--2N44gmgE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gbuvc2b2fg0klvbp6tcd.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2N44gmgE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gbuvc2b2fg0klvbp6tcd.png" alt="Image description" /></a></p>
<p>CloudFront speed-up tips:<br />CloudFront: increase the origin keep-alive timeout to 60 seconds<br />DNS: increase the record TTL to 60 seconds<br />SSL: increase the SSL session timeout</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--s7eIQPEI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6fywudc8qzh2o7fkxln1.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--s7eIQPEI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6fywudc8qzh2o7fkxln1.png" alt="Image description" /></a></p>
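<p>The DNS changes above can also be scripted. A sketch of a Route 53 change batch that upserts a geolocation record for the United Kingdom with a 60-second TTL (the record name, IP address, and hosted-zone ID are placeholders, not values from this walkthrough):</p>

```shell
# Write the change batch; you would submit it afterwards with the AWS CLI:
#   aws route53 change-resource-record-sets --hosted-zone-id Z123EXAMPLE \
#     --change-batch file://change-batch.json
cat > change-batch.json <<'EOF'
{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "www.example.com",
      "Type": "A",
      "SetIdentifier": "uk-visitors",
      "GeoLocation": { "CountryCode": "GB" },
      "TTL": 60,
      "ResourceRecords": [{ "Value": "203.0.113.10" }]
    }
  }]
}
EOF
```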
<p>CloudWatch</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--C6ZJpxOY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qvuv07wcuqk4snlhvs0x.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--C6ZJpxOY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qvuv07wcuqk4snlhvs0x.png" alt="Image description" /></a></p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--bZOycnam--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pg2nuzbz46f6p9fk5ck4.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bZOycnam--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pg2nuzbz46f6p9fk5ck4.png" alt="Image description" /></a></p>
<h2 id="heading-new-relic">NEW RELIC</h2>
<h2 id="heading-integration">INTEGRATION</h2>
<p>Objective: Monitor CloudFront logs with the external monitoring tool New Relic, integrated with AWS S3. An AWS Lambda function is triggered on new log objects and forwards the CloudFront logs from the S3 bucket to New Relic, providing infrastructure visibility and response monitoring.</p>
<p>Technology &amp; Tools: New Relic, AWS S3, AWS Lambda</p>
<hr />
<p>New Relic</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--sQfP7Q0N--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2mp9whuq83t9y0kkv9sb.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--sQfP7Q0N--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2mp9whuq83t9y0kkv9sb.png" alt="Image description" /></a></p>
<p>Select AWS</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--wSz7Z5ao--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fbqvrr3rk1pmp16dajje.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wSz7Z5ao--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fbqvrr3rk1pmp16dajje.png" alt="Image description" /></a></p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--crCt0eJ2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5qv9kiqfg08ngejldmik.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--crCt0eJ2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5qv9kiqfg08ngejldmik.png" alt="Image description" /></a></p>
<p>Standard logs<br /><a target="_blank" href="https://docs.newrelic.com/docs/logs/forward-logs/cloudfront-web-logs/#enable-standard-logging">https://docs.newrelic.com/docs/logs/forward-logs/cloudfront-web-logs/#enable-standard-logging</a></p>
<p>Create logs bucket in S3</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--dxj_1qY4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hp3uoulhdnqbkgwn8yph.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dxj_1qY4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hp3uoulhdnqbkgwn8yph.png" alt="Image description" /></a></p>
<p>Enable standard logging<br />Go to CloudFront and enable standard logging</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--CMO-Pi7K--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/biprl2io4mmvfkpvcp1j.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CMO-Pi7K--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/biprl2io4mmvfkpvcp1j.png" alt="Image description" /></a></p>
<p>Go to this link: <a target="_blank" href="https://serverlessrepo.aws.amazon.com/applications">https://serverlessrepo.aws.amazon.com/applications</a><br />Search: newrelic</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s---veIQ7tm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/aex9mwb164xzd19gs1sv.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s---veIQ7tm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/aex9mwb164xzd19gs1sv.png" alt="Image description" /></a></p>
<p>Click: Deploy</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--jiBgabTp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xrlm77u91kjxd1t34vxz.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jiBgabTp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xrlm77u91kjxd1t34vxz.png" alt="Image description" /></a></p>
<p>Scroll to the Application settings and enter your New Relic license key.</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--RhDDuSVP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2mojmv79r6avqr8ecmr8.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RhDDuSVP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2mojmv79r6avqr8ecmr8.png" alt="Image description" /></a></p>
<p>Log type: built-in parsing rulesets: <a target="_blank" href="https://docs.newrelic.com/docs/logs/ui-data/built-log-parsing-rules/">https://docs.newrelic.com/docs/logs/ui-data/built-log-parsing-rules/</a><br />Deployment in progress</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--n4ulwoFY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2ggu5ch4c8vwzjp4l978.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--n4ulwoFY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2ggu5ch4c8vwzjp4l978.png" alt="Image description" /></a></p>
<p>Lambda deployment complete</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--DNeClvXI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/60nwo99vswcctjp85bl8.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DNeClvXI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/60nwo99vswcctjp85bl8.png" alt="Image description" /></a></p>
<p>Create Lambda trigger</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--cMKAWDUl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/br155vgzumpfj2rvkm35.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cMKAWDUl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/br155vgzumpfj2rvkm35.png" alt="Image description" /></a></p>
<p>Add Trigger</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--paVKbXNe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j3vavf9wp8jmunk8yutr.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--paVKbXNe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j3vavf9wp8jmunk8yutr.png" alt="Image description" /></a></p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--PoinC53H--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bzwjp24s1tloa8nz0r5p.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PoinC53H--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bzwjp24s1tloa8nz0r5p.png" alt="Image description" /></a></p>
<p>Trigger added successfully</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--6RxmiNKV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/96dga69gmi8jusgqpy3x.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6RxmiNKV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/96dga69gmi8jusgqpy3x.png" alt="Image description" /></a></p>
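<p>The trigger added in the console can also be expressed as an S3 bucket notification configuration. A sketch (the bucket name, Lambda ARN, and account ID are placeholders standing in for this walkthrough's values):</p>

```shell
# Write the notification config; you would apply it afterwards with:
#   aws s3api put-bucket-notification-configuration \
#     --bucket my-cloudfront-logs-bucket \
#     --notification-configuration file://notification.json
cat > notification.json <<'EOF'
{
  "LambdaFunctionConfigurations": [{
    "Id": "forward-cf-logs-to-newrelic",
    "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:NewRelic-log-ingestion",
    "Events": ["s3:ObjectCreated:*"]
  }]
}
EOF
```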
<p>Go back to New Relic<br />If configured correctly, it will look like this</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--p_v-6f40--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v2iv3qnzxhf0l2hz0x9a.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--p_v-6f40--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v2iv3qnzxhf0l2hz0x9a.png" alt="Image description" /></a></p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--At_f_5aC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/phupwegtucfgv2yp785q.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--At_f_5aC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/phupwegtucfgv2yp785q.png" alt="Image description" /></a></p>
<p>View logs</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--nRw4it7P--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8dwd80pvc44akgnoodl3.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nRw4it7P--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8dwd80pvc44akgnoodl3.png" alt="Image description" /></a></p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s---zcfhFCp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zzbirji0ihc0eplhcznq.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s---zcfhFCp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zzbirji0ihc0eplhcznq.png" alt="Image description" /></a></p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--QYXGE-JL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7jr3y1feimyqat5ky0dm.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QYXGE-JL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7jr3y1feimyqat5ky0dm.png" alt="Image description" /></a></p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--l1i8N41M--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/spdl1pc8z6ipe7cb9icg.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--l1i8N41M--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/spdl1pc8z6ipe7cb9icg.png" alt="Image description" /></a></p>
<p><strong>Automating Web Application Performance Enhancement</strong></p>
<p>Objective: Automate the provisioning of services and configure CloudFront to improve the performance of a web application.</p>
<p>Technology &amp; Tools: AWS S3, Terraform, CloudFront</p>
<p>Terraform<br />Terraform project source:<br />GitHub: <a target="_blank" href="https://github.com/prafulpatel16/terraform-aws-tests.git">https://github.com/prafulpatel16/terraform-aws-tests.git</a><br />Project directory: gocloud-test/10-tf-static-web-complete/</p>
<p>WHAT IS TERRAFORM?</p>
<p>HashiCorp Terraform is an infrastructure as code tool that lets you define both cloud and on-prem resources in human-readable configuration files that you can version, reuse, and share. You can then use a consistent workflow to provision and manage all of your infrastructure throughout its lifecycle. Terraform can manage low-level components like compute, storage, and networking resources, as well as high-level components like DNS entries and SaaS features.</p>
<p>How does Terraform work?</p>
<p>HashiCorp and the Terraform community have already written thousands of providers to manage many different types of resources and services. You can find all publicly available providers on the Terraform Registry, including Amazon Web Services (AWS), Azure, Google Cloud Platform (GCP), Kubernetes, Helm, GitHub, Splunk, DataDog, and many more.</p>
<p>The core Terraform workflow consists of three stages:</p>
<p>Definition: You define resources, which may span multiple cloud providers and services. For example, you might create a configuration to deploy an application on virtual machines in a Virtual Private Cloud (VPC) network with security groups and a load balancer.</p>
<p>Planning: Terraform creates an execution plan describing the infrastructure it will create, update, or destroy based on the existing infrastructure and your configuration.</p>
<p>Application: On approval, Terraform performs the proposed operations in the correct order, respecting any resource dependencies. For example, if you update the properties of a VPC and change the number of virtual machines in that VPC, Terraform will recreate the VPC before scaling the virtual machines.</p>
<p><strong>Implementation Phase:</strong></p>
<p><strong>Phase 1: AWS Cloud Configuration via Terminal</strong></p>
<ol>
<li><p>Go to the Git Bash terminal and execute the command "aws configure."</p>
</li>
<li><p>Provide the IAM user's access key ID and secret access key.</p>
</li>
<li><p>Set the AWS region to "us-east-1."</p>
</li>
<li><p>Specify the output type as "json."</p>
</li>
</ol>
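<p>Under the hood, <code>aws configure</code> simply records your answers in two small INI files under <code>~/.aws/</code>. A sketch of the equivalent done by hand (a scratch directory stands in for your home directory so nothing real is overwritten, and the key values are placeholders):</p>

```shell
# Recreate what `aws configure` writes: a config file (region, output format)
# and a credentials file (access key ID, secret access key).
demo_home=./demo-aws-home
mkdir -p "$demo_home/.aws"

cat > "$demo_home/.aws/config" <<'EOF'
[default]
region = us-east-1
output = json
EOF

cat > "$demo_home/.aws/credentials" <<'EOF'
[default]
aws_access_key_id = AKIAxxxxxxxxEXAMPLE
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxEXAMPLEKEY
EOF
```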
<p><strong>Phase 2: Create Terraform Project and Define Terraform Files and Folders</strong></p>
<ol>
<li><p>Open Visual Studio Code (VS Code) and create a new folder named "tf-static-web."</p>
</li>
<li><p>Organize the file structure for the Terraform project: a. Create a new file "<a target="_blank" href="http://01-providers.tf">01-providers.tf</a>." b. Create a new file "<a target="_blank" href="http://02-main.tf">02-main.tf</a>." c. Create a new file "<a target="_blank" href="http://03-variables.tf">03-variables.tf</a>." d. Create a new file "<a target="_blank" href="http://04-outputs.tf">04-outputs.tf</a>." e. Create a new file "terraform.tfvars." f. Create a folder for the web application source code, named "webfiles," and upload the web application files into it.</p>
</li>
<li><p>Save all the created files within VS Code.</p>
</li>
</ol>
<p><strong>Phase 3: Configure and Write Terraform Resources into Respective Files</strong></p>
<ol>
<li><p>Open the "<a target="_blank" href="http://01-providers.tf">01-providers.tf</a>" file and define the Terraform and provider blocks.</p>
</li>
<li><p>Open the "<a target="_blank" href="http://02-main.tf">02-main.tf</a>" file and write all the necessary S3 resources.</p>
</li>
<li><p>Open the "<a target="_blank" href="http://03-variables.tf">03-variables.tf</a>" file and define the variables for dynamic access.</p>
</li>
<li><p>Open the "<a target="_blank" href="http://04-outputs.tf">04-outputs.tf</a>" file and specify the required outputs.</p>
</li>
<li><p>Open the "terraform.tfvars" file and configure variables for dynamic access that will override default variable values.</p>
</li>
</ol>
<p><strong>Phase 4: Terraform fmt, Terraform validate, Terraform plan</strong></p>
<ol>
<li><p>Go to the Git Bash terminal and execute the following Terraform commands:</p>
</li>
<li><p>Run "terraform init."</p>
</li>
<li><p>Execute "terraform fmt."</p>
</li>
<li><p>Validate the configuration with "terraform validate."</p>
</li>
<li><p>Apply the configuration with "terraform apply."</p>
</li>
<li><p>Observe that the "terraform.tfstate" is stored locally.</p>
</li>
</ol>
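<p>The command sequence above can be captured in a small script so every run performs the same checks in the same order. A sketch (the <code>-check</code> flag and the saved-plan apply are optional refinements, not steps from the original walkthrough):</p>

```shell
# Write the Phase 4 sequence to a script; `set -e` stops at the first failure,
# and applying the saved plan file avoids a second interactive prompt.
cat > deploy.sh <<'EOF'
#!/bin/sh
set -e
terraform init               # install providers, initialize the backend
terraform fmt -check         # fail if any file is not canonically formatted
terraform validate           # check syntax and internal consistency
terraform plan -out=tfplan   # preview and save the execution plan
terraform apply tfplan       # perform exactly the planned operations
EOF
chmod +x deploy.sh
```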
<p><strong>Phase 5: Verify that Resources Are Created in AWS Cloud</strong></p>
<ol>
<li><p>Access the AWS S3 console and confirm that the bucket has been created.</p>
</li>
<li><p>Verify that the web files have been uploaded to the bucket.</p>
</li>
<li><p>Confirm that the necessary permissions have been configured.</p>
</li>
<li><p>Ensure that the web endpoint has been created.</p>
</li>
</ol>
<p><strong>Phase 6: Verify that the Web Application Is Accessible Successfully</strong></p>
<ol>
<li><p>In the terminal, examine the output values.</p>
</li>
<li><p>Copy the endpoint URL and access it from a web browser.</p>
</li>
<li><p>Confirm that the "index.html" displays the web page.</p>
</li>
<li><p>Validate that the "error.html" is displayed as the error page.</p>
</li>
</ol>
<p>AWS Configuration:</p>
<p>Go to the terminal and type:<br /><code>$ aws configure</code></p>
<p>Terraform file structure:</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--W-gV9fRh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z1zg4teb2j9n0kffi9li.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--W-gV9fRh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z1zg4teb2j9n0kffi9li.png" alt="Image description" /></a></p>
<p><strong>Phase 2: Create Terraform Project and Define Terraform Files and Folders</strong></p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--6ZzFE5Ik--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gqqk49ihvsn93qt5gh5g.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6ZzFE5Ik--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gqqk49ihvsn93qt5gh5g.png" alt="Image description" /></a></p>
<p><strong>Phase 3: Configure and Write Terraform Resources into Respective Files</strong></p>
<p>01-providers.tf</p>
<ul>
<li><p>Terraform block<br />The providers file contains a terraform block, which can include a "backend" configuration for state-file storage and a "required_providers" section giving the source and version of each Terraform provider.</p>
</li>
<li><p>Providers block</p>
</li>
</ul>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--2aZq6T6X--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/abutqcy0elgsxsrl6m4h.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2aZq6T6X--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/abutqcy0elgsxsrl6m4h.png" alt="Image description" /></a></p>
<p>This block contains the cloud provider information, such as the region and the CLI profile, used to provision the cloud services.</p>
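<p>Putting the two blocks together, a minimal "01-providers.tf" along the lines described above might look like this (the version pin and profile are illustrative; since the walkthrough keeps state local, no backend block is shown):</p>

```shell
# Write a minimal providers file for the project (sketch, not the exact
# file from the repository).
cat > 01-providers.tf <<'EOF'
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"   # provider source on the Terraform Registry
      version = "~> 5.0"          # version constraint (illustrative)
    }
  }
}

provider "aws" {
  region  = "us-east-1"  # region where resources are provisioned
  profile = "default"    # AWS CLI profile created by `aws configure`
}
EOF
```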
<p>02-main.tf</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--I8aTEqCW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dpr0nui27qv01sna6fay.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--I8aTEqCW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dpr0nui27qv01sna6fay.png" alt="Image description" /></a></p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--7RSGtSof--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/waz6rk6lc6jqcgxpa9uh.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7RSGtSof--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/waz6rk6lc6jqcgxpa9uh.png" alt="Image description" /></a></p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--K4dpXykC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sdpsr8vmbnhgtunarf5y.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--K4dpXykC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sdpsr8vmbnhgtunarf5y.png" alt="Image description" /></a></p>
<p>03_variables.tf</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--EwjxwLaK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jtqpd0ksg9h14afx0y7g.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--EwjxwLaK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jtqpd0ksg9h14afx0y7g.png" alt="Image description" /></a></p>
<p>04_outputs.tf</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--pi_LsA4M--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5fqm9fjdfo3erf8arhcu.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pi_LsA4M--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5fqm9fjdfo3erf8arhcu.png" alt="Image description" /></a></p>
<p>05_terraform.tfvars.tf</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--wwT-ZrqZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wp4abz9u8zj8a0j9hnso.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wwT-ZrqZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wp4abz9u8zj8a0j9hnso.png" alt="Image description" /></a></p>
<p>Webfiles</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--ujqKxass--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4ycavj9saqkr0yyo4px3.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ujqKxass--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4ycavj9saqkr0yyo4px3.png" alt="Image description" /></a></p>
<p>AWS configured successfully</p>
<pre><code class="lang-plaintext">Phase 4: Terraform fmt, terraform validate, terraform plan
</code></pre>
<p>1) Go to the Git Bash terminal and run the Terraform commands<br />2) <code>terraform init</code><br />3) <code>terraform fmt</code><br />4) <code>terraform validate</code><br />5) <code>terraform apply</code><br />6) Observe that the “terraform.tfstate” file is stored locally</p>
<p>Terraform init</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--TgPkA7CY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4ziptynmzfj3ge2x9t52.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TgPkA7CY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4ziptynmzfj3ge2x9t52.png" alt="Image description" /></a></p>
<p>Terraform fmt</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--rzGlDq-V--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i3q5ms8fiaxbkvur3ovk.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rzGlDq-V--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i3q5ms8fiaxbkvur3ovk.png" alt="Image description" /></a></p>
<p>Terraform validate</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--NPP7AOj4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/prr6vmwde3cp2t3rq24c.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NPP7AOj4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/prr6vmwde3cp2t3rq24c.png" alt="Image description" /></a></p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--PDKtL4Cl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b2uxq1v4c4raurhmabn1.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PDKtL4Cl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b2uxq1v4c4raurhmabn1.png" alt="Image description" /></a></p>
<p>Terraform plan</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--Punnt0gS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3xg9t0ty4jjz26x8s53a.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Punnt0gS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3xg9t0ty4jjz26x8s53a.png" alt="Image description" /></a></p>
<p>Terraform apply</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--Olu1_SNR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i51dq48ndo38kvx72lo8.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Olu1_SNR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i51dq48ndo38kvx72lo8.png" alt="Image description" /></a></p>
<p>static-web-statefile<br />terraform.tfstate</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--V2wCWOPG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fkuw6720gqybtf9nad36.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--V2wCWOPG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fkuw6720gqybtf9nad36.png" alt="Image description" /></a></p>
<pre><code class="lang-plaintext">Phase 5: Verify that resources are created In AWS cloud
</code></pre>
<p>1) Go to AWS S3 and verify that the bucket is created<br />2) Verify that the web files are uploaded into the bucket<br />3) Verify that the permissions have been created</p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--j2hZSjga--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/41f7vbakgarrvej6p2vq.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--j2hZSjga--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/41f7vbakgarrvej6p2vq.png" alt="Image description" /></a></p>
<p><a target="_blank" href="https://res.cloudinary.com/practicaldev/image/fetch/s--yTy2ATKL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qtdw0obq1zehyu43h0um.png"><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yTy2ATKL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qtdw0obq1zehyu43h0um.png" alt="Image description" /></a></p>
<p>4) Verify that the web endpoint is created</p>
<pre><code class="lang-plaintext">Phase 6: Verify that web application accessible successfully
</code></pre>
<p>1) Go to the terminal and observe the output values<br />2) Copy the endpoint URL and access it from a web browser<br />3) Verify that “index.html” is displayed as the web page<br />4) Verify that “error.html” is displayed as the error page</p>
<p>Terraform destroy</p>
<p><strong>Conclusion:</strong></p>
<p>This project successfully optimized web application performance with AWS CloudFront, New Relic, and Terraform. We deployed a secure, globally accessible web app using Terraform's infrastructure as code (IaC). By fine-tuning latency, implementing SSL/TLS, and integrating CloudFront, we significantly improved the user experience. Leveraging AWS Lambda, we monitored CloudFront logs and pushed the data to New Relic, enhancing infrastructure visibility. This project demonstrates the power of cloud-native solutions and monitoring tools in delivering fast, reliable web applications.</p>
<p>Congratulations!!!! 🔥🚀</p>
<p>Let's Stay Connected:</p>
<ol>
<li><p>🌐 Website: <a target="_blank" href="https://www.praful.cloud/">Visit my Website</a></p>
</li>
<li><p>💼 LinkedIn: <a target="_blank" href="https://linkedin.com/in/prafulpatel16">Connect with me on LinkedIn</a></p>
</li>
<li><p>🐙 GitHub: <a target="_blank" href="https://github.com/prafulpatel16">Check out my GitHub</a></p>
</li>
<li><p>🎬 YouTube: <a target="_blank" href="https://www.youtube.com/@prafulpatel16">Subscribe to my YouTube Channel</a></p>
</li>
<li><p>✍️ Medium: <a target="_blank" href="https://medium.com/@prafulpatel16">Read my articles on Medium</a></p>
</li>
<li><p>📝 Dev.to: <a target="_blank" href="https://dev.to/prafulpatel16">Follow me on Dev.to</a></p>
</li>
</ol>
<p>#AWS #CloudEngineering #CloudComputing #AmazonWebServices #AWSArchitecture #DevOps #CloudSolutions #CloudSecurity #InfrastructureAsCode #AWSCertification #Serverless #AWSCommunity #TechBlogs #CloudExperts #CloudMigration #CloudOps #AWSJobs #TechIndustry #CareerInTech #InnovationInCloud #devops #cloudengineerjobs #devopsjobs #azure #gcp #oci #cloudjobs #kubernetes</p>
]]></content:encoded></item><item><title><![CDATA[🌐 AWS - Seamless Web App Integration with RDS MySQL & Automated Deployment 🤖]]></title><description><![CDATA[🚀 Introduction
In the ever-evolving landscape of cloud computing, efficiency and automation are the keys to success. Imagine a seamless integration between Amazon Elastic Compute Cloud (EC2) instances and Amazon Relational Database Service (RDS) usi...]]></description><link>https://praful.cloud/aws-seamless-web-app-integration-with-rds-mysql-automated-deployment</link><guid isPermaLink="true">https://praful.cloud/aws-seamless-web-app-integration-with-rds-mysql-automated-deployment</guid><category><![CDATA[#AWS #CloudEngineering #CloudComputing #AmazonWebServices #AWSArchitecture #DevOps #CloudSolutions #CloudSecurity #InfrastructureAsCode #AWSCertification #Serverless #AWSCommunity #TechBlogs #CloudExperts #CloudMigration #CloudOps #AWSJobs #TechIndustry #CareerInTech #InnovationInCloud #devops #cloudengineerjobs #devopsjobs #azure #gcp #oci #cloudjobs]]></category><dc:creator><![CDATA[Praful Patel]]></dc:creator><pubDate>Wed, 25 Oct 2023 03:13:17 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1698182293914/782cfc63-b25c-4dab-ade7-7ba4d4fb0887.gif" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-introduction"><strong>🚀 Introduction</strong></h3>
<p>In the ever-evolving landscape of cloud computing, efficiency and automation are the keys to success. Imagine a seamless integration between Amazon Elastic Compute Cloud (EC2) instances and Amazon Relational Database Service (RDS) using the powerful tool, Terraform. This journey embarks on an exciting exploration of how to effortlessly bring these two fundamental AWS services together, streamlining your infrastructure deployment process while minimizing manual configuration steps. Let's delve into this automation adventure step by step, unlocking the full potential of your AWS resources! 🌐🛠️💼</p>
<h3 id="heading-objective">🎯 Objective</h3>
<p>The objective of this post is to spread knowledge about #aws cloud services: how and where to consume them to solve real-world business challenges.</p>
<h3 id="heading-use-case">🚀 Use Case:</h3>
<p>Imagine seamlessly integrating your web application with an RDS MySQL database while automating the entire deployment process. In a real-time use case scenario, this means that when your web application needs to access or store data, it can do so effortlessly with the RDS MySQL database. This integration ensures your application can quickly and efficiently manage user data, transactions, or any other data-related functions without manual intervention.</p>
<p>Moreover, by automating the deployment process, you eliminate the need for manual configuration and setup. When your web application needs to scale, recover from failures, or adapt to changes, it can do so automatically. This ensures that your application remains responsive, available, and resilient, providing a seamless experience for both users and administrators.</p>
<p>This real-time use case exemplifies how combining RDS MySQL with automated deployment can streamline your web application's data management, making it more robust and adaptable to the demands of a dynamic online environment.</p>
<h3 id="heading-challenge-automating-rds-endpoint-and-mysql-connection-string"><strong>Challenge: Automating RDS Endpoint and MySQL Connection String</strong></h3>
<p><strong>Objective</strong>: Can you automate the generation of an RDS endpoint and MySQL connection string using Terraform?</p>
<p><strong>Scenario</strong>: You're managing a dynamic cloud environment, and part of your task involves setting up an RDS database. To ensure the applications running on your EC2 instances can seamlessly connect to the database, you need an automated way to retrieve the RDS endpoint and construct the MySQL connection string.</p>
<p><strong>Challenge</strong>: Create a Terraform solution that automatically fetches the RDS endpoint and uses it to construct the MySQL connection string in the user_data section of an EC2 instance.</p>
<p><strong>Solution</strong>: In your Terraform configuration, you can use the <code>templatefile</code> function to automatically generate the MySQL connection string with the RDS endpoint and other database credentials. Here's an example of how this can be achieved in Terraform:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1698198112942/64b92a3b-2aae-4007-8cd9-c4c0af9c59a0.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1698198191110/13137873-794c-4222-a053-a737f261b846.png" alt class="image--center mx-auto" /></p>
<pre><code class="lang-plaintext">user_data = templatefile("user_data.tfpl", {
  rds_endpoint = aws_db_instance.rds.endpoint,
  user         = var.database_user,
  password     = var.database_password,
  dbname       = var.database_name
})
</code></pre>
<p>This code automatically fetches the RDS endpoint (<code>aws_db_instance.rds.endpoint</code>) and uses it to construct the MySQL connection string in the user_data section of your EC2 instance.</p>
<p>With this solution, you've automated the process of setting up the MySQL connection, ensuring that your EC2 instances can seamlessly connect to the RDS database without manual intervention. 🚀🔗🤖</p>
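<p>For context, the <code>user_data.tfpl</code> template on the EC2 side might look roughly like this sketch (the target file path and config format are assumptions for illustration; Terraform substitutes <code>${rds_endpoint}</code>, <code>${user}</code>, <code>${password}</code>, and <code>${dbname}</code> before the script runs on the instance):</p>
<pre><code class="lang-plaintext">#!/bin/bash
# Illustrative sketch of user_data.tfpl; the real template is project-specific.
cat &lt;&lt;EOF &gt; /var/www/html/db.ini
[database]
host     = ${rds_endpoint}
user     = ${user}
password = ${password}
dbname   = ${dbname}
EOF
</code></pre>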
<h3 id="heading-tools-amp-technologies-covered-in-our-aws-cloud-infrastructure-journey"><strong>Tools &amp; Technologies Covered in Our AWS Cloud Infrastructure Journey 🛠️🌐</strong></h3>
<ol>
<li><p><strong>AWS Cloud ☁️</strong>: Our foundation for this journey, the AWS Cloud, provides limitless possibilities for building and deploying applications.</p>
</li>
<li><p><strong>VPC 🏞️</strong>: The Virtual Private Cloud gives us control over our network environment.</p>
<ul>
<li><p>Subnets 🌐: These segments of the VPC help us organize and secure resources effectively.</p>
</li>
<li><p>Internet Gateway 🚪: The gateway to the internet, enabling external access and communication.</p>
</li>
<li><p>Route Tables 🚦: These are like roadmaps for network traffic within the VPC, ensuring data flows where it should.</p>
</li>
<li><p>Security Groups 🔒: Acting as virtual bouncers, security groups manage inbound and outbound traffic to keep our resources safe.</p>
</li>
</ul>
</li>
<li><p><strong>EC2 Machine 💻</strong>: Elastic Compute Cloud instances are like virtual Swiss Army knives, ready to handle various computing tasks.</p>
</li>
<li><p><strong>RDS Database - MySQL 🗄️</strong>: Amazon RDS provides us with a managed MySQL database to store and manage our data securely.</p>
</li>
<li><p><strong>Mobaxterm SSH Client 🚀</strong>: Mobaxterm gives us the keys to securely access and manage our EC2 instances using SSH.</p>
</li>
<li><p><strong>Terraform ⛏️</strong>: This Infrastructure as Code tool automates the provisioning of cloud resources with elegance.</p>
</li>
<li><p><strong>Shell Script 🐚</strong>: Good ol' shell scripts, our trusty sidekicks for automating tasks and configurations.</p>
</li>
</ol>
<h3 id="heading-solution-diagram">🛠️ <strong>Solution Diagram:</strong></h3>
<p>Streamlining Web App Integration with RDS MySQL and Terraform Automation</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1698201290159/ecf4b174-821f-4cff-8d7c-afa10c47bb8c.png" alt class="image--center mx-auto" /></p>
<p>🚀 <strong>GitHub Repository</strong>: <a target="_blank" href="https://github.com/prafulpatel16/terraform-aws.git">terraform-aws</a></p>
<p>02-aws-rds-integration-tf</p>
<h3 id="heading-description">📂<strong>Description:</strong></h3>
<p>Amazon Elastic Compute Cloud (Amazon EC2) ⚙️ provides on-demand, scalable computing capacity in the Amazon Web Services (AWS) Cloud ☁️. Using Amazon EC2 reduces hardware costs 💰, allowing you to develop and deploy applications faster ⏩. You can use Amazon EC2 to launch as many or as few virtual servers as you need, configure security 🔐 and networking 🌐, and manage storage 📂. You can add capacity (scale up) to handle compute-heavy tasks, such as monthly or yearly processes 📅, or spikes in website traffic 🚀. When usage decreases, you can reduce capacity (scale down) again ⬇️.</p>
<p>Features of EC2:</p>
<ol>
<li><p>Instances</p>
</li>
<li><p>Amazon Machine Images (AMIs)</p>
</li>
<li><p>Instance types</p>
</li>
<li><p>Key pairs</p>
</li>
<li><p>Instance store volumes</p>
</li>
<li><p>Amazon EBS volumes</p>
</li>
<li><p>Regions, Availability Zones, Local Zones, AWS Outposts, and Wavelength Zones</p>
</li>
<li><p>Security groups</p>
</li>
<li><p>Elastic IP addresses</p>
</li>
<li><p>Tags</p>
</li>
<li><p>Virtual private clouds (VPCs)</p>
</li>
</ol>
<p>📚 <a target="_blank" href="https://docs.aws.amazon.com/ec2/?nc2=h_ql_doc_ec2">AWS EC2 Documentation</a> 🦁</p>
<p>What is Amazon RDS?</p>
<p>Amazon Relational Database Service (Amazon RDS) is a web service that makes it easier to set up, operate, and scale a relational database in the AWS Cloud. It provides cost-efficient, resizable capacity for an industry-standard relational database and manages common database administration tasks.</p>
<p><strong>DB instances</strong></p>
<p>A <em>DB instance</em> is an isolated database environment in the AWS Cloud. The basic building block of Amazon RDS is the DB instance.</p>
<p><strong>DB engines</strong></p>
<p>A <em>DB engine</em> is the specific relational database software that runs on your DB instance. Amazon RDS currently supports the following engines:</p>
<ul>
<li><p>MariaDB</p>
</li>
<li><p>Microsoft SQL Server</p>
</li>
<li><p>MySQL</p>
</li>
<li><p>Oracle</p>
</li>
<li><p>PostgreSQL</p>
</li>
</ul>
<p>📚 <a target="_blank" href="https://docs.aws.amazon.com/rds/?icmpid=docs_homepage_featuredsvcs">AWS RDS Documentation</a> 🐘</p>
<h3 id="heading-solution"><strong>💡 Solution:</strong></h3>
<p>The challenge of seamlessly integrating a web application with an RDS MySQL database and automating the deployment process demands an effective solution. This is where Terraform automation comes into play, providing a powerful toolset to address these challenges.</p>
<p>Using Terraform, we can define and provision the necessary infrastructure, including the RDS MySQL instance, security groups, and other components. This infrastructure is described in code, making it easy to manage, version, and replicate. By automating this process, we ensure that our RDS MySQL database and the associated resources are consistently deployed according to our specifications.</p>
<p>Terraform's automation capabilities further extend to the web application's deployment. We can define the deployment process as code, specifying how the application should be packaged, configured, and launched. This eliminates the need for manual intervention, reducing the risk of human errors and ensuring a more predictable and reliable deployment process.</p>
<p>By integrating our web application with the RDS MySQL database and automating the deployment with Terraform, we create a robust and efficient solution. This approach enables the web application to seamlessly connect with the database, allowing for data storage and retrieval. Moreover, the automation ensures that the entire process is repeatable and can be scaled as needed, making it a cost-effective and time-saving solution for real-time use cases.</p>
<p>In summary, Terraform automation solves the challenges of integrating a web application with an RDS MySQL database and streamlining the deployment process. It empowers us to efficiently manage our infrastructure and application deployments, ultimately resulting in a more agile and responsive system.</p>
<p><strong>Implementation Steps</strong> 🛠️</p>
<p>Planning your Terraform files and structure is a crucial step in effectively managing your infrastructure as code. Here's a description of the key considerations and steps to help you plan your Terraform files and structure.</p>
<p>Terraform Planning:</p>
<ol>
<li><p><strong>Project Scope and Goals:</strong> Begin by clearly defining the scope and goals of your Terraform project. What infrastructure components are you managing, and what do you aim to achieve with Terraform? Having a well-defined project scope will guide your decisions throughout the planning process.</p>
</li>
<li><p><strong>File Organization:</strong> Think about how you want to organize your Terraform files. A common approach is to create separate directories for different components or environments (e.g., "network," "app," "prod," "dev"). This makes it easier to manage and maintain your code as your project grows.</p>
</li>
<li><p><strong>Provider Configuration:</strong> Identify the cloud providers (e.g., AWS, Azure, Google Cloud) and configure the required provider blocks in your Terraform files. Ensure you have the necessary credentials and access rights to interact with these providers.</p>
</li>
<li><p><strong>Variables and Input Data:</strong> Define the variables that your Terraform modules will use. These can include region-specific settings, instance types, security groups, and more. You can organize variables in separate files or use <code>variables.tf</code> files within each module.</p>
</li>
<li><p><strong>Outputs:</strong> Define outputs in your modules to extract information you need after provisioning infrastructure. Outputs can include IP addresses, URLs, or any other attributes that are required for application configuration.</p>
</li>
</ol>
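<p>Points 4 and 5 above can be sketched as follows (the names and descriptions are illustrative, not the project's exact declarations):</p>
<pre><code class="lang-plaintext"># variables.tf (illustrative)
variable "database_user" {
  description = "Master username for the RDS instance"
  type        = string
}

# outputs.tf (illustrative)
output "rds_endpoint" {
  description = "Connection endpoint of the RDS instance"
  value       = aws_db_instance.rds.endpoint
}
</code></pre>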
<p><strong>Phase 1: Setting the Stage</strong> 🏗️</p>
<p>To get started, we'll establish the groundwork for this automation project. We'll create the necessary Terraform files that define the infrastructure components, including <strong><code>main.tf</code></strong>, <strong><code>variables.tf</code></strong>, and <strong><code>outputs.tf</code></strong>. These files serve as the blueprints for our EC2 and RDS instances.</p>
<p><strong>Phase 2: Preparing EC2 for Integration</strong> 🛠️</p>
<p>The first step towards complete automation is configuring the EC2 instance. We'll set up the server, ensuring it's ready to host Praful's Portfolio web application. This includes installing any required software and dependencies to support the dynamic web page.</p>
<p><strong>Phase 3: RDS Database Creation</strong> 🗄️</p>
<p>The heart of this project is the AWS RDS MySQL database. We'll automate the process of creating the database, specifying the necessary parameters such as the database engine, version, and security settings. Terraform's power will shine as it orchestrates the creation of a robust and secure database environment.</p>
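<p>A minimal <code>aws_db_instance</code> resource might be sketched as follows; the identifier, engine version, sizes, and referenced security-group name are placeholders rather than the project's exact values:</p>
<pre><code class="lang-plaintext">resource "aws_db_instance" "rds" {
  identifier             = "web-app-db"    # illustrative name
  engine                 = "mysql"
  engine_version         = "8.0"           # illustrative version
  instance_class         = "db.t3.micro"
  allocated_storage      = 20
  db_name                = var.database_name
  username               = var.database_user
  password               = var.database_password
  vpc_security_group_ids = [aws_security_group.rds.id]  # assumed resource name
  skip_final_snapshot    = true            # convenient for demos only
}
</code></pre>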
<p><strong>Phase 4: Database-EC2 Integration</strong> 📊</p>
<p>This is where the magic happens. We'll automate the integration of the EC2 instance and the RDS MySQL database, ensuring they communicate seamlessly. Terraform will handle the network configurations, security groups, and database access permissions. This phase ensures that the web application can efficiently interact with the database.</p>
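<p>The key piece of that wiring is a security-group rule that allows only the web server to reach MySQL on port 3306. A hedged sketch, with assumed resource names:</p>
<pre><code class="lang-plaintext">resource "aws_security_group" "rds" {
  name   = "rds-mysql-sg"    # illustrative name
  vpc_id = aws_vpc.main.id   # assumed VPC resource name

  ingress {
    from_port       = 3306
    to_port         = 3306
    protocol        = "tcp"
    # Only the web server's security group may connect to the database.
    security_groups = [aws_security_group.web.id]  # assumed resource name
  }
}
</code></pre>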
<p><strong>Phase 5: Dynamic Web Page and Data Flow</strong> 💼</p>
<p>Praful's Portfolio web application features a dynamic employee (Emp) page. With this automation, we'll allow users to input employee details, such as name and location, directly on the web page. The data entered by users will be automatically saved into the RDS MySQL database, creating a smooth data flow from the web application to the database. This phase showcases the full potential of automation in keeping data up-to-date and synchronized.</p>
<p>Pre-requisite:</p>
<ol>
<li><p>AWS Free Tier</p>
</li>
<li><p>Web Application source code</p>
</li>
<li><p>Webserver installation script file</p>
</li>
<li><p>SSH Client</p>
</li>
</ol>
<h3 id="heading-terraform-file-structure">Terraform file structure:</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1698194738124/701ba36f-4d56-4b10-aaaa-fe547315a63e.png" alt class="image--center mx-auto" /></p>
<p>user_data.tfpl</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1698195227653/af3a05db-9c7e-4494-b659-33e8d94aa944.png" alt class="image--center mx-auto" /></p>
<p>Write the Terraform configurations for all the AWS services:</p>
<ul>
<li><p>Configure the VPC</p>
</li>
<li><p>Configure the subnets</p>
</li>
<li><p>Configure the internet gateway</p>
</li>
<li><p>Configure the security groups</p>
</li>
<li><p>Configure the route tables for the web server and the RDS DB server</p>
</li>
<li><p>Configure the EC2 machine with the user-data script</p>
</li>
<li><p>Configure the RDS MySQL server</p>
</li>
<li><p>Configure the AWS provider</p>
</li>
</ul>
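<p>As one example from this list, the EC2 machine can consume the rendered user-data template roughly like this (the AMI ID and the subnet/security-group resource names are illustrative placeholders):</p>
<pre><code class="lang-plaintext">resource "aws_instance" "web" {
  ami                    = "ami-0123456789abcdef0"  # illustrative AMI ID
  instance_type          = "t2.micro"
  subnet_id              = aws_subnet.public.id     # assumed resource name
  vpc_security_group_ids = [aws_security_group.web.id]

  user_data = templatefile("user_data.tfpl", {
    rds_endpoint = aws_db_instance.rds.endpoint,
    user         = var.database_user,
    password     = var.database_password,
    dbname       = var.database_name
  })
}
</code></pre>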
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1698194804728/f1cc9cb2-b67f-4ef5-9515-39311dffaef5.png" alt class="image--center mx-auto" /></p>
<p>main.tf</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1698195429700/de35c5c0-fbaa-4ca7-a1a8-c64c4d10d2ae.png" alt class="image--center mx-auto" /></p>
<p>variables.tf</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1698195373695/1de894c9-f1ae-4146-bd5f-dcb33226e81f.png" alt class="image--center mx-auto" /></p>
<p>outputs.tf</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1698195393367/0a9f37a5-97d0-47a9-af85-f5e3a8e63fd9.png" alt class="image--center mx-auto" /></p>
<p>Let's automate the infrastructure:</p>
<p>terraform init</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1698195841299/4b445729-ae01-444d-baf6-93d2c8223fff.png" alt class="image--center mx-auto" /></p>
<p>terraform fmt</p>
<p>terraform validate</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1698195905235/e44f9ebc-b3f7-48f8-bd14-d2a5348c06b8.png" alt class="image--center mx-auto" /></p>
<p>terraform plan</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1698195980227/3511be65-fe5f-4d2d-938b-e24983494954.png" alt class="image--center mx-auto" /></p>
<p>terraform apply -auto-approve</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1698196184187/1af06449-a266-4317-a173-e1e7283d14fb.png" alt class="image--center mx-auto" /></p>
<p>terraform apply complete</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1698196623180/972fc0be-f626-4518-9e60-9a4f80310bb7.png" alt class="image--center mx-auto" /></p>
<p>Let's validate in the AWS console that the services are created.</p>
<p>VPC created</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1698196825546/aeb1b20b-1502-44fb-bc72-fb4e77cb6353.png" alt class="image--center mx-auto" /></p>
<p>EC2 web server is running</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1698196737282/2596a791-3481-484c-8ac6-fc56400c9b09.png" alt class="image--center mx-auto" /></p>
<p>RDS MySQL instances created</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1698196908677/deb30a44-1c07-4619-8a18-3d251f32ab23.png" alt class="image--center mx-auto" /></p>
<p>Let's copy the IP address into the browser to access the web application</p>
<p>Web application accessed successfully</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1698196964367/fa17efb2-1344-4515-b480-3942ea03cde3.png" alt class="image--center mx-auto" /></p>
<p>Access emp.php page</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1698197019628/b396538b-a6ff-4a4d-9885-37b5c48efff7.png" alt class="image--center mx-auto" /></p>
<p>Insert some test data into the web app and validate that it is inserted successfully</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1698197128920/6adc09f6-65dd-4855-be91-3408ff5561f7.png" alt class="image--center mx-auto" /></p>
<p>Let's log in to the RDS MySQL database and validate from the backend that the data is present in the database</p>
<p>Log in to MySQL through the web server, using it as a jump host</p>
<p>Copy the RDS endpoint</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1698197217625/f28cb08a-10e3-4eec-bbcf-8e385c0ba046.png" alt class="image--center mx-auto" /></p>
<p>RDS endpoint:</p>
<p><code>terraform-20231025010716744900000001.c5xf4htadaog.us-east-1.rds.amazonaws.com</code></p>
<p>Connect to EC2 server</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1698197263588/089a7fc2-a5d7-4589-9505-5f9994f4c4d0.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1698197287257/bba890c1-08a0-4420-aae1-275e5fc8024a.png" alt class="image--center mx-auto" /></p>
<p>Provide the MySQL connection string to log in:</p>
<pre><code class="lang-plaintext">mysql -h &lt;rdsendpoint&gt; -u &lt;username&gt; -p
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1698197508188/5dd2ffe2-b3bf-4ca8-8b3d-a1691c024ee5.png" alt class="image--center mx-auto" /></p>
<p>Logged in to MySQL successfully</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1698197537170/b3511ccb-0e70-45dc-b539-c118c3cfe3f1.png" alt class="image--center mx-auto" /></p>
<p>Verify that the "empdb" database was created and exists:</p>
<pre><code class="lang-plaintext">show databases;
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1698197603860/752ad041-bebe-4b0d-a6af-84721602b403.png" alt class="image--center mx-auto" /></p>
<pre><code class="lang-plaintext">use empdb;
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1698197642832/b7093fa2-2ab0-4b2a-bdde-0610bef5b465.png" alt class="image--center mx-auto" /></p>
<p>Verify that the "EMPLOYEES" table exists in the database</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1698197694650/c85b99f0-e747-4e5d-9f3a-bac09c78e054.png" alt class="image--center mx-auto" /></p>
<p>Query the "EMPLOYEES" table to validate that the data entered from the web app exists in the database:</p>
<pre><code class="lang-plaintext">select * from EMPLOYEES;
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1698197872195/2ba9d761-9503-4c21-bd2a-faddfb4d14be.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1698197825651/1bb864e4-1a31-4f17-8abd-bc71a15f03af.png" alt class="image--center mx-auto" /></p>
<p>Congratulations, we have done it!</p>
<p>Now, let's destroy all the resources in AWS to avoid unnecessary billing.</p>
<p>terraform destroy -auto-approve</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1698198468152/40b5c768-9465-4d97-827b-78b2a76fccc3.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1698198807212/4f07bc47-1a5c-4707-aba7-03742119c86e.png" alt class="image--center mx-auto" /></p>
<p><strong>Conclusion:</strong> 🌟</p>
<p>This automation project demonstrates the power of Terraform in streamlining complex infrastructure setups. By integrating Praful's Portfolio web application with an AWS RDS MySQL database, we've created a dynamic and efficient data management system. This blog post serves as your guide through this exciting journey, providing insights and expertise to help you master automation in the AWS cloud.</p>
<p>Get ready to explore the beauty of complete automation as we dive into the integration of EC2 and AWS RDS! 🔗✨</p>
<p>Happy automating! 🚀</p>
<p><strong>Let's Stay Connected:</strong></p>
<p>🌐 <strong>Website:</strong> <a target="_blank" href="https://www.praful.cloud/"><strong>Visit my website</strong></a> for the latest updates and articles.</p>
<p>💼 <strong>LinkedIn:</strong> Connect with me on <a target="_blank" href="https://linkedin.com/in/prafulpatel16"><strong>LinkedIn</strong></a> for professional networking and insights.</p>
<p>📎 <strong>GitHub:</strong> Check out my projects and repositories on <a target="_blank" href="https://github.com/prafulpatel16/prafulpatel16"><strong>GitHub</strong></a>.</p>
<p>🎥 <strong>YouTube:</strong> Subscribe to my <a target="_blank" href="https://www.youtube.com/@prafulpatel16"><strong>YouTube channel</strong></a> for tech tutorials and more.</p>
<p>📝 <strong>Medium:</strong> Find my tech articles on <a target="_blank" href="https://medium.com/@prafulpatel16"><strong>Medium</strong></a>.</p>
<p>📰 <strong><a target="_blank" href="http://Dev.to">Dev.to</a>:</strong> Explore my developer-focused content on <a target="_blank" href="http://Dev.to">Dev.to</a>.</p>
<p>Let's connect and stay updated with the latest in technology and development! 🚀🔗</p>
<p>#AWS #CloudEngineering #CloudComputing #AmazonWebServices #AWSArchitecture #DevOps #CloudSolutions #CloudSecurity #InfrastructureAsCode #AWSCertification #Serverless #AWSCommunity #TechBlogs #CloudExperts #CloudMigration #CloudOps #AWSJobs #TechIndustry #CareerInTech #InnovationInCloud #devops #cloudengineerjobs #devopsjobs #azure #gcp #oci #cloudjobs</p>
]]></content:encoded></item><item><title><![CDATA[🌐 AWS - Automated Import of Data Dump and Initialization of an AWS RDS Database]]></title><description><![CDATA[🚀 Introduction
In the dynamic world of cloud computing, mastering the right tools and technologies is the key to unleashing your full potential. In this journey, we'll explore a comprehensive toolkit of AWS services and other essential tools that ev...]]></description><link>https://praful.cloud/aws-automated-import-of-data-dump-and-initialization-of-an-aws-rds-database</link><guid isPermaLink="true">https://praful.cloud/aws-automated-import-of-data-dump-and-initialization-of-an-aws-rds-database</guid><category><![CDATA[#AWS #CloudEngineering #CloudComputing #AmazonWebServices #AWSArchitecture #DevOps #CloudSolutions #CloudSecurity #InfrastructureAsCode #AWSCertification #Serverless #AWSCommunity #TechBlogs #CloudExperts #CloudMigration #CloudOps #AWSJobs #TechIndustry #CareerInTech #InnovationInCloud #devops #cloudengineerjobs #devopsjobs #azure #gcp #oci #cloudjobs]]></category><dc:creator><![CDATA[Praful Patel]]></dc:creator><pubDate>Mon, 23 Oct 2023 02:38:28 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1698028606216/4c3e1f33-e0f9-4611-b5e1-1f9a52b76018.gif" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-introduction">🚀 Introduction</h3>
<p>In the dynamic world of cloud computing, mastering the right tools and technologies is the key to unleashing your full potential. In this journey, we'll explore a comprehensive toolkit of AWS services and other essential tools that every cloud enthusiast and aspiring cloud engineer should be familiar with. Buckle up as we dive into a world of cloud possibilities!</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9taxkdh30w8o03a630fg.png" alt="Image description" /></p>
<h3 id="heading-objective">Objective:</h3>
<p>The objective of this post is to spread knowledge about #aws cloud services: how and where to consume them to solve real-world business challenges.</p>
<p><a target="_blank" href="https://github.com/prafulpatel16/terraform-aws/blob/master/01-aws-rds-dump-tf/rds-data-import-tf.md">GitHub Repository</a> 🚀</p>
<h3 id="heading-description">Description:</h3>
<h3 id="heading-use-case">🚀 Use Case:</h3>
<p>In a real-world scenario, our database team faced the challenge of deploying database services and importing substantial data dumps for a migration project. This project aimed to transfer a large volume of data into a MySQL database before launching a web application.</p>
<h3 id="heading-solution">💡 Solution:</h3>
<p>I embraced this challenge and embarked on a mission to efficiently resolve the migration of massive data dumps to an #aws RDS MySQL database. And I did it the smart way - with automation, leveraging the power of Terraform.</p>
<p><strong>Solution Diagram:</strong></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u0qkkmxlm76gr8ye9l8b.png" alt="Image description" /></p>
<p>Stay tuned for an exciting journey into the world of seamless database migration in the AWS cloud. 🌥️💾💼</p>
<p>The solution:</p>
<p><strong>1. Manual way: create an AWS RDS MySQL instance</strong></p>
<p><strong>2. Automated way: create an AWS RDS instance and import a data dump (.sql) file</strong></p>
<h4 id="heading-tools-amp-technologies-covered">💻Tools &amp; Technologies Covered</h4>
<p>Let's take a closer look at the tools and technologies we'll be delving into:</p>
<h4 id="heading-aws-services">AWS Services</h4>
<ul>
<li><p>🌐 <strong>VPC (Virtual Private Cloud)</strong> - Create isolated network environments in the AWS cloud.</p></li>
<li><p>🔒 <strong>Security Groups</strong> - Define and manage inbound/outbound traffic rules to your AWS resources.</p></li>
<li><p>💻 <strong>EC2 (Elastic Compute Cloud)</strong> - Launch scalable virtual servers in the cloud.</p></li>
<li><p>🗄️ <strong>AWS RDS (Relational Database Service) - MySQL</strong> - Manage MySQL databases with ease, handled by AWS.</p></li>
</ul>
<h4 id="heading-other-tools">Other Tools</h4>
<ul>
<li><p>🔧 <strong>Terraform</strong> - The Infrastructure as Code tool to automate and manage cloud resources.</p></li>
<li><p>🐙 <strong>GitHub</strong> - Collaborate, store, and version your code effectively.</p></li>
<li><p>🖋️ <strong>VS Code (Visual Studio Code)</strong> - A versatile, free code editor for a seamless development experience.</p></li>
</ul>
<p><strong>1. Manual way: create an AWS RDS MySQL instance</strong></p>
<ol>
<li>From the AWS Console, search for RDS and click the RDS result under Services:</li>
</ol>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/49wsuhj48rkp1i11j8q7.png" alt="Image description" /></p>
<ol>
<li>In the RDS dashboard, click Subnet Groups in the left-hand menu:</li>
</ol>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t80ever8t605tcktylzf.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cidl1tkyjrvmyaijrwf5.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mz2bj93a4ph79pfgvtho.png" alt="Image description" /></p>
<p>Create Security Group</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tbk4d242dv5ovm42na6b.png" alt="Image description" /></p>
<p>Create Database</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q0jpt79dycsrxw5zdkgl.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/err8lu7hxnn0cseu1oxj.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dk0m5iuq5wzjvjxd40at.png" alt="Image description" /></p>
<p>Go to AWS Systems Manager and click Session Manager:</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ycrs7kel6fongucnutmg.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sg27ie9ec3oqci2tyeor.png" alt="Image description" /></p>
<p>Session manager started</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/82lu5pgp18nhangmm30g.png" alt="Image description" /></p>
<p>Log in to the session and install the MySQL client:</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vhvsldvbh67l0xspdb3x.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u2hzwylaywukc90byw66.png" alt="Image description" /></p>
<p>Go to the RDS instance and copy the RDS endpoint:</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pxzfxrwsv6zhddlhswxc.png" alt="Image description" /></p>
<p>Log in to MySQL:</p>
<pre><code class="lang-bash">mysql -h &lt;rds-endpoint&gt; -u &lt;username&gt; -p &lt;dbname&gt;
</code></pre>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sm2wxczsp0q3spjeg52s.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w4doyywvdwk7ixfr7flv.png" alt="Image description" /></p>
<p><strong>Now let's do it the fully automated way: importing the data dump into the database using Terraform.</strong></p>
<p><strong>2. Automated way: create an AWS RDS instance and import the data dump (.sql) file</strong></p>
<p>Objective: import the "obbs" data into AWS RDS in an automated way.</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jytjfg5e38h4tmtnvv47.png" alt="Image description" /></p>
<p><strong>Step-by-Step Instructions:</strong> Follow these steps to automate the deployment of the AWS RDS instance and the initialization of the database:</p>
<p>Make sure you have the Terraform configuration files (<code>main.tf</code>, <code>variables.tf</code>, and <code>outputs.tf</code>) in a directory.</p>
<p>Place your <code>obbs.sql</code> file and <code>connect_to_rds.sh</code> script in the same directory.</p>
<p>Open a terminal and navigate to the directory containing your Terraform files and the SQL dump file.</p>
<p>Run <code>terraform init</code> to initialize the working directory.</p>
<p>Run <code>terraform plan</code> to see the execution plan for your infrastructure.</p>
<p>If the plan looks correct, apply the configuration with <code>terraform apply</code>.</p>
<p>Terraform will prompt you to confirm the plan. Enter yes to proceed.</p>
<p>Terraform will provision the RDS instance and the EC2 instance, copy the SQL dump file and script to the EC2 instance, and execute the script to connect to the RDS instance and import the SQL dump.</p>
<p>Once the process is complete, Terraform will output the RDS endpoint.</p>
<p>You have successfully automated the deployment of an AWS RDS database and initialized it with your SQL dump file.</p>
<p><strong>Project Structure:</strong></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/aemt4jfy2qqmrnlwmux5.png" alt="Project Structure" /></p>
<p><strong>Implementation:</strong></p>
<p><strong>Step 1: Initializing Terraform</strong></p>
<p>The journey begins with initializing Terraform in your project directory:</p>
<pre><code class="lang-bash">terraform init
</code></pre>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lpa04wxjr6o7rwxvxb0v.png" alt="Terraform Init" /></p>
<p><strong>Step 2: Planning the Deployment</strong></p>
<p>Use Terraform to plan your infrastructure deployment:</p>
<pre><code class="lang-bash">terraform plan
</code></pre>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nkb8xngqvyaehzblimq0.png" alt="Terraform Plan" /></p>
<p><strong>Step 3: Executing the Deployment</strong></p>
<p>Now, let's apply the deployment:</p>
<pre><code class="lang-bash">terraform apply
</code></pre>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ui68fhr6wvh79o98w2co.png" alt="Terraform Apply" /></p>
<p><strong>Step 4: Verification and Validation</strong></p>
<p>With the deployment complete, it's time to verify and validate that Terraform has automated the process. Here's what you need to do:</p>
<p><strong>A. Logging into EC2</strong></p>
<ul>
<li>Log in to your EC2 instance to establish the connection:</li>
</ul>
<pre><code class="lang-bash">ssh -i &lt;your-key-pair.pem&gt; ec2-user@&lt;your-ec2-public-ip&gt;
</code></pre>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tqr2e4sjejj9zsp09xiz.png" alt="Connect to EC2" /></p>
<p><strong>B. Accessing RDS Services</strong></p>
<ul>
<li>In the AWS Management Console, go to the RDS service and click on "Databases."</li>
</ul>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xlw0gcabe9l1qgy665l5.png" alt="RDS Services" /></p>
<p><strong>C. Obtaining the RDS Endpoint</strong></p>
<ul>
<li>Click on your RDS database and navigate to "Connectivity &amp; Security" to copy the RDS endpoint.</li>
</ul>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u67cbre23ioy58fl9p4x.png" alt="RDS Endpoint" /></p>
<p><strong>D. Establishing RDS Login</strong></p>
<ul>
<li>Go back to your EC2 terminal and provide the login string to access your RDS instance:</li>
</ul>
<pre><code class="lang-bash">mysql -h &lt;rds-endpoint&gt; -u &lt;username&gt; -p
</code></pre>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/inhhr7sb7pdn5x4ehxgm.png" alt="RDS Login" /></p>
<p><strong>Step 5: Success!</strong></p>
<p>Login to the RDS database is successful. Now, let's verify that the "obbs" database is created automatically using the following MySQL commands:</p>
<ul>
<li>Show databases:</li>
</ul>
<pre><code class="lang-sql"><span class="hljs-keyword">show</span> <span class="hljs-keyword">databases</span>;
</code></pre>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h4u40lneqqxlivry722q.png" alt="Show Databases" /></p>
<ul>
<li>Use the "obbs" database:</li>
</ul>
<pre><code class="lang-sql"><span class="hljs-keyword">use</span> obbs;
</code></pre>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9rta0ei0ar7xtzp4hfla.png" alt="Use obbs" /></p>
<ul>
<li>Confirm that the data tables have been successfully imported into the "obbs" database:</li>
</ul>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6v6984t9tn72exx9qjha.png" alt="Data Import" /></p>
<p>Deploying an automated data dump import to an AWS RDS MySQL database is a powerful way to streamline your database management. However, like any technical project, this process comes with its fair share of challenges. In this blog post, we'll explore two major hurdles faced during this endeavor and how we successfully overcame them with the help of Terraform and some creative problem-solving.</p>
<p>🚧 <strong>Challenge #1: RDS Remote Login Issues</strong></p>
<p>The first obstacle we encountered was related to RDS remote login. The RDS endpoint provided by AWS includes the port in the connection string, which is common and expected for most manual configurations. However, Terraform handles this differently. It assumes the default port (3306) for MySQL, and when you try to use the endpoint with the port in Terraform's automated setup, it results in login failures.</p>
<p>🛠️ <strong>Challenge #1 Solved:</strong></p>
<p>To resolve this issue, we devised a simple yet effective solution using a shell script. We created a custom script called <code>connect_to_rds.sh</code> to connect to the RDS database. Inside this script, we extracted the RDS endpoint provided by AWS, removed the port, and used the modified endpoint for the database connection. This quick workaround allowed us to use the RDS endpoint without specifying the port, aligning with Terraform's expectations.</p>
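<p>The core of that workaround can be sketched in a few lines of POSIX shell. This is an illustrative reconstruction, not the original <code>connect_to_rds.sh</code>; the endpoint value and variable names are made up for the example:</p>

```shell
# Terraform's aws_db_instance "endpoint" attribute is returned as "host:port",
# but the mysql client expects the bare hostname after -h.
RDS_ENDPOINT="mydb.abc123.us-east-2.rds.amazonaws.com:3306"  # illustrative value
RDS_HOST="${RDS_ENDPOINT%%:*}"  # drop ":3306" (everything from the first colon on)
echo "$RDS_HOST"
# mysql -h "$RDS_HOST" -u "$DB_USER" -p   # then connect with the cleaned host
```

<p>In the real script, the endpoint would come from a Terraform output or template variable rather than a hard-coded literal.</p>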
<p>🚧 <strong>Challenge #2: Uploading the Initiation File</strong></p>
<p>The second challenge we encountered was the need to upload the <code>rds-db.sql</code> file to the EC2 instance. Manually copying files to the remote server can be a hassle, and Terraform encourages automation. We needed a solution to automate this file transfer.</p>
<p>🛠️ <strong>Challenge #2 Solved:</strong></p>
<p>Terraform offers a versatile provisioner called the "file" provisioner, which allows you to copy files to remote machines. By leveraging this provisioner, we were able to automate the upload of the <code>rds-db.sql</code> file to the EC2 instance with ease. This eliminated the need for manual intervention and ensured that the database initialization process was entirely automated.</p>
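<p>For readers new to provisioners, a minimal sketch of such a block is shown below. This is an assumption-laden illustration (the resource name, paths, key file, and connection details are invented), not the project's actual configuration:</p>

```hcl
# Hypothetical excerpt: copy the SQL dump to the EC2 instance at apply time.
resource "aws_instance" "db_client" {
  # ... AMI, instance type, and networking omitted ...

  provisioner "file" {
    source      = "rds-db.sql"       # file on the machine running terraform
    destination = "/tmp/rds-db.sql"  # path on the EC2 instance
  }

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("your-key.pem")
    host        = self.public_ip
  }
}
```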
<p><strong>Conclusion:</strong></p>
<p>Facing and overcoming challenges is an integral part of any technical project. In the case of deploying automated data dump imports to an AWS RDS MySQL database, understanding the intricacies of Terraform, AWS, and their interactions played a crucial role in our success.</p>
<p>By sharing these challenges and their solutions, we hope to assist others who may encounter similar obstacles during their automation journey. With the right tools, creative thinking, and a bit of determination, you can automate your database management processes and enjoy the benefits of efficiency and reliability.</p>
<p>Follow My Blog:</p>
<p><strong>🚀</strong> <a target="_blank" href="https://dev.to/prafulpatel16/automated-import-of-data-dump-and-initialization-of-an-aws-rds-database-2d0m"><strong>Automated Import of Data Dump and Initialization of an AWS RDS Database</strong></a></p>
<p>Happy automating! 🚀</p>
<p><strong>Let's Stay Connected:</strong></p>
<p>🌐 <strong>Website:</strong> <a target="_blank" href="https://www.praful.cloud">Visit my website</a> for the latest updates and articles.</p>
<p>💼 <strong>LinkedIn:</strong> Connect with me on <a target="_blank" href="https://linkedin.com/in/prafulpatel16">LinkedIn</a> for professional networking and insights.</p>
<p>📎 <strong>GitHub:</strong> Check out my projects and repositories on <a target="_blank" href="https://github.com/prafulpatel16/prafulpatel16">GitHub</a>.</p>
<p>🎥 <strong>YouTube:</strong> Subscribe to my <a target="_blank" href="https://www.youtube.com/@prafulpatel16">YouTube channel</a> for tech tutorials and more.</p>
<p>📝 <strong>Medium:</strong> Find my tech articles on <a target="_blank" href="https://medium.com/@prafulpatel16">Medium</a>.</p>
<p>📰 <a target="_blank" href="http://Dev.to"><strong>Dev.to</strong></a><strong>:</strong> Explore my developer-focused content on <a target="_blank" href="http://Dev.to">Dev.to</a>.</p>
<p>Let's connect and stay updated with the latest in technology and development! 🚀🔗</p>
<p>#AWS #CloudEngineering #CloudComputing #AmazonWebServices #AWSArchitecture #DevOps #CloudSolutions #CloudSecurity #InfrastructureAsCode #AWSCertification #Serverless #AWSCommunity #TechBlogs #CloudExperts #CloudMigration #CloudOps #AWSJobs #TechIndustry #CareerInTech #InnovationInCloud #devops #cloudengineerjobs #devopsjobs #azure #gcp #oci #cloudjobs</p>
]]></content:encoded></item><item><title><![CDATA[🌐AWS - AWS Networking & Content Delivery - VPC'S]]></title><description><![CDATA[Amazon VPC
🌐 AWS VPC is like a shielded fortress within the AWS network, providing you with a virtually isolated private network. It's as if you've brought your secure data center to the cloud! 🏰💻🌐

While creating a VPC following options need to ...]]></description><link>https://praful.cloud/aws-aws-networking-content-delivery-vpcs</link><guid isPermaLink="true">https://praful.cloud/aws-aws-networking-content-delivery-vpcs</guid><category><![CDATA[AWS]]></category><category><![CDATA[Cloud Computing]]></category><category><![CDATA[CloudEngineer]]></category><category><![CDATA[Devops]]></category><dc:creator><![CDATA[Praful Patel]]></dc:creator><pubDate>Thu, 19 Oct 2023 04:44:36 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1697732774547/65fb4032-fda8-434e-babe-754555258877.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Amazon VPC</p>
<p>🌐 AWS VPC is like a shielded fortress within the AWS network, providing you with a virtually isolated private network. It's as if you've brought your secure data center to the cloud! 🏰💻🌐</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697682971933/83f681ff-7b8b-4e92-9312-18361f7d14b7.png" alt class="image--center mx-auto" /></p>
<p>While creating a VPC, the following options need to be configured.</p>
<ol>
<li><p><strong>Availability Zones</strong> 🏙️:</p>
<ul>
<li>These are distinct data centers within a region, providing redundancy and fault tolerance. Think of them as the building blocks of high availability.</li>
</ul>
</li>
<li><p><strong>CIDR Blocks</strong> 📊:</p>
<ul>
<li>Classless Inter-Domain Routing blocks are used to define the IP address range for your VPC, much like setting the size of your territory on a map.</li>
</ul>
</li>
<li><p><strong>DNS Options</strong> 🌐:</p>
<ul>
<li>Domain Name System (DNS) options allow you to configure how your VPC resolves domain names. It's like choosing your VPC's "language" for talking to the internet.</li>
</ul>
</li>
<li><p><strong>Internet Gateway</strong> 🌐:</p>
<ul>
<li>This is your VPC's portal to the internet. It helps traffic flow between your VPC and the worldwide web, acting as the gateway to the online world.</li>
</ul>
</li>
<li><p><strong>Name</strong> 📛:</p>
<ul>
<li>The name of your VPC is like a label on a folder, making it easier to identify and manage within your AWS account.</li>
</ul>
</li>
<li><p><strong>NAT Gateways</strong> ⚡:</p>
<ul>
<li>Network Address Translation (NAT) gateways are like interpreters that help your private resources communicate with the internet, enabling them to "speak the same language."</li>
</ul>
</li>
<li><p><strong>Route Tables</strong> 🗺️:</p>
<ul>
<li>Think of route tables as maps that dictate where network traffic should go. They define the pathways within your VPC.</li>
</ul>
</li>
<li><p><strong>Subnets</strong> 🏘️:</p>
<ul>
<li>Subnets are like neighborhoods within your VPC. They divide your VPC's IP address range into smaller chunks, each with its unique characteristics.</li>
</ul>
</li>
<li><p><strong>Tenancy</strong> 🏡:</p>
<ul>
<li>Tenancy options determine whether your instances run on shared hardware (like apartments in a building) or dedicated hardware (like your own house) within the AWS data center.</li>
</ul>
</li>
</ol>
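<p>As a quick sanity check on the CIDR sizing used later in this post, the host-bit arithmetic can be verified in any shell. (The 5-address reservation per subnet is AWS's documented behavior: network address, VPC router, DNS, one address reserved for future use, and broadcast.)</p>

```shell
# A /16 VPC leaves 32-16 = 16 host bits; a /24 subnet leaves 8.
VPC_ADDRS=$(( 1 << (32 - 16) ))
SUBNET_USABLE=$(( (1 << (32 - 24)) - 5 ))  # AWS reserves 5 addresses per subnet
echo "Addresses in a /16 VPC: $VPC_ADDRS"
echo "Usable addresses in a /24 subnet: $SUBNET_USABLE"
```
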
<p>Let's create and deploy the VPC and all the networking components in a real time hands-on way.</p>
<p>The solution comprises the following components:</p>
<ul>
<li><p>A VPC across two Availability Zones</p></li>
<li><p>Two public web subnets, two private app subnets, and two private DB subnets</p></li>
<li><p>An Internet Gateway attached to the VPC</p></li>
<li><p>A public route table routing internet traffic to the Internet Gateway</p></li>
<li><p>Two private route tables routing traffic internally within the VPC</p></li>
<li><p>A frontend Elastic Load Balancer that routes traffic to the Apache web servers</p></li>
<li><p>An Auto Scaling group that launches additional Apache web servers based on defined scaling policies. Each web server instance is based on a launch template, which defines the same configuration for each new web server.</p></li>
<li><p>A hosted zone in Amazon Route 53 with a domain name that routes to the frontend Elastic Load Balancer</p></li>
<li><p>An Auto Scaling group that launches additional Apache web application servers based on defined scaling policies. Each application server instance is based on a launch template, which defines the same configuration and software components for each new application server.</p></li>
<li><p>A MySQL Amazon Relational Database Service (Amazon RDS) Multi-AZ deployment to store the contact management and role access tables</p></li>
</ul>
<p>Here's the list of components:</p>
<ol>
<li><p>☁️ AWS Cloud</p>
</li>
<li><p>🌐 VPC</p>
<ul>
<li><p>🏘️ Subnets</p>
</li>
<li><p>🌐 Internet Gateway</p>
</li>
<li><p>⚙️ NAT Gateway</p>
</li>
<li><p>🗺️ Route Tables</p>
</li>
<li><p>🔒 Security Groups</p>
</li>
</ul>
</li>
<li><p>💻 EC2 Machine</p>
</li>
<li><p>🎯 Application Load Balancer</p>
</li>
<li><p>♻️ Auto Scaling</p>
</li>
<li><p>🚀 Launch Template</p>
</li>
<li><p>🎲 RDS Database - MySQL</p>
</li>
<li><p>🚪 MobaXterm SSH Client</p>
</li>
</ol>
<p>Here are the project implementation phases:</p>
<ol>
<li><p>🚀 Phase 1: Deploy networking infrastructure</p>
</li>
<li><p>📦 Phase 2: Deploy Launch Template</p>
</li>
<li><p>🎯 Phase 3: Create elastic load balancer, auto scaling group, target group</p>
</li>
<li><p>🌐 Phase 4: Verify that the web application is accessible</p>
</li>
<li><p>🔄 Phase 5: Test horizontal scaling and high availability of the web application</p>
</li>
<li><p>🎲 Phase 6: Deploy RDS DB managed MYSQL instance</p>
</li>
</ol>
<p>This sequence provides a clear and visually engaging overview of your project's implementation phases.</p>
<h3 id="heading-aws-solution-architecture">AWS Solution Architecture:</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697690191744/ec8f36a4-3d9b-42a6-b5ba-0f1c3590867b.png" alt class="image--center mx-auto" /></p>
<p><strong>Phase 1: Deploy networking components</strong> 🌐</p>
<ol>
<li><p>Create VPC 🏞️</p>
<ul>
<li><p>Name: prafect-vpc 📛</p>
</li>
<li><p>CIDR: 10.0.0.0/16 📊</p>
</li>
</ul>
</li>
<li><p>Create web Subnets 🏘️</p>
<ul>
<li><p>Name: web-public01 📛</p>
<ul>
<li><p>Availability zone: us-east-2a 🏙️</p>
</li>
<li><p>CIDR: 10.0.1.0/24 📊</p>
</li>
</ul>
</li>
<li><p>Name: web-public02 📛</p>
<ul>
<li><p>Availability zone: us-east-2b 🏙️</p>
</li>
<li><p>CIDR: 10.0.2.0/24 📊</p>
</li>
</ul>
</li>
</ul>
</li>
<li><p>Create app Subnets 🏘️</p>
<ul>
<li><p>Name: app-private01 📛</p>
<ul>
<li><p>Availability zone: us-east-2a 🏙️</p>
</li>
<li><p>CIDR: 10.0.3.0/24 📊</p>
</li>
</ul>
</li>
<li><p>Name: app-private02 📛</p>
<ul>
<li><p>Availability zone: us-east-2b 🏙️</p>
</li>
<li><p>CIDR: 10.0.4.0/24 📊</p>
</li>
</ul>
</li>
</ul>
</li>
<li><p>Create DB Subnets 🏘️</p>
<ul>
<li><p>Name: db-private01 📛</p>
<ul>
<li><p>Availability zone: us-east-2a 🏙️</p>
</li>
<li><p>CIDR: 10.0.5.0/24 📊</p>
</li>
</ul>
</li>
<li><p>Name: db-private02 📛</p>
<ul>
<li><p>Availability zone: us-east-2b 🏙️</p>
</li>
<li><p>CIDR: 10.0.6.0/24 📊</p>
</li>
</ul>
</li>
</ul>
</li>
<li><p>Create Internet Gateway 🌐</p>
<ul>
<li><p>Name: web-igw 📛</p>
</li>
<li><p>Attach to VPC: prafect-vpc 🌐</p>
</li>
</ul>
</li>
<li><p>Create NAT Gateway ⚡</p>
<ul>
<li><p>Name: prafect-NAT 📛</p>
</li>
<li><p>Subnet: web-public01 🏘️</p>
</li>
<li><p>Connectivity: Public 🌐</p>
</li>
<li><p>Elastic IP: Allocate Elastic IP 📶</p>
</li>
</ul>
</li>
<li><p>Create Route table – web-RT 🗺️</p>
<ul>
<li><p>Name: Web-RT 📛</p>
</li>
<li><p>Select the VPC: prafect-vpc 🌐</p>
</li>
<li><p>Subnet Associations 🏘️</p>
<ul>
<li><p>Select – web-public01 🏘️</p>
</li>
<li><p>Select – web-public02 🏘️</p>
</li>
</ul>
</li>
<li><p>Routes – Add internet gateway as a route from 0.0.0.0/0 🛣️</p>
<ul>
<li><p>Destination: 0.0.0.0/0 🗺️</p>
</li>
<li><p>Target: Select internet gateway: web-igw 🌐</p>
</li>
</ul>
</li>
</ul>
</li>
<li><p>Create Route table – App-RT 🗺️</p>
<ul>
<li><p>Name: App-RT 📛</p>
</li>
<li><p>Select the VPC: prafect-vpc 🌐</p>
</li>
<li><p>Subnet Associations 🏘️</p>
<ul>
<li><p>Select – app-private01 🏘️</p>
</li>
<li><p>Select – app-private02 🏘️</p>
</li>
</ul>
</li>
<li><p>Routes – Add NAT 0.0.0.0/0 🛣️</p>
<ul>
<li><p>Destination: 0.0.0.0/0 🗺️</p>
</li>
<li><p>Target: Select NAT gateway ⚡</p>
</li>
</ul>
</li>
</ul>
</li>
<li><p>Create Security Groups 🛡️</p>
<ul>
<li><p>Create one security group for web traffic 📛</p>
<ul>
<li><p>Name: web-SG 📛</p>
</li>
<li><p>VPC: prafect-vpc 🌐</p>
</li>
<li><p>Inbound rule 1 📊</p>
<ul>
<li><p>Type: HTTP 🌐</p>
</li>
<li><p>Protocol: TCP 🌐</p>
</li>
<li><p>Port Range: 80 📶</p>
</li>
<li><p>Source: 0.0.0.0/0 🌍</p>
</li>
</ul>
</li>
</ul>
</li>
<li><p>Create the second security group for App traffic 📛</p>
<ul>
<li><p>Name: app-SG 📛</p>
</li>
<li><p>VPC: prafect-vpc 🌐</p>
</li>
<li><p>Inbound rule 1 📊</p>
<ul>
<li><p>Type: HTTP 🌐</p>
</li>
<li><p>Protocol: TCP 🌐</p>
</li>
<li><p>Port Range: 80 📶</p>
</li>
<li><p>Source: web-SG 🌍</p>
</li>
</ul>
</li>
<li><p>Inbound rule 2 📊</p>
<ul>
<li><p>Type: MYSQL/Aurora 🌐</p>
</li>
<li><p>Protocol: TCP 🌐</p>
</li>
<li><p>Port Range: 3306 📶</p>
</li>
<li><p>Source: db-SG 🌍</p>
</li>
</ul>
</li>
<li><p>Inbound rule 3 📊</p>
<ul>
<li><p>Type: SSH (if an admin needs to access the app instance) 🌐</p>
</li>
<li><p>Protocol: TCP 🌐</p>
</li>
<li><p>Port Range: 22 📶</p>
</li>
<li><p>Source: My IP 🌍</p>
</li>
</ul>
</li>
</ul>
</li>
<li><p>Create the third security group for db traffic 📛</p>
<ul>
<li><p>Name: db-SG 📛</p>
</li>
<li><p>VPC: prafect-vpc 🌐</p>
</li>
<li><p>Inbound rule 1 📊</p>
<ul>
<li><p>Type: ALL TCP 🌐</p>
</li>
<li><p>Protocol: TCP 🌐</p>
</li>
<li><p>Port Range: 0-65535 📶</p>
</li>
<li><p>Source: app-SG 🌍</p>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
</ol>
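<p>The six subnet CIDRs above follow a simple pattern: consecutive /24 blocks carved out of the 10.0.0.0/16 VPC range. As a sketch, they can be generated mechanically, which is handy if you later script this layout (the names match the plan above):</p>

```shell
# Emit "name CIDR" pairs for the six /24 subnets inside 10.0.0.0/16.
VPC_PREFIX="10.0"
i=1
for name in web-public01 web-public02 app-private01 app-private02 db-private01 db-private02; do
  echo "$name ${VPC_PREFIX}.${i}.0/24"
  i=$((i + 1))
done
```
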
<p><strong>Phase 2: Deploy Launch Template</strong> 🚀</p>
<ol>
<li><p>Create Launch Template 📦</p>
</li>
</ol>
<p><strong>Phase 3: Deploy Target Group</strong> 🎯</p>
<ol>
<li><p>Choose target group: instances 📦</p>
</li>
<li><p>Target Group name: app-TG 🎯</p>
<ul>
<li><p>Protocol: TCP 🌐</p>
</li>
<li><p>Port: 80 📶</p>
</li>
<li><p>VPC: prafect-vpc 🌐</p>
</li>
</ul>
</li>
<li><p>Health checks 🩺</p>
<ul>
<li>Health check protocol: HTTP 🌐</li>
</ul>
</li>
<li><p>Advanced health check 🩺</p>
<ul>
<li><p>Port: Traffic port 📶</p>
</li>
<li><p>Healthy threshold: 3 📈</p>
</li>
<li><p>Unhealthy threshold: 3 📉</p>
</li>
<li><p>Timeout: 4 ⏱️</p>
</li>
<li><p>Interval: 10 seconds</p>
</li>
</ul>
</li>
</ol>
<p>🚀 <strong>Phase 4: Deploy Application Load Balancer</strong></p>
<ol>
<li><p>Create Launch Template</p>
</li>
<li><p>Create Application Load Balancer</p>
</li>
</ol>
<ul>
<li><p>Name: web-ALB</p>
</li>
<li><p>Scheme: internet-facing</p>
</li>
<li><p>IP address: IPv4</p>
</li>
<li><p>Network Mapping:</p>
<ul>
<li><p>Select VPC: prafect-vpc</p>
</li>
<li><p>Mappings: Select: us-east-2a, us-east-2b</p>
</li>
<li><p>Security Groups: Select: web-ALB-SG</p>
</li>
<li><p>Listener:</p>
<ul>
<li><p>HTTP:80</p>
</li>
<li><p>Default action: Target Group</p>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<p>🚀 <strong>Phase 5: Deploy Auto Scaling Group</strong></p>
<ol>
<li><p>Name: web-ASG</p>
</li>
<li><p>Launch Template: web-template</p>
</li>
<li><p>Network:</p>
</li>
</ol>
<ul>
<li><p>VPC: prafect-vpc</p>
</li>
<li><p>Availability Zones: us-east-2a, us-east-2b</p>
</li>
<li><p>Load Balancing: Attach to an existing load balancer</p>
</li>
<li><p>Choose Target Group: web-TG</p>
</li>
<li><p>Health Check: ELB: 300 seconds</p>
</li>
<li><p>Group Size:</p>
<ul>
<li><p>Units</p>
</li>
<li><p>Desired Capacity: 2</p>
</li>
<li><p>Minimum Capacity: 2</p>
</li>
<li><p>Maximum Capacity: 4</p>
</li>
</ul>
</li>
<li><p>Scaling Policies:</p>
<ul>
<li><p>Name: Target Tracking Policy</p>
</li>
<li><p>Metric Type: Average CPU Utilization</p>
</li>
<li><p>Target Value: 50</p>
</li>
<li><p>Warm-up: 300 seconds</p>
</li>
</ul>
</li>
</ul>
<p>🚀 <strong>Phase 6: Verify that web application is accessible</strong></p>
<ol>
<li><p>Go to Application Load Balancer (ALB)</p>
</li>
<li><p>Access the ALB DNS and access the web application</p>
</li>
</ol>
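<p>From a terminal, the check can be sketched as follows. The DNS name below is a placeholder, not a real load balancer; copy the actual value from the ALB's details page in the EC2 console:</p>

```shell
# Hypothetical check of the web tier through the load balancer.
ALB_DNS="web-ALB-1234567890.us-east-2.elb.amazonaws.com"  # placeholder value
echo "http://${ALB_DNS}/"
# Uncomment once the ALB exists; expect HTTP 200 from healthy targets:
# curl -s -o /dev/null -w "%{http_code}\n" "http://${ALB_DNS}/"
```
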
<p>🚀 <strong>Phase 7: Deploy RDS DB Managed MySQL Instance</strong></p>
<ul>
<li><p>Create DB Instance Group</p>
<ul>
<li><p>Go to Subnet Groups</p>
</li>
<li><p>Create DB Subnet Group</p>
<ul>
<li><p>Name: db-subnetgroup</p>
</li>
<li><p>VPC: prafect-vpc</p>
</li>
<li><p>Add Subnets:</p>
<ul>
<li><p>Availability Zones: us-east-2a, us-east-2b</p>
</li>
<li><p>Subnets: db-private01, db-private02</p>
</li>
</ul>
</li>
<li><p>Create</p>
</li>
</ul>
</li>
</ul>
</li>
<li><p>Create DB Instance – MySQL</p>
<ul>
<li><p>Create Database</p>
</li>
<li><p>Standard Create</p>
</li>
<li><p>Engine Options: MySQL</p>
</li>
<li><p>Engine Version: 5.7.39</p>
</li>
<li><p>Template: Dev/Test</p>
</li>
<li><p>Availability: Single DB Instance</p>
</li>
<li><p>Settings:</p>
<ul>
<li><p>DB instance identifier: mysql</p>
</li>
<li><p>Credentials:</p>
<ul>
<li><p>Master Username: admin</p>
</li>
<li><p>Password: Passw0rd!</p>
</li>
</ul>
</li>
</ul>
</li>
<li><p>Connectivity:</p>
<ul>
<li><p>VPC: prafect-vpc</p>
</li>
<li><p>DB Subnet Group: db-subnetgroup</p>
</li>
<li><p>Public access: No</p>
</li>
<li><p>Existing Security Group: db-SG</p>
</li>
</ul>
</li>
<li><p>Database Authentication: Password Authentication</p>
</li>
</ul>
</li>
</ul>
<p>📋 <strong>Prerequisites for the AWS Project:</strong></p>
<ol>
<li><p><strong>AWS Free Tier</strong> 🆓</p>
</li>
<li><p><strong>Web Application Source Code</strong> 🌐</p>
</li>
<li><p><strong>Web Server Installation Script File</strong> 📜</p>
</li>
<li><p><strong>SSH Client</strong> 🔑</p>
</li>
</ol>
<p>🚀 <strong>Taking Action on the Implementation:</strong></p>
<p><strong>Phase 1: Deploy networking components</strong> 🌐</p>
<ol>
<li><p>Create VPC 🏞️</p>
<ul>
<li><p>Name: prafect-vpc 📛</p>
</li>
<li><p>CIDR: 10.0.0.0/16 📊</p>
</li>
</ul>
</li>
</ol>
<p>    <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697686040503/bb31e5f3-53ab-486f-86b7-040038066069.png" alt class="image--center mx-auto" /></p>
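<p>The console step above can also be scripted. Below is a minimal AWS CLI sketch (not part of the original console walkthrough); it assumes the CLI is already configured for the us-east-2 region:</p>

```shell
# Create the VPC with the 10.0.0.0/16 CIDR and tag it prafect-vpc
aws ec2 create-vpc \
  --cidr-block 10.0.0.0/16 \
  --tag-specifications 'ResourceType=vpc,Tags=[{Key=Name,Value=prafect-vpc}]'
```

The command returns the new VPC ID, which the later steps reference.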
<ol>
<li><p>Create web Subnets 🏘️</p>
<ul>
<li><p>Name: web-public01 📛</p>
<ul>
<li><p>Availability zone: us-east-2a 🏙️</p>
</li>
<li><p>CIDR: 10.0.1.0/24 📊</p>
</li>
</ul>
</li>
<li><p>Name: web-public02 📛</p>
<ul>
<li><p>Availability zone: us-east-2b 🏙️</p>
</li>
<li><p>CIDR: 10.0.2.0/24 📊</p>
</li>
</ul>
</li>
</ul>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697686149700/c2566a24-bfd7-40c4-9a80-07c1d8af3368.png" alt class="image--center mx-auto" /></p>
<ol>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697686168178/03688ca2-0853-4db4-859a-abcc83a3bf8c.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Create app Subnets 🏘️</p>
<ul>
<li><p>Name: app-private01 📛</p>
<ul>
<li><p>Availability zone: us-east-2a 🏙️</p>
</li>
<li><p>CIDR: 10.0.3.0/24 📊</p>
</li>
</ul>
</li>
<li><p>Name: app-private02 📛</p>
<ul>
<li><p>Availability zone: us-east-2b 🏙️</p>
</li>
<li><p>CIDR: 10.0.4.0/24 📊</p>
</li>
</ul>
</li>
</ul>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697686405981/2984797a-49be-4554-bd8e-cdf0b1776809.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697686420583/9789576d-ff27-4997-9408-fe9855ad7c3d.png" alt class="image--center mx-auto" /></p>
<ol>
<li><p>Create DB Subnets 🏘️</p>
<ul>
<li><p>Name: db-private01 📛</p>
<ul>
<li><p>Availability zone: us-east-2a 🏙️</p>
</li>
<li><p>CIDR: 10.0.5.0/24 📊</p>
</li>
</ul>
</li>
<li><p>Name: db-private02 📛</p>
<ul>
<li><p>Availability zone: us-east-2b 🏙️</p>
</li>
<li><p>CIDR: 10.0.6.0/24 📊</p>
</li>
</ul>
</li>
</ul>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697686520025/1ec5f6c5-fdbb-4213-a10d-dd4de6f2472e.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697686532979/5817bec0-4228-49cf-abb4-15ad245d23c7.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697686548918/7107cea5-fe00-4640-9ea5-6cf5d2c40617.png" alt class="image--center mx-auto" /></p>
<ol>
<li><p>Create Internet Gateway 🌐</p>
<ul>
<li><p>Name: web-igw 📛</p>
</li>
<li><p>Attach to VPC: prafect-vpc 🌐</p>
</li>
</ul>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697686591633/868baaea-222b-4312-9f96-58d632c9b44b.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697686610559/39124f3e-7b63-44b3-a2be-df0da1e6e78d.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697686628394/63c8c994-c5d9-4c97-8185-ab8332d93d74.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697686642682/0ee6d0cd-314d-4561-9061-c5f7ce9bd7f6.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697686656251/97fb3fdf-3213-4822-94b5-daf9de37e76b.png" alt class="image--center mx-auto" /></p>
<ol>
<li><p>Create Route table – Web-RT 🗺️</p>
<ul>
<li><p>Name: Web-RT 📛</p>
</li>
<li><p>Select the VPC: prafect-vpc 🌐</p>
</li>
<li><p>Subnet Associations 🏘️</p>
<ul>
<li><p>Select – web-public01 🏘️</p>
</li>
<li><p>Select – web-public02 🏘️</p>
</li>
</ul>
</li>
<li><p>Routes – Add Internet Gateway 0.0.0.0/0 🛣️</p>
<ul>
<li><p>Destination: 0.0.0.0/0 🗺️</p>
</li>
<li><p>Target: Select Internet gateway ⚡</p>
</li>
</ul>
</li>
</ul>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697686706227/481a1be7-a4b0-47e8-a63e-57f12b2c03d6.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697686721049/da7cd313-003e-4a70-b818-36822b23f4c2.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697686742952/a1a0c7dd-38ae-41d1-a538-b24449b519d6.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697686757873/6441e36e-676f-4fbe-84ae-bbf717326d8b.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697686774467/eca0de1c-9418-4fbe-a931-3d381e6a44be.png" alt class="image--center mx-auto" /></p>
<p>Add a route to the internet gateway</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697686818592/1bb5bef5-5b20-4128-9ae4-64b0270a755a.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697686829731/3bab316c-0522-4ed1-9422-70355c56904e.png" alt class="image--center mx-auto" /></p>
<p>Destination: 0.0.0.0/0, Target: Internet gateway.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697687183897/8897c0bb-a86c-4906-be65-d24cda803e55.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697687210528/0e8f11e6-bf21-47ec-ac9d-3b24b832c8ce.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697687345698/908c1622-03c4-46dd-9a03-c756ec3913f6.png" alt class="image--center mx-auto" /></p>
<ol>
<li><p>Create Route table – App-RT 🗺️</p>
<ul>
<li><p>Name: App-RT 📛</p>
</li>
<li><p>Select the VPC: prafect-vpc 🌐</p>
</li>
<li><p>Subnet Associations 🏘️</p>
<ul>
<li><p>Select – app-private01 🏘️</p>
</li>
<li><p>Select – app-private02 🏘️</p>
</li>
</ul>
</li>
<li><p>Routes – Add NAT 0.0.0.0/0 🛣️</p>
<ul>
<li><p>Destination: 0.0.0.0/0 🗺️</p>
</li>
<li><p>Target: Select NAT gateway ⚡</p>
</li>
</ul>
</li>
</ul>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697687408100/bd44dab5-aad3-4a2b-8810-d254add723b3.png" alt class="image--center mx-auto" /></p>
<p>Add subnet associations</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697687437389/c3d6ed42-721d-438f-9220-8f7631027dfd.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697687455027/b62ce496-923b-496f-8936-c79b753398cf.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697687479273/6fe48657-e311-491a-b547-5241d1f80e33.png" alt class="image--center mx-auto" /></p>
<ol>
<li><p>Create Security Groups 🛡️</p>
<ul>
<li><p>Create one security group for web traffic 📛</p>
<ul>
<li><p>Name: web-SG 📛</p>
</li>
<li><p>VPC: prafect-vpc 🌐</p>
</li>
<li><p>Inbound rule 1 📊</p>
<ul>
<li><p>Type: HTTP 🌐</p>
</li>
<li><p>Protocol: TCP 🌐</p>
</li>
<li><p>Port Range: 80 📶</p>
</li>
<li><p>Source: 0.0.0.0/0 🌍</p>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
</ol>
<p>    <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697687581268/799bbb73-4556-4031-8502-8ab83d0b5f13.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p>Create the second security group for App traffic 📛</p>
<ul>
<li><p>Name: app-SG 📛</p>
</li>
<li><p>VPC: prafect-vpc 🌐</p>
</li>
<li><p>Inbound rule 1 📊</p>
<ul>
<li><p>Type: HTTP 🌐</p>
</li>
<li><p>Protocol: TCP 🌐</p>
</li>
<li><p>Port Range: 80 📶</p>
</li>
<li><p>Source: web-SG 🌍</p>
</li>
</ul>
</li>
<li><p>Inbound rule 2 📊</p>
<ul>
<li><p>Type: MYSQL/Aurora 🌐</p>
</li>
<li><p>Protocol: TCP 🌐</p>
</li>
<li><p>Port Range: 3306 📶</p>
</li>
<li><p>Source: db-SG 🌍</p>
</li>
</ul>
</li>
<li><p>Inbound rule 3 📊</p>
<ul>
<li><p>Type: SSH (if need to access the app instance by admin) 🌐</p>
</li>
<li><p>Protocol: TCP 🌐</p>
</li>
<li><p>Port Range: 22 📶</p>
</li>
<li><p>Source: My IP 🌍</p>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<p>    <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697687616725/40568988-782b-4619-9d52-ddd67b54779f.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p>Create the third security group for db traffic 📛</p>
<ul>
<li><p>Name: db-SG 📛</p>
</li>
<li><p>VPC: prafect-vpc 🌐</p>
</li>
<li><p>Inbound rule 1 📊</p>
<ul>
<li><p>Type: ALL TCP 🌐</p>
</li>
<li><p>Protocol: TCP 🌐</p>
</li>
<li><p>Port Range: 0-65535 📶</p>
</li>
<li><p>Source: app-SG 🌍</p>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697687649010/86c0180e-b353-4360-bbfd-85a4eae87387.png" alt class="image--center mx-auto" /></p>
<p><strong>Phase 2: Deploy Launch Template</strong> 🚀</p>
<ol>
<li><p>Create Launch Template 📦</p>
<ul>
<li><p>Name: web-template 📛</p>
</li>
<li><p>AMI: a Linux AMI (e.g. Ubuntu Server) 💿</p>
</li>
<li><p>Instance type: t2.micro (Free Tier) 💻</p>
</li>
<li><p>Key pair: select your SSH key pair 🔑</p>
</li>
<li><p>Security Group: web-SG 🛡️</p>
</li>
<li><p>User data: the web server installation script from the prerequisites 📜</p>
</li>
</ul>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697687725569/b35fa878-4ad2-4e15-b59f-13b234821e8f.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697687788122/a8c8c984-6f94-4964-af33-e0a689d7c590.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697687804018/682ee33e-e5ad-462f-bd2d-9afca437f975.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697687820578/0d533daf-b1d1-4dfd-a7bc-10a0ece35f69.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697687838920/9598c187-fd87-45e3-9a7f-24ad45899224.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697687869770/bb8a90db-1958-4141-af12-698747c8a742.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697687884367/d1c48cea-eb6e-486c-bb36-54714d7c59d8.png" alt class="image--center mx-auto" /></p>
<p><strong>Phase 3: Deploy Target Group</strong> 🎯</p>
<ol>
<li><p>Choose target type: Instances 📦</p>
</li>
<li><p>Target Group name: app-TG 🎯</p>
<ul>
<li><p>Protocol: HTTP 🌐</p>
</li>
<li><p>Port: 80 📶</p>
</li>
<li><p>VPC: prafect-vpc 🌐</p>
</li>
</ul>
</li>
<li><p>Health checks 🩺</p>
<ul>
<li>Health check protocol: HTTP 🌐</li>
</ul>
</li>
<li><p>Advanced health check 🩺</p>
<ul>
<li><p>Port: Traffic port 📶</p>
</li>
<li><p>Healthy threshold: 3 📈</p>
</li>
<li><p>Unhealthy threshold: 3 📉</p>
</li>
<li><p>Timeout: 4 ⏱️</p>
</li>
<li><p>Interval: 10 seconds</p>
</li>
</ul>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697687960395/ef29a695-8097-45d7-8213-f1ed8e4bff18.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697687978502/34c0cf6a-114c-4584-9433-aa5ef26e236f.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697687994201/8ddb4d01-45a4-4afa-93eb-98187a1cb373.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697688010053/4a95e580-d315-4268-89a0-0e886cea5d1f.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697688027885/0c2bf287-d035-4fef-be37-78766336f33a.png" alt class="image--center mx-auto" /></p>
<p>🚀 <strong>Phase 4: Deploy Application Load Balancer</strong></p>
<ol>
<li><p>Create Application Load Balancer</p>
</li>
</ol>
<ul>
<li><p>Name: web-ALB</p>
</li>
<li><p>Scheme: internet-facing</p>
</li>
<li><p>IP address: IPv4</p>
</li>
<li><p>Network Mapping:</p>
<ul>
<li><p>Select VPC: prafect-vpc</p>
</li>
<li><p>Mappings: Select: us-east-2a, us-east-2b</p>
</li>
<li><p>Security Groups: Select: web-ALB-SG</p>
</li>
<li><p>Listener:</p>
<ul>
<li><p>HTTP:80</p>
</li>
<li><p>Default action: Target Group</p>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697688096030/bb146873-1d73-46ad-b17f-ed7b37fdb59e.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697688111581/ea4d6d08-cae8-4cf3-b484-9df0e3acf4f6.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697688127992/f629c4d8-83f8-464b-82a4-2167b2a4814b.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697688143134/21d7fe32-73b4-40c1-b44f-981491a85daa.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697688161461/ddf68c7d-b027-4eb3-8405-3a4f6dfffbbf.png" alt class="image--center mx-auto" /></p>
<p>Load balancer created successfully.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697688187020/8f2897e6-4625-4340-be14-bcaa7b5aaf84.png" alt class="image--center mx-auto" /></p>
<p>🔍 <strong>Verify ALB URL Accessibility</strong>:</p>
<ol>
<li><p>📋 Copy the ALB DNS:</p>
<ul>
<li>ALB DNS: <a target="_blank" href="http://prafect-ALB-784003759.us-east-2.elb.amazonaws.com">prafect-ALB-784003759.us-east-2.elb.amazonaws.com</a></li>
</ul>
</li>
<li><p>🌐 Open your preferred browser 🌟.</p>
</li>
<li><p>🌐 Paste the ALB DNS into the browser's address bar and hit Enter ⏎.</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697688249875/0c281c86-d4d4-40c8-b2eb-02779bf40dfa.png" alt class="image--center mx-auto" /></p>
<p>Go to Target Group</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697688304608/25d3a4e2-5b52-420f-b7d1-3a6982fdc0a3.png" alt class="image--center mx-auto" /></p>
<p>🚀 <strong>Phase 5: Deploy Auto Scaling Group</strong></p>
<ol>
<li><p>Name: web-ASG</p>
</li>
<li><p>Launch Template: web-template</p>
</li>
<li><p>Network:</p>
</li>
</ol>
<ul>
<li><p>VPC: prafect-vpc</p>
</li>
<li><p>Availability Zones: us-east-2a, us-east-2b</p>
</li>
<li><p>Load Balancing: Attach to an existing load balancer</p>
</li>
<li><p>Choose Target Group: web-TG</p>
</li>
<li><p>Health Check: ELB; health check grace period: 300 seconds</p>
</li>
<li><p>Group Size:</p>
<ul>
<li><p>Units</p>
</li>
<li><p>Desired Capacity: 2</p>
</li>
<li><p>Minimum Capacity: 2</p>
</li>
<li><p>Maximum Capacity: 4</p>
</li>
</ul>
</li>
<li><p>Scaling Policies:</p>
<ul>
<li><p>Name: Target Tracking Policy</p>
</li>
<li><p>Metric Type: Average CPU Utilization</p>
</li>
<li><p>Target Value: 50</p>
</li>
<li><p>Warm-up: 300 seconds</p>
</li>
</ul>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697688338861/23cb39dd-b224-4f7b-8345-4a1695b9ac82.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697688352822/c95ceb4f-dc66-472d-b62c-35d6685256a1.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697688372463/b12ece8e-dada-48ac-96f9-08931b8792eb.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697688388133/3351d41e-c78d-4169-8743-3b858f76869e.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697688404010/50fbad5d-43cd-4221-a655-bb330fb2cb73.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697688417394/399284db-6caf-4a49-b78e-6dc11c0cfbc9.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697688432869/e387447e-d3c6-42bb-b6a1-f38c59e980a5.png" alt class="image--center mx-auto" /></p>
<p>Go to the ALB and copy the ALB DNS.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697688458873/4e4d75b0-8a55-41cb-995e-ce6ef7e85a42.png" alt class="image--center mx-auto" /></p>
<p><strong>Verify ALB URL Accessibility</strong>: copy the ALB DNS again, paste it into your browser's address bar, and confirm the application loads.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697688497888/59caaa09-94c8-4e65-bc4e-3e5d2c092cb6.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697688522503/e1d1de3e-8485-4ad4-b300-fc5d3e92058f.png" alt class="image--center mx-auto" /></p>
<p>Create NAT Gateway</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697688555574/5e335cd4-9031-4928-977e-b191752677bd.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697688568484/32c5e13a-3c77-4882-ab55-932608286fff.png" alt class="image--center mx-auto" /></p>
<p>Add a route to the application private route table (App-RT):</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697688613703/f6edbfce-f12e-43ec-998a-0f20727c5c64.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697688630912/87efd175-b255-4f4b-a876-e4d979915b98.png" alt class="image--center mx-auto" /></p>
<p>Edit Route – add the NAT gateway route. Destination: 0.0.0.0/0, Target: NAT Gateway.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697688662429/ec2f4bb5-00a0-48c8-8ee0-2848ebe144c0.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697688679316/73db61a9-55fb-4b4b-bc38-b182ee37e8cc.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697688700081/ad689f1e-2516-49e3-b23a-e0d42a15d51f.png" alt class="image--center mx-auto" /></p>
<p>🚀 <strong>Phase 6: Verify that web application is accessible</strong></p>
<ol>
<li><p>Go to Application Load Balancer (ALB)</p>
</li>
<li><p>Access the ALB DNS and access the web application</p>
</li>
</ol>
<p><strong>Verify ALB URL Accessibility</strong>:</p>
<ol>
<li><p>📋 Copy the ALB DNS:</p>
<ul>
<li>ALB DNS: <a target="_blank" href="http://prafect-ALB-784003759.us-east-2.elb.amazonaws.com">prafect-ALB-784003759.us-east-2.elb.amazonaws.com</a></li>
</ul>
</li>
<li><p>🌐 Open your preferred browser 🌟.</p>
</li>
<li><p>🌐 Paste the ALB DNS into the browser's address bar and hit Enter ⏎.</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697688763551/e217a3cd-c851-4da0-9974-6676bf082022.png" alt class="image--center mx-auto" /></p>
<p>📝 <strong>Important Note</strong>:</p>
<p>If your web application source code resides in the Private App Subnet, here's what you need for configuring SSH access and package downloads:</p>
<ol>
<li><p>To access the server over SSH, make sure to <strong>enable Public IP</strong> when creating the Launch Template 🌐. This allows secure administrative access to your instance.</p>
</li>
<li><p>For downloading packages and installing the web server in the Private App Subnet, you'll need the following configurations:</p>
<ul>
<li><p>🌐 <strong>NAT Gateway</strong>: Launch a NAT Gateway into the Web-Public Subnet. This enables instances in the Private App Subnet to access external resources.</p>
</li>
<li><p>📚 <strong>App-RT (Route Table)</strong>: Add a NAT Gateway route to the App-Route Table. This route allows instances in the Private App Subnet to use the NAT Gateway for internet-bound traffic.</p>
</li>
</ul>
</li>
</ol>
<p>🚀 <strong>Phase 7: Deploy RDS DB Managed MySQL Instance</strong></p>
<ul>
<li><p>Create DB Subnet Group</p>
<ul>
<li><p>Go to Subnet Groups</p>
</li>
<li><p>Create DB Subnet Group</p>
<ul>
<li><p>Name: db-subnetgroup</p>
</li>
<li><p>VPC: prafect-vpc</p>
</li>
<li><p>Add Subnets:</p>
<ul>
<li><p>Availability Zones: us-east-2a, us-east-2b</p>
</li>
<li><p>Subnets: db-private01, db-private02</p>
</li>
</ul>
</li>
<li><p>Create</p>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697688936960/68aacb06-7e95-4271-826e-e3f3d8c824c3.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697688950637/8192b2e2-d683-4d4b-a97a-0ffad50e81e1.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697688962921/f934a6ea-3770-45e6-b0bb-5dd0098580ea.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p>Create DB Instance – MySQL</p>
<ul>
<li><p>Create Database</p>
</li>
<li><p>Standard Create</p>
</li>
<li><p>Engine Options: MySQL</p>
</li>
<li><p>Engine Version: 5.7.39</p>
</li>
<li><p>Template: Dev/Test</p>
</li>
<li><p>Availability: Single DB Instance</p>
</li>
<li><p>Settings:</p>
<ul>
<li><p>DB instance identifier: mysql</p>
</li>
<li><p>Credentials:</p>
<ul>
<li><p>Master Username: admin</p>
</li>
<li><p>Password: Passw0rd!</p>
</li>
</ul>
</li>
</ul>
</li>
<li><p>Connectivity:</p>
<ul>
<li><p>VPC: prafect-vpc</p>
</li>
<li><p>DB Subnet Group: db-subnetgroup</p>
</li>
<li><p>Public access: No</p>
</li>
<li><p>Existing Security Group: db-SG</p>
</li>
</ul>
</li>
<li><p>Database Authentication: Password Authentication</p>
</li>
</ul>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697689005569/57d13437-6315-46cc-b185-37b41dee928a.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697689025405/b6045bbe-6986-4d21-9624-a0d45a2b3e71.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697689042764/a7210e23-62b5-49b9-8e83-4649155ea4ba.png" alt class="image--center mx-auto" /></p>
<p>DB instance name: mysql, User: admin, Password: Passw0rd!</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697689084856/996c5444-db0b-4608-94db-525b8b12ea51.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697689100495/1a411d4b-b44f-49f3-b41f-cbf20c67168b.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697689113953/f88598c1-6ca6-470a-8d49-4b0060966a5d.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697689127561/de4643b9-b1e0-4b9e-832d-ceb112fc6e3c.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697689141902/353ba6df-5d33-4ebf-a171-bb9a848ba66f.png" alt class="image--center mx-auto" /></p>
<p>MySQL DB instance created.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697689179503/09e354ca-e9a0-4ddb-9b39-42a160e7eefe.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697689194300/ef5e115e-3f87-4257-afe0-131274d5ac82.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697689225209/04c8ea12-4df7-4553-a55b-5c8e4a49c9c3.png" alt class="image--center mx-auto" /></p>
<p>New DB connection parameters – Server name: <a target="_blank" href="http://mysql1.cagenoemjwd5.us-east-2.rds.amazonaws.com">mysql1.cagenoemjwd5.us-east-2.rds.amazonaws.com</a>, Username: admin, Password: Passw0rd!, DB name: contacts. Update these values in the web source code file <code>db.php</code>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697689264934/73ac66f0-49b5-421d-9961-ffad90dc3189.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697689281655/c8c5bebe-791e-422a-b19f-c5eac1cca5f6.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697689300624/adbd7793-9a7a-49d0-95fe-98bde6a3637a.png" alt class="image--center mx-auto" /></p>
<p>Add rule – Type: MYSQL/Aurora, Protocol: TCP, Port range: 3306, Source: custom: db-SG.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697689335062/92ad11cf-1a58-40d3-b28a-54a9d1552eed.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697689348961/d6ba2c01-2728-4a67-b73f-76c3328885f5.png" alt class="image--center mx-auto" /></p>
<p>Access the RDS DB instance from one of the web servers (3.143.110.192). Install the MySQL client: <code>sudo apt-get install mysql-server mysql-client</code></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697689382215/e171fbb6-34a4-4e58-8707-2e54fab9a395.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697689403131/3a911378-7041-4cbb-b931-cf7a64206f36.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697689416706/3eebcf02-d279-40dc-b38d-93057b1e99ec.png" alt class="image--center mx-auto" /></p>
<p>Solution: go to web-SG and add My IP as the source for MYSQL/Aurora (port 3306).</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697689460682/5d876324-5576-4f5e-9764-e6f39083000e.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697689478345/9597971e-aa41-4309-80d0-b42e43a0ec3b.png" alt class="image--center mx-auto" /></p>
<p>Create a new database: <code>mysql&gt; create database contacts;</code> Then verify that the database ‘contacts’ was created.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697689512398/68ad89e4-8b6f-46cf-becb-10743a9c5a3e.png" alt class="image--center mx-auto" /></p>
<p>📊 <strong>Create Tables Inside 'Contacts' Database</strong> 📁</p>
<ol>
<li><p>💼 Access the 'Contacts' database:</p>
<ul>
<li><code>mysql&gt; use contacts;</code></li>
</ul>
</li>
<li><p>🛠️ Create the 'users' table with columns 'name,' 'email,' and 'subject':</p>
<ul>
<li><code>mysql&gt; create table users(name varchar(30), email varchar(30), subject varchar(30));</code></li>
</ul>
</li>
<li><p>✅ Verify the creation of the 'users' table:</p>
<ul>
<li><code>mysql&gt; show tables;</code></li>
</ul>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697689558200/662d1d1e-c057-40e2-98f2-d53940113b28.png" alt class="image--center mx-auto" /></p>
<p>Describe the table and check that the expected fields exist:</p>
<p><code>mysql&gt; describe users;</code></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697689594139/4eb9cce7-8290-423d-9271-1df2b829d46b.png" alt class="image--center mx-auto" /></p>
<p>Install the Telnet utility and check the DB connection:</p>
<p><code>sudo apt-get install telnet</code></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697689635292/08c22691-140c-4f6b-8d00-e12e9c6f7927.png" alt class="image--center mx-auto" /></p>
<p>Test the RDS DB connection from the web tier with Telnet: <code>telnet</code> <a target="_blank" href="http://mysql2021.cntikk0jg8xf.ca-central-1.rds.amazonaws.com">mysql2021.cntikk0jg8xf.ca-central-1.rds.amazonaws.com</a> 3306</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697689670704/d59d585c-6471-4a1a-9dca-f02551b2aeea.png" alt class="image--center mx-auto" /></p>
<p>Let's insert data into the database from the webpage.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697689726809/8b4d6c0a-e99b-4e5f-ba7c-a8864e8d92f4.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697689739396/eb1454c9-eb5e-447b-ad17-c3ceafb53c34.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697689752747/b393c1e3-7072-474a-8fa7-84025244ef6e.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697689739396/eb1454c9-eb5e-447b-ad17-c3ceafb53c34.png" alt class="image--center mx-auto" /></p>
<p>🔍 <strong>Verify Data from Backend Database</strong> 🛢️</p>
<p>To ensure that data has been successfully added from the web application, follow these steps:</p>
<ol>
<li><p>🏢 Access your backend database.</p>
</li>
<li><p>📊 Query the database to retrieve and verify the added data.</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697689903941/5f4d9d94-9fc8-4a42-8774-8cd46e309c35.png" alt class="image--center mx-auto" /></p>
<p>Congratulations! You have deployed a highly available three-tier web application on AWS. 🎉</p>
<p>🛡️ <strong>Improvement Tasks for Enhanced Security</strong>:</p>
<ol>
<li><p>🚪 <strong>Identity and Access Management (IAM)</strong>: Strengthen user access controls and authentication.</p>
</li>
<li><p>🧯 <strong>Firewalls (Web Application and Network)</strong>: Enhance security layers and implement DDoS protection.</p>
</li>
<li><p>🔐 <strong>Create &amp; Manage Cryptographic Keys</strong>: Safeguard sensitive data with encryption.</p>
</li>
<li><p>🤐 <strong>Manage Secrets, API Keys, Credentials</strong>: Securely handle and store sensitive information.</p>
</li>
<li><p>🛡️ <strong>Security Assessment for EC2 Instances</strong>: Regularly evaluate and fortify EC2 instance security.</p>
</li>
<li><p>🚨 <strong>Threat Detection</strong>: Implement systems to detect and respond to security threats.</p>
</li>
<li><p>🔔 <strong>Manage Security Alerts</strong>: Monitor and respond to security incidents.</p>
</li>
<li><p>🛡️ <strong>Configure Security Controls for Individual AWS Services</strong>: Tailor security measures for each AWS service.</p>
</li>
</ol>
<p>🚀 <strong>Improvement Tasks for Efficient Deployment</strong>:</p>
<ol>
<li><p>⚙️ <strong>Automate Provisioning</strong>: Streamline the deployment process for faster results.</p>
</li>
<li><p>🕵️ <strong>Observability of AWS Resources</strong>: Gain insights into resource performance and usage.</p>
</li>
<li><p>📊 <strong>Track User Actions &amp; API Usage on AWS</strong>: Monitor user activities and API utilization.</p>
</li>
<li><p>🛠️ <strong>Evaluate Configuration of AWS Resources</strong>: Ensure resource settings align with best practices.</p>
</li>
<li><p>📡 <strong>Centralize Operations</strong>:</p>
<ul>
<li><p>🤖 <strong>Automate Actions with Runbooks</strong>: Execute routine tasks efficiently.</p>
</li>
<li><p>🧰 <strong>Manage &amp; Patch Instances</strong>: Keep instances up-to-date and secure.</p>
</li>
<li><p>🕒 <strong>Schedule &amp; Govern Changes</strong>: Control and schedule updates and modifications.</p>
</li>
</ul>
</li>
</ol>
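<p>The "Automate Provisioning" task above can be sketched with CloudFormation. This is a minimal, hypothetical template (the logical name <code>ArtifactBucket</code> is illustrative), not a production setup:</p>

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: Hypothetical example - a versioned, encrypted S3 bucket as code.
Resources:
  ArtifactBucket:
    Type: AWS::S3::Bucket
    Properties:
      VersioningConfiguration:
        Status: Enabled
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: aws:kms
```

<p>Keeping resources in a template makes deployments repeatable and reviewable, which also supports the "Evaluate Configuration of AWS Resources" task.</p>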
<p>By implementing these tasks, you can bolster security and streamline deployment, ultimately enhancing the performance and resilience of your AWS infrastructure. 🌟🛠️🚀</p>
<p>#AWS #CloudEngineering #AmazonWebServices #CloudComputing #InfrastructureAsCode #Serverless #DevOps #AWSArchitecture #AWSBestPractices #SecurityInAWS #CostOptimization #AWSCertification #S3 #EC2 #Lambda #VPC #CloudFormation #IAM #CloudMigration #ElasticLoadBalancer</p>
]]></content:encoded></item><item><title><![CDATA[👁️‍🗨️ Kubernetes Monitoring 🚀]]></title><description><![CDATA[📊📈 Prometheus & Grafana 👁️‍🗨️

📈 Visualize Cluster Information in Dashboards:

Use Grafana to create dashboards that display essential cluster metrics, such as CPU and memory usage, node health, and pod status.


📚 Pull Custom Application Logs ...]]></description><link>https://praful.cloud/kubernetes-monitoring</link><guid isPermaLink="true">https://praful.cloud/kubernetes-monitoring</guid><category><![CDATA[#KubernetesMonitoring #PodConfiguration #AppMaintenance 🛠️]]></category><dc:creator><![CDATA[Praful Patel]]></dc:creator><pubDate>Wed, 18 Oct 2023 04:12:53 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1697602254833/227bf3ff-1961-4cc6-8a32-660d516f2260.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>📊📈 <strong>Prometheus &amp; Grafana</strong> 👁️‍🗨️</p>
<ol>
<li><p>📈 <strong>Visualize Cluster Information in Dashboards</strong>:</p>
<ul>
<li>Use Grafana to create dashboards that display essential cluster metrics, such as CPU and memory usage, node health, and pod status.</li>
</ul>
</li>
<li><p>📚 <strong>Pull Custom Application Logs via Sidecar</strong>:</p>
<ul>
<li>Implement a sidecar container in your pods to extract custom application logs and make them available for monitoring.</li>
</ul>
</li>
<li><p>💻 <strong>Create Dashboards as Code for Easy Editing</strong>:</p>
<ul>
<li>Opt for Infrastructure as Code (IaC) to define and manage your Grafana dashboards. This ensures easy editing, version control, and reproducibility of your monitoring setup.</li>
</ul>
</li>
</ol>
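<p>The "dashboards as code" point can be sketched in plain Python: build the dashboard definition programmatically and serialize it to JSON so it can live in version control. The panel titles and PromQL queries below are illustrative assumptions, not from the post:</p>

```python
import json

def make_dashboard(title, panels):
    """Build a minimal Grafana-style dashboard definition as a dict."""
    return {
        "title": title,
        "panels": [
            {"id": i + 1, "title": p["title"], "type": "timeseries",
             "targets": [{"expr": p["query"]}]}
            for i, p in enumerate(panels)
        ],
    }

dashboard = make_dashboard("Cluster Overview", [
    {"title": "CPU Usage", "query": "sum(rate(container_cpu_usage_seconds_total[5m]))"},
    {"title": "Memory Usage", "query": "sum(container_memory_working_set_bytes)"},
])

# Serialize so the file can be committed and diffed like any other code.
print(json.dumps(dashboard, indent=2))
```

<p>Generating the JSON instead of hand-editing it in the UI gives you versioning, review, and easy reproduction across clusters.</p>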
<p>Embrace Prometheus and Grafana to gain valuable insights into your Kubernetes cluster's health and performance. 🌐🔍 #KubernetesMonitoring #Prometheus #Grafana 🛠️</p>
<p>🚨 <strong>Incident Response Scenario: Leaderboard Service Outage</strong> 🚨</p>
<p><strong>Scenario:</strong> You are a Container Engineer responsible for maintaining an e-commerce platform's critical leaderboard service. This service displays the top-selling products on the platform. Suddenly, you receive an urgent alert notifying you that the leaderboard service is down, and it's impacting the user experience. Your task is to quickly respond to this incident, diagnose the issue, and restore the service to its normal working state.</p>
<p><strong>Role</strong>: Container Engineer 🐳 <strong>Platform</strong>: E-commerce 🛒 <strong>Service</strong>: Leaderboard 🏆</p>
<p>📅 <strong>Real-time Actions</strong>:</p>
<ol>
<li><p><strong>🔥 Immediate Alert Acknowledgment</strong>: You swiftly acknowledge the alert, notifying your team that you're diving into the incident.</p>
</li>
<li><p><strong>📊 Monitoring and Logging Tools</strong>: Access real-time data using monitoring tools like Prometheus and Grafana to gauge cluster health and resource use.</p>
</li>
<li><p><strong>🕵️ Kubernetes Cluster Status Check</strong>: Utilize <code>kubectl</code> to confirm the Kubernetes cluster status. Ensure it's not a cluster-wide issue by inspecting nodes and control plane components.</p>
</li>
<li><p><strong>📜 Leaderboard Service Logs</strong>: Check service logs for error messages with <code>kubectl logs</code> and evaluate recent events.</p>
</li>
<li><p><strong>📦 Pod Inspection</strong>: Run <code>kubectl get pods</code> to list all pods, including the leaderboard pod. Spot the "Failed" status.</p>
</li>
<li><p><strong>🔍 Troubleshooting the Pod</strong>: Use <code>kubectl describe pod &lt;pod-name&gt;</code> to uncover details about the failure, including resource and mounting issues.</p>
</li>
<li><p><strong>💾 Resource Check</strong>: Review resource requests and limits in the pod configuration to avoid resource starvation.</p>
</li>
<li><p><strong>🔄 Rolling Restarts</strong>: If issues are found, trigger a rolling restart by updating the Deployment to create fresh pods.</p>
</li>
<li><p><strong>👨‍⚕️ Health Checks</strong>: Ensure liveness and readiness probes in the deployment are correctly configured.</p>
</li>
<li><p><strong>🌐 Integration and Network Issues</strong>: Investigate integration and network problems within the cluster.</p>
</li>
<li><p><strong>🔌 Database Connectivity</strong>: Verify the leaderboard service's ability to connect to the database, essential for fetching sales data.</p>
</li>
<li><p><strong>📦 Docker Image</strong>: Confirm availability and correctness of the Docker image in the deployment configuration.</p>
</li>
<li><p><strong>🛡️ Service Checks</strong>: Confirm the Kubernetes service correctly routes traffic and is reachable.</p>
</li>
<li><p><strong>🔄 Backup and Rollback Plan</strong>: Maintain a rollback plan in case of prolonged issues. Consider implementing a backup mechanism for a default leaderboard.</p>
</li>
<li><p><strong>📝 Documentation and Communication</strong>: Document all actions and updates, keeping the team and stakeholders informed.</p>
</li>
<li><p><strong>✅ Resolution and Verification</strong>: After addressing the root cause, verify that the leaderboard service is operational and meets performance expectations.</p>
</li>
<li><p><strong>🔍 Post-Incident Analysis</strong>: Conduct a post-incident analysis to understand the cause, document lessons learned, and implement preventive measures.</p>
</li>
</ol>
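<p>Step 5's "spot the Failed status" can be scripted rather than eyeballed, by filtering the JSON that <code>kubectl get pods -o json</code> emits. The sample data below is made up for illustration:</p>

```python
import json

def failing_pods(kubectl_json):
    """Return names of pods whose phase is not Running or Succeeded."""
    healthy = {"Running", "Succeeded"}
    data = json.loads(kubectl_json)
    return [item["metadata"]["name"]
            for item in data["items"]
            if item["status"]["phase"] not in healthy]

# Illustrative sample standing in for real `kubectl get pods -o json` output.
sample = json.dumps({"items": [
    {"metadata": {"name": "leaderboard"}, "status": {"phase": "Failed"}},
    {"metadata": {"name": "web"}, "status": {"phase": "Running"}},
]})
print(failing_pods(sample))  # ['leaderboard']
```

<p>A filter like this is handy in an incident because it cuts a long pod list down to just the pods that need attention.</p>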
<p>In this real-time scenario, swift response, a systematic troubleshooting approach, and effective communication are vital to minimize downtime and maintain a positive user experience on the e-commerce platform. #IncidentResponse #Kubernetes #Ecommerce #ContainerEngineer</p>
<h3 id="heading-monitoring-priorities">👀 <strong>Monitoring Priorities</strong> 👀</h3>
<p><strong>1. Node Health</strong>:</p>
<ul>
<li>🏥 Monitor node health to ensure each node in the cluster is running smoothly.</li>
</ul>
<p><strong>2. Cluster CPU/Memory Capacity</strong>:</p>
<ul>
<li>💻 Keep an eye on cluster-wide CPU and memory capacity to prevent resource bottlenecks.</li>
</ul>
<p><strong>3. Pod Health Checks</strong>:</p>
<ul>
<li>❤️‍🩹 Implement health checks for pods to detect issues and ensure they're in a healthy state.</li>
</ul>
<p><strong>4. Networking</strong>:</p>
<ul>
<li>🌐 Monitor network traffic and connectivity to guarantee seamless communication between pods and services.</li>
</ul>
<p><strong>5. Application Logs</strong>:</p>
<ul>
<li>📋 Collect and analyze application logs for insights into app behavior and potential issues.</li>
</ul>
<p><strong>Objectives:</strong></p>
<p><strong>1. Identify Pod Configuration Error</strong>:</p>
<ul>
<li>🕵️‍♂️ Identify the error within the pod's configuration causing the app malfunction.</li>
</ul>
<p><strong>2. Update Pod Configuration</strong>:</p>
<ul>
<li>🔄 Revise the pod's configuration to bring the app back to its expected, functioning state.</li>
</ul>
<p>Incorporating monitoring and addressing configuration issues are key elements of maintaining a healthy and operational Kubernetes environment. 🚀</p>
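<p>For the "Pod Health Checks" priority above, liveness and readiness probes are declared directly in the pod spec. A minimal sketch, in which the image name, port, and paths are placeholders:</p>

```yaml
# Hypothetical container spec fragment showing liveness/readiness probes.
containers:
  - name: leaderboard
    image: example/leaderboard:1.0   # placeholder image
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 15
    readinessProbe:
      httpGet:
        path: /ready
        port: 8080
      periodSeconds: 5
```

<p>The liveness probe restarts a stuck container; the readiness probe keeps a pod out of service endpoints until it can actually serve traffic.</p>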
<p>Observe the worker nodes</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697600579283/d48df2dc-e0d3-469b-9a2c-937784664e68.png" alt class="image--center mx-auto" /></p>
<p>Observe the running workloads</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697600696008/c4a1d145-002e-43f8-824c-f89a2f65e2a4.png" alt class="image--center mx-auto" /></p>
<p>Get more information about the <code>leaderboard</code> pod</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697600748548/5759751e-9bde-4d73-a392-45f48cabe033.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697600780022/a36be4d4-be52-4aff-b019-8906c6f317c2.png" alt class="image--center mx-auto" /></p>
<p>Check the logs for the <code>query-app</code> container</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697601028927/bd26672d-16ce-487f-9490-b05fbaa7570d.png" alt class="image--center mx-auto" /></p>
<p>Someone had a typo in the command (<code>ech</code> instead of <code>echo</code>), which caused the error</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697601112434/dad60069-2f33-405c-94ac-f6a1a4d5cd50.png" alt class="image--center mx-auto" /></p>
<p><strong>2. Update Pod Configuration</strong>:</p>
<ul>
<li>🔄 Revise the pod's configuration to bring the app back to its expected, functioning state.</li>
</ul>
<p>Export the leaderboard pod configurations into a leaderboard.yaml file:</p>
<pre><code>kubectl get pod leaderboard -o yaml &gt; leaderboard.yaml</code></pre>
<p>Open the file:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697601542305/0078c17a-a0a6-4921-8c71-070fce53571e.png" alt class="image--center mx-auto" /></p>
<pre><code>vim leaderboard.yaml</code></pre>
<p>Edit the command to be <code>echo</code> instead of <code>ech</code>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697601511150/d835559d-a47f-47b0-ab91-017a58c77c85.png" alt class="image--center mx-auto" /></p>
<p>Save and exit the file by pressing Escape, then typing <code>:wq</code>.</p>
<p>Attempt to update the pod:</p>
<pre><code>kubectl apply -f leaderboard.yaml</code></pre>
<p>We can't update the <code>command</code> key for a running pod, so you'll see an error instead.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697601659302/fcbdad01-4410-4507-9a99-f96126d14427.png" alt class="image--center mx-auto" /></p>
<p>Delete the pod:</p>
<pre><code>kubectl delete pod leaderboard</code></pre>
<p>Confirm it's gone:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697601704716/c6afc738-9cb7-4994-bbf2-d1f09e1bf6ea.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697601739728/58a2e610-6395-42a5-830d-38e78e567d57.png" alt class="image--center mx-auto" /></p>
<pre><code>kubectl get pods</code></pre>
<p>Re-create the pod:</p>
<pre><code>kubectl apply -f leaderboard.yaml</code></pre>
<p>Confirm it exists:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697601790538/f844327e-00eb-413d-8b38-b1194644c242.png" alt class="image--center mx-auto" /></p>
<pre><code>kubectl get pods</code></pre>
<p>Check the logs for the updated <code>query-app</code> container:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697601830589/982237c1-a215-44fb-89da-b67df4a2b11a.png" alt class="image--center mx-auto" /></p>
<pre><code>kubectl logs leaderboard -c query-app</code></pre>
<p>Get the pod description again:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697601892661/48887f3c-71cc-4745-bfbb-7aeb9d241f88.png" alt class="image--center mx-auto" /></p>
<pre><code>kubectl describe pod leaderboard</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697601945889/1dc89a41-bc63-497f-9f33-b66d497d9c44.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697601959279/6ee43a44-1824-46b8-9084-eb1c4a18694d.png" alt class="image--center mx-auto" /></p>
<p>#KubernetesMonitoring #PodConfiguration #AppMaintenance 🛠️</p>
]]></content:encoded></item><item><title><![CDATA[☁️ AWS Cloud Project 6]]></title><description><![CDATA[🌐Introduction: In this comprehensive guide, we'll take you through the process of deploying a WordPress website on AWS Lightsail. We'll cover everything from setting up your Lightsail instance to configuring a custom domain and leveraging AWS CloudF...]]></description><link>https://praful.cloud/aws-cloud-project-6</link><guid isPermaLink="true">https://praful.cloud/aws-cloud-project-6</guid><category><![CDATA[#AWS #AwsCommunityBuilders#CloudEngineering #CloudComputing #AmazonWebServices #AWSArchitecture #DevOps #CloudSolutions #CloudSecurity #InfrastructureAsCode #AWSCertification #Serverless #AWSCommunity #TechBlogs #CloudExperts #CloudMigration #CloudOps #AWSJobs #TechIndustry #CareerInTech #InnovationInCloud #devops #cloudengineerjobs #devopsjobs #azure #gcp #oci #cloudjobs, #kubernetes]]></category><dc:creator><![CDATA[Praful Patel]]></dc:creator><pubDate>Wed, 06 Sep 2023 06:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1703201979521/4969de3f-54d2-4aaa-949f-9609f0363590.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1691794383919/77c14aad-500f-4cac-bf7e-f0d2a199173f.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-introduction-in-this-comprehensive-guide-well-take-you-through-the-process-of-deploying-a-wordpress-website-on-aws-lightsail-well-cover-everything-from-setting-up-your-lightsail-instance-to-configuring-a-custom-domain-and-leveraging-aws-cloudfront-for-enhanced-performance-and-security">🌐 Introduction</h2>
<p>In this comprehensive guide, we'll take you through the process of deploying a WordPress website on AWS Lightsail. We'll cover everything from setting up your Lightsail instance to configuring a custom domain and leveraging AWS CloudFront for enhanced performance and security.</p>
<p>☁️Deploy web application on AWS Lightsail</p>
<p>🔗 Tags: AWS, Lightsail, WordPress, Custom Domain, CloudFront, Website Deployment</p>
<h2 id="heading-part-1-deploying-wordpress-on-lightsail-marketplace">Part 1: Deploying WordPress on Lightsail Marketplace</h2>
<h3 id="heading-step-1-launching-a-wordpress-instance">Step 1: Launching a WordPress Instance</h3>
<ol>
<li><p>Log in to your AWS Management Console.</p>
</li>
<li><p>Navigate to AWS Lightsail and click on "Create instance."</p>
</li>
<li><p>Choose the "WordPress" blueprint from the "Select a blueprint" section.</p>
</li>
<li><p>Configure your instance by selecting the appropriate instance plan and region.</p>
</li>
<li><p>Give your instance a unique name and click "Create instance."</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1691790893910/194e5f54-71f4-4ae5-afd8-1ae90c4ac0f7.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-step-2-accessing-your-wordpress-dashboard">Step 2: Accessing Your WordPress Dashboard</h3>
<ol>
<li><p>Once your instance is running, click on its name to access the management page.</p>
</li>
<li><p>Find the "Open in browser" option to access your WordPress dashboard.</p>
</li>
<li><p>Complete the initial setup by providing your website's title, admin username, and password.</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1691793047298/0f644178-681b-4d03-9132-50e3f4c38b6d.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-assign-static-ip">Assign Static IP</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1691791782883/2a12aa4d-9d06-40a4-9212-32a37ef110ec.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-configuring-a-custom-domain-on-lightsail">Configuring a Custom Domain on Lightsail</h2>
<h3 id="heading-step-1-purchasing-a-domain-if-needed">Step 1: Purchasing a Domain (if Needed)</h3>
<ol>
<li><p>Choose a domain registrar (e.g., GoDaddy) and purchase your desired domain name.</p>
</li>
<li><p>Configure the domain's DNS settings to use AWS Lightsail's DNS servers.</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1691791432658/c9717054-7cb1-44dd-9582-1712a3ffa382.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-step-2-setting-up-dns-in-lightsail">Step 2: Setting Up DNS in Lightsail</h3>
<ol>
<li><p>In Lightsail, navigate to the "Networking" tab of your instance.</p>
</li>
<li><p>Click "Create DNS zone" and enter your domain name.</p>
</li>
<li><p>Configure DNS records, including A records and CNAME records.</p>
</li>
</ol>
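<p>The records from step 3 typically look like the following zone-file sketch; the domain name and the documentation IP <code>203.0.113.10</code> are placeholders for your own values:</p>

```
; Hypothetical records for a Lightsail DNS zone
example.com.       A      203.0.113.10   ; the instance's static IP
www.example.com.   CNAME  example.com.
```

<p>The A record points the bare domain at the static IP you attached earlier; the CNAME makes <code>www</code> follow the bare domain.</p>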
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1691793490809/a7f1cbd5-55be-4c0a-bcaa-11970aed492a.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-step-3-updating-wordpress-settings">Step 3: Updating WordPress Settings</h3>
<ol>
<li><p>Log in to your WordPress dashboard.</p>
</li>
<li><p>Go to "Settings" &gt; "General."</p>
</li>
<li><p>Update the "WordPress Address (URL)" and "Site Address (URL)" with your custom domain.</p>
</li>
</ol>
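<p>Alternatively, the same two URLs can be pinned in <code>wp-config.php</code>. A hedged sketch, with a placeholder domain:</p>

```php
// Hypothetical wp-config.php entries; replace example.com with your domain.
define( 'WP_HOME',    'https://example.com' );
define( 'WP_SITEURL', 'https://example.com' );
```

<p>Defining these constants overrides the dashboard settings, which is useful if a bad URL change ever locks you out of wp-admin.</p>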
<p><a target="_blank" href="http://44.217.116.136/wp-admin/index.php">http://44.217.116.136/wp-admin/index.php</a></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1691793613918/07488297-9942-45ab-8200-13c7134f11a5.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-part-3-configuring-aws-cloudfront-on-aws-lightsail">Part 3: Configuring AWS CloudFront on AWS Lightsail</h2>
<h3 id="heading-step-1-creating-a-cloudfront-distribution">Step 1: Creating a CloudFront Distribution</h3>
<ol>
<li><p>Access the AWS Management Console and navigate to AWS CloudFront.</p>
</li>
<li><p>Click "Create Distribution" and choose the "Web" distribution type.</p>
</li>
<li><p>Configure your distribution settings, including your custom domain as an alternate domain name.</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1691790893910/194e5f54-71f4-4ae5-afd8-1ae90c4ac0f7.png" alt class="image--center mx-auto" /></p>
<p>Access WordPress through the browser</p>
<h3 id="heading-step-2-updating-dns-records">Step 2: Updating DNS Records</h3>
<ol>
<li>In Lightsail DNS, create a CNAME record pointing to your CloudFront distribution domain.</li>
</ol>
<h3 id="heading-step-3-securing-cloudfront-with-https">Step 3: Securing CloudFront with HTTPS</h3>
<ol>
<li><p>In CloudFront settings, select your distribution and click "Edit."</p>
</li>
<li><p>Under the "SSL Certificate" section, choose your custom domain's SSL certificate.</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1691791053757/957320a1-e328-4361-bd55-85196062b044.png" alt class="image--center mx-auto" /></p>
<p>Access the WordPress instance through SSH</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1691790986862/5feb8240-5abc-4b51-a899-c723693f5a0a.png" alt class="image--center mx-auto" /></p>
<h1 id="heading-custom-domain-registration">Custom Domain Registration:</h1>
<p>Create and assign static ip</p>
<h2 id="heading-domain-configuration"><strong>Domain Configuration</strong></h2>
<p>Go to Domains &amp; DNS in AWS Lightsail</p>
<p>⚙️ <strong>Configure DNS Zone:</strong> After confirming, you'll set up your DNS zone. This is like creating a map for your domain to lead visitors to your website.</p>
<p>📝 <strong>Add DNS Records:</strong> Add essential DNS records – A records, CNAME records, and more. Think of these as signposts guiding visitors to your virtual doorstep.</p>
<p>Create a storage bucket for WordPress media</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1692042730703/90d6488d-530f-4ae1-b0a1-3efd7a695ba8.png" alt class="image--center mx-auto" /></p>
<p>Create Distribution</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1692042846747/090b0a78-8665-4898-9688-00f06b84559e.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1692042922077/5c450885-d998-48c2-b062-dc69942df9f6.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1692042947170/74b9ed88-b19e-4ae0-9b29-1d3a8e23c490.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1692043220514/08104852-8063-4e9f-886a-527559543691.png" alt class="image--center mx-auto" /></p>
<p>Create SSL certificate</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1692043353821/9fb54627-0be5-4e52-b1fe-168cc9a278b5.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1692043430817/07afb8bf-d898-4689-9116-690b1aabc69a.png" alt class="image--center mx-auto" /></p>
<p>Attach certificate</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1692046802454/070b5bff-6528-404a-b33b-de8b81f8a5a7.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1692046867926/ba9ea593-d3f1-4413-b3fc-dab3ec3c50a2.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1692046983619/67d2627f-4f8f-48c8-80ae-6f6f27560f37.png" alt class="image--center mx-auto" /></p>
<p><a target="_blank" href="https://lightsail.aws.amazon.com/ls/docs/en_us/articles/amazon-lightsail-connecting-buckets-to-wordpress">https://lightsail.aws.amazon.com/ls/docs/en_us/articles/amazon-lightsail-connecting-buckets-to-wordpress</a></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1692049013319/8a103d6b-9ef5-474a-bf47-8c1f8a28a597.png" alt class="image--center mx-auto" /></p>
<p><a target="_blank" href="http://d1uesgpufji6a7.cloudfront.net">d1uesgpufji6a7.cloudfront.net</a></p>
<p>CNAME records are automatically added to the domain</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1692044721423/631c4799-94ae-48c7-905f-eb65f1e32f9a.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1692045156090/16c8baa4-8d53-441c-92d2-e2a273405a3f.png" alt class="image--center mx-auto" /></p>
<p>STEPS:</p>
<ol>
<li><p>Create a Lightsail WordPress instance</p>
</li>
<li><p>Create a static IP</p>
</li>
<li><p>Assign a custom domain</p>
</li>
<li><p>For content distribution:</p>
<ol>
<li><p>Create storage bucket</p>
</li>
<li><p>Create distribution</p>
</li>
<li><p>Create SSL Certificate</p>
</li>
</ol>
</li>
</ol>
<p>Attach storage</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1692047914237/18f2e5bb-c0a0-4f7f-a769-a50ec7ad68d5.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1692048205406/18d7c066-a65b-4de9-82de-662b51602096.png" alt class="image--center mx-auto" /></p>
<p><a target="_blank" href="https://lightsail.aws.amazon.com/ls/docs/en_us/articles/amazon-lightsail-connecting-buckets-to-wordpress">https://lightsail.aws.amazon.com/ls/docs/en_us/articles/amazon-lightsail-connecting-buckets-to-wordpress</a></p>
<p>Configuring HTTPS</p>
<p><a target="_blank" href="https://lightsail.aws.amazon.com/ls/docs/en_us/articles/amazon-lightsail-editing-wp-config-for-distribution">https://lightsail.aws.amazon.com/ls/docs/en_us/articles/amazon-lightsail-editing-wp-config-for-distribution</a></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1692055570258/339e971a-1ad8-44d7-87c6-d05a5a3181f1.png" alt class="image--center mx-auto" /></p>
<p>Verify that the SSL certificate is validated</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1692056045055/0141d42f-b9ea-4812-a81a-6368148945f7.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1692055662338/89c86c5e-511d-4da3-a86e-0474dea7aed2.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1692055782093/f2a28989-0f42-405a-a859-fe5fc505734e.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-conclusion">Conclusion:</h2>
<p>You've successfully deployed a WordPress website on AWS Lightsail and configured a custom domain while optimizing performance and security with AWS CloudFront. By following this step-by-step guide, you've taken a significant step towards building a scalable and secure web presence.</p>
<p>🎉 Congratulations on your WordPress deployment with AWS Lightsail! 🚀</p>
<p>#AWS #AwsCommunityBuilders #CloudEngineering #CloudComputing #AmazonWebServices #AWSArchitecture #DevOps #CloudSolutions #CloudSecurity #InfrastructureAsCode #AWSCertification #Serverless #AWSCommunity #TechBlogs #CloudExperts #CloudMigration #CloudOps #AWSJobs #TechIndustry #CareerInTech #InnovationInCloud #devops #cloudengineerjobs #devopsjobs #azure #gcp #oci #cloudjobs #kubernetes</p>
]]></content:encoded></item></channel></rss>