Saturday, September 14, 2024

Configure a Life Cycle Management in Azure Storage Account

 Azure Blob Storage lifecycle management offers a rule-based policy that you can use to transition blob data to the appropriate access tiers, or to expire and clean up data at the end of its lifecycle.

  • Select the Azure Storage account.
  • Open Lifecycle Management.
  • Add the rule below to clean up all blob data older than 7 days.
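    In Code view, the rule above corresponds to a policy JSON like the following (the rule name here is illustrative; the schema follows the documented lifecycle management policy definition):

```json
{
  "rules": [
    {
      "enabled": true,
      "name": "delete-blobs-older-than-7-days",
      "type": "Lifecycle",
      "definition": {
        "actions": {
          "baseBlob": {
            "delete": {
              "daysAfterModificationGreaterThan": 7
            }
          }
        },
        "filters": {
          "blobTypes": [
            "blockBlob"
          ]
        }
      }
    }
  ]
}
```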

    You can also configure the rule from the UI using the List view. For reference:

    • Lifecycle management policy definition Sample: https://learn.microsoft.com/en-us/azure/storage/blobs/lifecycle-management-overview#lifecycle-management-policy-definition
    • Lifecycle management rule definition Sample: https://learn.microsoft.com/en-us/azure/storage/blobs/lifecycle-management-overview#sample-rule


    Thanks for reading, CloudOps Signing Off! 😊

    Set Up an Azure Function (Service Bus and HTTP Triggers) with a Docker Container Image

    To set up an Azure Function, we first need to choose one of the Azure hosting plans:

    • Consumption Plan
    • Premium Plan
    • Dedicated Plan (App Service Plan)
    In this blog, we will see how to create an Azure Function using the App Service plan and configure it with a Docker container image.
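    The container image needs a Dockerfile. Below is a minimal sketch based on the official Azure Functions Python base image (assuming Python 3.9; adjust the image tag to match your runtime version):

```dockerfile
# Official Azure Functions Python base image (Functions host v4)
FROM mcr.microsoft.com/azure-functions/python:4-python3.9

# Tell the Functions host where the app code lives
ENV AzureWebJobsScriptRoot=/home/site/wwwroot \
    AzureFunctionsJobHost__Logging__Console__IsEnabled=true

# Install Python dependencies first to benefit from layer caching
COPY requirements.txt /
RUN pip install -r /requirements.txt

# Copy the function code
COPY . /home/site/wwwroot
```

    Build and push the image to the registry configured later in App Settings, e.g. docker build -t mydemoacrtest.azurecr.io/my-func:v1 . followed by docker push mydemoacrtest.azurecr.io/my-func:v1 (the image name is illustrative).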

    Once you have your Azure Function created in Azure, you can start configuring triggers.
    • Create a file called function_app.py using VS Code.
    • Add the code below to it.

    import azure.functions as func
    import json
    import logging
    import time

    app = func.FunctionApp()

    # Function 1 - Service Bus triggered
    @app.function_name('ServiceBusTrigger-Function')
    @app.service_bus_queue_trigger(arg_name="message",
                                   queue_name="my-servicebus-queue-name",
                                   connection="ServiceBusConnectionString")
    def main(message: func.ServiceBusMessage):
        logging.info(json.loads(message.get_body()))

        logging.info("Start: The time of code execution begin is : %s", time.ctime())
        time.sleep(300)
        # your processing logic

        logging.info("End : %s", time.ctime())


    # Function 2 - HTTP triggered
    @app.function_name('HttpTrigger-Function')
    @app.route(route="payload", auth_level=func.AuthLevel.ANONYMOUS)
    def get_metadata(httpReq: func.HttpRequest) -> func.HttpResponse:
        # get_body() returns bytes; decode before returning it in the response
        return func.HttpResponse(httpReq.get_body().decode())



    • Create a local.settings.json file to store the function parameter values needed to run the Azure Function locally.
    {
      "IsEncrypted": false,
      "Values": {
        "AzureWebJobsStorage": "DefaultEndpointsProtocol=https;AccountName=stg;AccountKey=abcdef==;EndpointSuffix=core.windows.net",
        "ServiceBusConnectionString": "Endpoint=sb://my-sbns.servicebus.windows.net/;SharedAccessKeyName=SharedAccessKey=abcdefghi=",
        "FUNCTIONS_WORKER_RUNTIME": "python",
        "my-servicebus-queue-name": "queue-name"
      }
    }


    • We can use App Service → Configuration → App settings to configure the parameter values. We also need to set the Docker image related parameters for deployment.

    [
      {
        "name": "APPLICATIONINSIGHTS_CONNECTION_STRING",
        "value": "InstrumentationKey=abcdef-4tgh-4567-jkih-tryrtyrtyr;IngestionEndpoint=https://ai.azure.com/;LiveEndpoint=https://es2.livediagnostics.monitor.azure.com/",
        "slotSetting": false
      },
      {
        "name": "AzureWebJobsStorage",
        "value": "DefaultEndpointsProtocol=https;AccountName=acctest;AccountKey=abcde==;EndpointSuffix=core.windows.net",
        "slotSetting": false
      },
      {
        "name": "DOCKER_REGISTRY_SERVER_PASSWORD",
        "value": "abcdefghijklmnopqrstuvwxyz",
        "slotSetting": false
      },
      {
        "name": "DOCKER_REGISTRY_SERVER_URL",
        "value": "https://mydemoacrtest.azurecr.io",
        "slotSetting": false
      },
      {
        "name": "DOCKER_REGISTRY_SERVER_USERNAME",
        "value": "mydemoacrtest",
        "slotSetting": false
      },
      {
        "name": "FUNCTIONS_EXTENSION_VERSION",
        "value": "~4",
        "slotSetting": false
      },
      {
        "name": "FUNCTIONS_WORKER_RUNTIME",
        "value": "python",
        "slotSetting": false
      },
      {
        "name": "ServiceBusConnectionString",
        "value": "Endpoint=sb://my-sbns.servicebus.windows.net/;SharedAccessKeyName=;SharedAccessKey=abcde=",
        "slotSetting": false
      },
      {
        "name": "WEBSITES_ENABLE_APP_SERVICE_STORAGE",
        "value": "false",
        "slotSetting": false
      }
    ]

    This is the folder structure.
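    Assuming the v2 Python programming model and the files created above, the layout looks roughly like this:

```
.
├── Dockerfile
├── function_app.py
├── host.json
├── local.settings.json
└── requirements.txt
```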

    Thanks for reading, CloudOps Signing Off! 😊



    Azure Function Service Bus Trigger Receives the Same Message Repeatedly, About Every 10 Minutes, from the Azure Service Bus Queue

    I have a web application hosted in App Service which sends messages to my Azure Service Bus queue; this works fine. I also have an Azure Function running under an App Service plan (Dedicated plan) with a Service Bus trigger, which receives messages from the queue. The problem is that after about 10 minutes, while the first trigger's processing has not yet completed, the Azure Function receives the same message from the Service Bus queue again. My Azure Function's processing logic can take up to 50-60 minutes.

    Initially I thought the cause was in the Azure Function itself, either a timeout or some other misbehavior, but after two days of investigation, simulating longer durations of 20 and 30 minutes, we found that this is due to default Service Bus behaviour.

    Service Bus has a MaxAutoRenewDuration property, which controls the maximum duration for which the lock on a message in the queue is renewed automatically; it also interacts with MaxDeliveryCount.


    The default MaxAutoRenewDuration for a Service Bus message is 5 minutes, so if a message is not settled by your Azure Function within that window (by default, the runtime calls Complete on the message if the function finishes successfully, or Abandon if the function fails), Service Bus increments the delivery count and re-queues the same message until the delivery count reaches its limit. Once the limit is reached, the message is moved to the dead-letter queue.

    I am not sure why the message is re-queued every 10 minutes when the max auto-lock renewal duration is just 5 minutes, but what I noticed several times is that it was always double the auto-renew duration: if it is 5 minutes, the function re-triggers (the message is re-queued) after 10 minutes; if it is 3 minutes, the function re-triggers after 6 minutes, and so on.

    To resolve this re-queue/re-trigger issue every 10 minutes, we need to increase the auto-renew duration. It is configurable in the Azure Function host.json file and maps to ServiceBusProcessor.MaxAutoLockRenewalDuration; the default value is 5 minutes. Note that with Service Bus extension 5.x (extension bundle 4.x), the setting is named maxAutoLockRenewalDuration and sits directly under serviceBus, replacing the older maxAutoRenewDuration setting under messageHandlerOptions.

    https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-service-bus?tabs=isolated-process%2Cfunctionsv2%2Cextensionv2&pivots=programming-language-python#hostjson-settings


    {
      "version": "2.0",
      "extensions": {
        "serviceBus": {
          "messageHandlerOptions": {
            "autoComplete": true,
            "maxConcurrentCalls": 16,
            "maxAutoRenewDuration": "00:50:00"
          }
        }
      },
      "functionTimeout": "-1",
      "logging": {
        "applicationInsights": {
          "samplingSettings": {
            "isEnabled": true,
            "excludedTypes": "Request"
          }
        },
        "logLevel": {
          "default": "Warning",
          "Function": "Warning",
          "Function.Accelerator.User": "Debug"
        }
      },
      "extensionBundle": {
        "id": "Microsoft.Azure.Functions.ExtensionBundle",
        "version": "[4.*, 5.0.0)"
      }
    }

    Thanks for reading, CloudOps Signing Off! 😊

    Azure Function Timeout After 10 Minutes

    I have my Azure Function running a Python app, and recently I faced an issue where the Azure Function timed out after 10 minutes whenever processing took longer than that. This is because of the default timeout settings for each Azure Functions hosting plan.
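    For reference, the default functionTimeout is 5 minutes on the Consumption plan (capped at 10 minutes) and 30 minutes on the Premium and Dedicated plans; on a Dedicated plan it can also be set to -1 for no limit. To raise the timeout, set functionTimeout in host.json, for example:

```json
{
  "version": "2.0",
  "functionTimeout": "00:30:00"
}
```

    On the Consumption plan the timeout cannot exceed 10 minutes; for longer-running work, move to a Premium or Dedicated plan.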


    Wednesday, February 28, 2024

    Kubernetes Essentials: A Handy Reference for DevOps Engineers

     This Kubernetes cheatsheet provides a comprehensive reference for managing Kubernetes clusters and applications efficiently. It includes essential commands for beginners and experienced users. From basic cluster information to advanced deployment strategies and observability tools, this cheatsheet covers everything you need to navigate Kubernetes effectively. Whether you're deploying applications, troubleshooting issues, or optimizing resource usage, this cheatsheet is your go-to resource for streamlining Kubernetes operations.

     Cluster Management:

    Contexts:

    • kubectl config get-contexts: Lists available contexts.
    • kubectl config current-context: Displays the current context.
    • kubectl config use-context context-name: Sets the current context to the specified one.
    • kubectl config set-context --current --namespace=<namespace-name>: Sets the default namespace for the current context.

    Node Information:

    • kubectl get nodes: Lists all nodes in the cluster.
    • kubectl describe node <node-name> | grep MemoryPressure: Displays the MemoryPressure condition for a specific node.
    • kubectl top nodes: Shows current resource usage for all nodes in the cluster.
    • kubectl drain <node-name>: Drains a node for maintenance, evicting pods gracefully.

    Namespaces and Resources:

    • kubectl create namespace <namespace-name>: Creates a new namespace.
    • kubectl get pods --all-namespaces: Lists pods in all namespaces.
    • kubectl describe namespace <namespace-name>: Describes details of a namespace.
    • kubectl delete namespace <namespace-name>: Deletes a namespace and all its resources.


    Pod Management:

    Pods:

    • kubectl get pods --namespace=<namespace-name> -o wide|json|yaml: Lists pods in a specific namespace with additional details.
    • kubectl get pods | grep <POD-NAME>: Filters pods by name.
    • kubectl describe pod <pod-name>: Describes details of a pod.
    • kubectl delete pod <POD-NAME> --grace-period=0 --force --namespace <NAMESPACE>: Deletes a pod forcibly.
    • kubectl logs <pod-name> --tail=<lines>: Retrieves the last few lines of logs for a pod.


    Deployment Management:

    Deployments:

    • kubectl get deployments: Lists all deployments in the cluster.
    • kubectl describe deployment <deployment-name>: Describes details of a deployment.
    • kubectl scale --replicas=5 deployment/<deployment-name>: Scales a deployment to a specific number of replicas.
    • kubectl edit deployment <deployment-name>: Edits a deployment's configuration.
    • kubectl rollout pause deployment/<deployment-name>: Pauses a deployment rollout.
    • kubectl rollout resume deployment/<deployment-name>: Resumes a paused deployment rollout.

    Rollout and Rollback:

    • kubectl rollout status deployment/<deployment-name>: Monitors deployment rollout status.
    • kubectl rollout history deployment/<deployment-name>: Displays revision history of a deployment.
    • kubectl rollout undo deployment/<deployment-name>: Rolls back a deployment to the previous version.
    • kubectl rollout undo deployment/<deployment-name> --to-revision=<revision>: Rolls back to a specific revision.


    Troubleshooting:

    Logs and Events:

    • kubectl logs <pod-name> --namespace <namespace-name>: Retrieves logs for a pod in a specific namespace.
    • kubectl logs <pod-name> --container <container-name>: Retrieves logs from a specific container of a pod.
    • kubectl describe pod <pod-name> --namespace <namespace-name>: Describes details of a pod in a specific namespace.
    • kubectl get events --namespace <namespace-name> --sort-by='{.lastTimestamp}': Displays events sorted by timestamp in a specific namespace.

    Executing Commands:

    • kubectl exec -it <pod-name> -- /bin/bash: Opens a shell in a running pod (use /bin/sh if the image doesn't include bash).
    • kubectl exec -it <pod-name> -- powershell: Opens a powershell shell in a running pod.
    • kubectl exec -it <pod-name> --container <container-name> -- /bin/bash: Opens a shell in a specific container of a pod.


    Miscellaneous:

    Exit Code and Restart:

    • echo $?: Displays the exit code of the last command.
    • kubectl rollout restart deployment <deployment-name>: Restarts a deployment.

    Scaling and Deleting Pods:

    • kubectl scale deployment <deployment-name> --replicas=0: Scales a deployment to zero replicas.
    • kubectl delete pods --all --namespace <namespace-name>: Deletes all pods in a specific namespace.

    Automated Actions:

    • for each in $(kubectl get pods --namespace <namespace-name> | grep Evicted | awk '{print $1}'); do kubectl delete pods $each --namespace <namespace-name>; done: Deletes all evicted pods in a specific namespace.
    • kubectl get pods | grep <pod-name> | awk '{print $1}' | xargs kubectl describe pod: Describes all pods whose name matches a pattern.

     

    Saturday, January 21, 2023

    CloudFlare Page Rules 101: Understanding and Implementing Redirects

    Redirecting your website to a secure HTTPS connection is a crucial step in ensuring the security and privacy of your website visitors. However, this can become tricky when dealing with different subdomains, such as the "www" subdomain. One way to handle it is by using CloudFlare Page Rules to redirect all HTTP traffic to HTTPS, or all non-www traffic to www over HTTPS.

    In this blog post, I will show you how to set up redirects using CloudFlare Page Rules for a couple of different scenarios:

    1. Redirecting all traffic from http://sunilcloudops.blogspot.com/* to https://www.sunilcloudops.blogspot.com/$1
    2. Redirecting all traffic from https://sunilcloudops.blogspot.com/* to https://www.sunilcloudops.blogspot.com/$1
    3. Redirecting all traffic from http://www.sunilcloudops.blogspot.com/* to https://www.sunilcloudops.blogspot.com/$1
    4. Redirecting all traffic of a sub-domain from http://blog.sunilcloudops.blogspot.com/* to https://sunilcloudops.blogspot.com/$1
    5. Redirecting all traffic of a sub-domain from http://blog.sunilcloudops.blogspot.com/* to https://sunilnewdomain.com/$1 - (external domain)

    Before we begin, make sure that your website is set up with a valid SSL certificate and that it is active on CloudFlare.

    • Redirecting all traffic from http://sunilcloudops.blogspot.com/* to https://www.sunilcloudops.blogspot.com/$1

      To redirect all traffic from non-www http to https://www, follow these steps:
      • Log in to your CloudFlare account
      • Select the website you want to redirect
      • Click on the "Page Rules" tab
      • Click on the "Create Page Rule" button
      • In the "If the URL matches" field, enter "http://sunilcloudops.blogspot.com/*"
      • In the "Then the settings are" field, select "Forwarding URL"
      • In the "Status code" field, select "301 - Permanent Redirect"
      • In the "Redirect URL/Destination URL" field, enter "https://www.sunilcloudops.blogspot.com/$1"
      • Click on the "Save and Deploy" button

    • Redirecting all traffic from https://sunilcloudops.blogspot.com/* to https://www.sunilcloudops.blogspot.com/$1

    • Redirecting all traffic from http://www.sunilcloudops.blogspot.com/* to https://www.sunilcloudops.blogspot.com/$1


    • Redirecting all traffic of a sub-domain from http://blog.sunilcloudops.blogspot.com/* to https://sunilcloudops.blogspot.com/$1




    • Redirecting all traffic of a sub-domain from http://blog.sunilcloudops.blogspot.com/* to https://sunilnewdomain.com/$1 - (external domain)



    Thanks for reading, CloudOps Signing Off! 😊

            


    Installing a Specific Version of Node.js and npm in an Azure DevOps Pipeline

    Node.js is a popular JavaScript runtime that allows developers to build server-side applications using JavaScript. npm is the default package manager for Node.js, and it is used to install and manage packages for Node.js projects.

    When building Node.js projects, it is often necessary to specify a specific version of Node.js and npm to be used for the build and release process. This ensures that the same version of the tools is used across all environments and prevents compatibility issues. 

    This can be done using the Node Version Manager (nvm) or the Node.js installer.

    Using Node.js installer:

    You can install a specific version of Node.js and npm by using the Node.js tool installer task in Azure DevOps. You can add the task to your pipeline yaml file and configure it to install the desired version of Node.js and npm:
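    A minimal sketch of that task in the pipeline YAML (the version number matches the example below):

```yaml
steps:
  - task: NodeTool@0
    displayName: 'Install Node.js 14.18.1'
    inputs:
      versionSpec: '14.18.1'
```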

    This will install version 14.18.1 of Node.js (which bundles npm version 6.14.15) on the build agent. You can use the following commands to check the versions:
    • node -v
    • npm -v

    If you want to install a specific version of npm instead of the one bundled by NodeTool, you can add the following PowerShell command to your pipeline after the task above.
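    For example (assuming a Windows agent; on Linux, a script step with the same npm command works as well):

```yaml
  - powershell: npm install -g npm@7.14.0
    displayName: 'Install npm 7.14.0'
```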

    This will install version 7.14.0 of npm on the build agent.


    Using nvm:

    nvm is a command-line tool that allows you to easily install and manage multiple versions of Node.js. To install nvm, you can use the following command in your pipeline yaml file:
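    For example (assuming a Linux agent; v0.39.7 is an illustrative nvm release tag, substitute the version you want):

```yaml
  - script: |
      curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
    displayName: 'Install nvm'
```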

    Once nvm is installed, you can use it to install a specific version of Node.js in your pipeline YAML file as shown below:
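    A sketch of such a step (note that nvm is a shell function, so it must be re-sourced in every script step, since each step runs in a fresh shell):

```yaml
  - script: |
      # Load nvm into this shell before using it
      export NVM_DIR="$HOME/.nvm"
      [ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh"
      nvm install 14.18.1
      nvm alias default 14.18.1
    displayName: 'Install Node.js 14.18.1 via nvm'
```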

    This will install version 14.18.1 of Node.js and set it as the default version for the pipeline.

    You can also install a specific version of npm in your pipeline YAML file as shown below:
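    For example (again re-sourcing nvm first, so the npm that gets updated is the one from the nvm-managed Node.js):

```yaml
  - script: |
      export NVM_DIR="$HOME/.nvm"
      [ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh"
      npm install -g npm@7.14.0
    displayName: 'Install npm 7.14.0'
```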

    This will install version 7.14.0 of npm.

    In conclusion, you can install a specific version of Node.js and npm in Azure DevOps by using the Node Version Manager (nvm) or using the Node.js tool installer task in a pipeline. 


    Thanks for reading, CloudOps Signing Off! 😊