Tag: Blogs

All types of blogs, including life experiences, for any user to discuss and comment on.

  • AWS Elastic Beanstalk

    AWS Elastic Beanstalk is a fully managed service that lets developers easily deploy and run web applications on AWS. It provides an easy way to provision, configure, and manage the underlying infrastructure for those applications.

    Using Elastic Beanstalk, you simply upload your code and the service takes care of the rest. EC2 instances, Amazon RDS databases, and load balancers are automatically provisioned and configured for you.

    It supports Java, .NET, PHP, Node.js, Python, Ruby, and Go, and can deploy web applications, RESTful web services, and background worker processes. To manage your application and environment, you can also use the web-based management console and command line tools.
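The console and CLI workflow described above can also be driven from code. Here is a minimal boto3 sketch, assuming an existing AWS account with Elastic Beanstalk permissions; the application name, environment name, and platform stack below are hypothetical placeholders:

```python
# Minimal sketch: creating an Elastic Beanstalk application and environment
# with boto3. Names and the solution stack are hypothetical -- substitute
# your own values.

def environment_request(app_name: str, env_name: str, platform: str) -> dict:
    """Build the kwargs for elasticbeanstalk.create_environment()."""
    return {
        "ApplicationName": app_name,
        "EnvironmentName": env_name,
        "SolutionStackName": platform,
        # Elastic Beanstalk provisions EC2 instances, load balancing, and
        # monitoring from this single request.
    }

def deploy(app_name: str = "my-blog-app",
           env_name: str = "my-blog-env",
           platform: str = "64bit Amazon Linux 2023 v4.0.0 running Python 3.11"):
    import boto3  # requires AWS credentials to actually run
    eb = boto3.client("elasticbeanstalk")
    eb.create_application(ApplicationName=app_name)
    eb.create_environment(**environment_request(app_name, env_name, platform))
```

After the environment is up, each new application version is a separate upload-and-deploy call; the service swaps it in without you touching the underlying instances.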

    Your application is automatically deployed, scaled, monitored, and maintained by Elastic Beanstalk. Your application scales automatically based on incoming traffic, and resources and your application’s health are monitored. You can also update your instances’ operating system and runtime environment automatically.

    Elastic Beanstalk also integrates with other Amazon services, such as Amazon S3, Amazon RDS, Amazon SNS, and Amazon CloudWatch, so you can easily store and retrieve files, access data, send notifications, and monitor your application. This makes it an excellent platform for building, deploying, and running web applications.

  • AWS Lambda

    AWS Lambda is a serverless compute service offered by Amazon Web Services (AWS) that allows users to run code without provisioning or managing servers. Using Lambda, users can upload their code and create a function, and the service handles scaling automatically. Lambda supports a variety of programming languages, including Node.js, Python, Java, and C#, and can be triggered by events from AWS services such as S3, DynamoDB, and SNS. It is a popular choice for building highly available, scalable, and event-driven applications.
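A minimal sketch of a Lambda handler triggered by an S3 event. The event shape follows the standard S3 event notification; what you do with each object (resize an image, index a blog post, etc.) is up to you:

```python
import json

def handler(event, context):
    """Collect the bucket and key of each S3 object in the triggering event."""
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        processed.append(f"s3://{bucket}/{key}")
    # Lambda scales this handler automatically: one invocation per event batch.
    return {"statusCode": 200, "body": json.dumps(processed)}
```

Locally you can exercise the handler by passing a hand-built event dict; in AWS, the S3 bucket notification configuration wires the trigger for you.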

  • AWS EKS

    Using Amazon EKS, you can run Kubernetes on AWS without installing and operating your own control plane or worker nodes.

    Kubernetes is an open-source container orchestration system that allows you to deploy and manage containerized applications. Kubernetes organizes containers into logical groups for management and discovery, then launches them onto Amazon Elastic Compute Cloud (Amazon EC2) instances. You can run containerized applications on premises and in the cloud using Kubernetes, including microservices, batch processing workers, and PaaS platforms.

    EKS deploys the Kubernetes control plane, including the API servers and backend persistence layer, across multiple AWS Availability Zones (AZs) for high availability and fault tolerance. AWS EKS automatically detects and replaces unhealthy nodes in the control plane. AWS Fargate provides serverless compute for containers, so you can run EKS using it as part of a serverless computing setup. With AWS Fargate, there is no need to provision and manage servers, you can specify the resources for a given application and pay for them as needed, and the software enhances security through application isolation by design.

    Amazon EKS is integrated with many AWS services to provide scalability and security for your applications. These services include Elastic Load Balancing for load distribution, AWS Identity and Access Management (IAM) for authentication, Amazon Virtual Private Cloud (VPC) for isolation, and AWS CloudTrail for logging.

    Amazon EKS works by provisioning (starting) and managing the Kubernetes control plane and worker nodes for you. At a high level, Kubernetes consists of two major components: a cluster of ‘worker nodes’ running your containers, and the control plane managing when and where containers are started on your cluster while monitoring their status.

    Without Amazon EKS, you have to run both the Kubernetes control plane and the cluster of worker nodes yourself. With Amazon EKS, you provision your worker nodes using a single command in the EKS console, command-line interface (CLI), or API. AWS handles provisioning, scaling, and managing the Kubernetes control plane in a highly available and secure configuration. This removes a significant operational burden and allows you to focus on building applications instead of managing AWS infrastructure.
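The single-command provisioning above can be sketched as an eksctl cluster config; the cluster name, region, and node sizes here are hypothetical placeholders:

```yaml
# cluster.yaml -- a minimal eksctl cluster definition (hypothetical values).
# Create the cluster with: eksctl create cluster -f cluster.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-cluster
  region: us-east-1
managedNodeGroups:
  - name: workers
    instanceType: t3.medium
    desiredCapacity: 2   # worker nodes only; AWS runs the control plane
```

EKS stands up the highly available control plane across AZs itself; the config only describes the worker capacity you want attached to it.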

  • ITIL V4

    Become ITIL® 4 Foundation Certified in 7 Days by Abhinav Krishna Kaiser

    Why ITIL 4: it is the version of ITIL tailored for the digital age, where the boundary between the development stage and the operations stage is no longer merely thin but has vanished into thin air.

    It is the standard today to run IT operations. With the advent of the digital age and DevOps, the principles and the core understanding of management of services were somewhat shaken.

    Anybody can take up the ITIL 4 certification. There are no criteria for minimum experience, education, or other prerequisite certifications.

    There are 40 multiple-choice questions; every question comes with a choice of four possible answers.
    • Exam duration: 60 minutes
    • Each question carries one mark; there is no negative scoring for wrong answers.
    • You need 26 correct answers (65 percent) to pass the exam.

    ITIL 4 was announced in 2017: AXELOS, the organization behind ITIL, initiated a refresh by reaching out to about 2,000 professionals from various organizations, with the single objective of creating a framework that was agile and innovative. The outcome is ITIL 4.

    In ITIL 4, the service lifecycle is dead and no longer used. It has been replaced by the service value system and the service value chain, the new concepts that drive the delivery of services.

    ITIL 4 has introduced the concept of practices. The problem management practice, for instance, is a system as a whole whose objective is to deliver all the problem management outputs.

    In ITIL 4, a service is defined as a means of enabling value co-creation by facilitating outcomes that customers want to achieve, without the customer having to manage specific costs and risks.

    Today a service provider cannot tuck services away and deliver them to the customer in isolation. A service becomes valuable only with ample direction and feedback from the customer, the primary user of the service. Hence the definition rightly includes co-creation.

    Governance: (need to update)

    Automation: Activities that do not require cognizance, intelligence, or decision-making brain cells can theoretically be run by machines. This makes even more sense if these activities are repeatable exercises.

    ITIL 4 has taken Automation to the next level by defining a guiding principle coupling optimization and automation to allow ITIL to step through DevOps’ doors.

    ITIL 4 certification path:

    ITIL Foundation is the base; after completing it, you can pursue either of the two certification streams below.

    1. ITIL Managing Professional (MP) – It is meant for pure service-management professionals who work in technology and digital streams.
    2. ITIL Strategic Leader (SL) – It is meant for those who look outward toward the business: business needs, expectations, and everything related to them.

    Brief Overview of DevOps

    DevOps is a culture that brings together the development and operations teams. DevOps is not just a methodology for developers; operations gets its share of the benefits pie too. In DevOps we do not blame individuals; nobody is singled out for a mistake in the overall scheme of things. This culture develops a system where mistakes are identified and rectified in the development stages, well before they reach production. Hence DevOps is a cultural transformation that brings people from across disciplines under a single umbrella to collaborate as one unit, with an open mind, and to remove inefficiencies.

    The name DevOps itself has a brief history; if interested, you can read about it here.

    DevOps principles

    CALMS stands for the following:
    • Culture
    • Automation
    • Lean
    • Measurement
    • Sharing

    Culture: There is a popular saying: “Culture eats strategy for breakfast.” If you want to make a massive, mind-boggling, earth-shaking change, start by changing the culture so that it can make the change happen and adapt to the proposed new way of working. Culture is not something that can be changed with a swift flip of a switch; it is embedded in human behavior and requires an overhaul of how people behave.

    DevOps as culture:
    • Take responsibility for the entire product and not just the work that you perform.
    • Step out of your comfort zone and innovate.
    • Experiment as much as you want; there’s a safety net to catch you if you fall.
    • Communicate, collaborate, and develop affinity with the involved teams.
    • For developers especially: you build it, you run it.

    Automation: It is a key component of the DevOps methodology. The objective is to automate whatever is possible in the software delivery life cycle. The activities that can be efficiently automated are those that are repetitive and don’t require human intelligence. Tasks such as running a build or running a test script can be automated. The art of writing the code or the test scripts, however, requires human intelligence, and today’s machines are not in a position to do it. In the future, artificial intelligence could threaten even the activities that depend on humans today.

    Lean: The thinking behind the Lean methodology is to keep things simple and not to overcomplicate them. There are two parts to the Lean principle.

    1. The primary one is not to bloat the logic or the way we do things; keep it straightforward and minimal.
    2. The second part of the principle is to reduce the wastage arising from the methodology. (Defects are one of the key wastes)

    Measurement: If you seek to automate everything, then you need a system to provide feedback whenever something goes wrong. Feedback is possible only if you know what an optimal result looks like and what it doesn’t. The only way to find out whether an outcome is optimal is to measure it.

    The measurement principle provides direction on which measures to implement to keep tabs on the pulse of the overall software delivery. It is not a simple task to measure everything; often we do not even know what we should measure. With automation in place, it is extremely important that all the critical activities, and the infrastructure that supports them, be monitored and optimized for measurement.

    Sharing: The final principle is sharing, which hinges on the need for collaboration and knowledge sharing between people. If we aim to significantly hasten software delivery, it is only possible if people no longer work in silos. Knowledge, experience, thoughts, and ideas must be put out into the open for others to join in making them better. With information being transparent, there is no reason for others to worry or be sceptical about the dependencies or the outcome of the process.

    Elements of DevOps

    DevOps is not a framework; it is a set of good practices. People, process, and technology are the three elements that are common to all DevOps practices. In fact, they are the enablers to effect change in the DevOps culture. Only when the three elements come together in unison are we able to realize the complete benefits of DevOps.

    Today, people talk of DevOps through the lens of technology. They throw around several tool names and claim that they do DevOps. So, the question to ponder is whether you can really do DevOps with tools alone. The answer is no.

    All the three elements of people, process, and technology are essential to build the DevOps methodology and to achieve the objectives that are set forth before us. By the union of all three elements, we can create an unmatched synergy that can fuel developments at an unparalleled pace.

    Let’s go through each of these elements.

    People

    Let’s say that an application is developed and comes to the change advisory board (CAB) for approval. One of the parties on the CAB is the operations team. They ask pointed questions about the testing that has been performed on the software, and even though development confirms that all tests passed, the operations team tends to remain critical. Unfortunately, they have only the developers’ word to go on when the quality of the software is on the line.

    Process

    Processes are a key component in ensuring the success of any project. It is important that processes are defined first along with a functional DevOps architecture and then translated into tooling and automation. The process must always drive tools and never the other way around. Most IT projects are run on Agile project management methodologies because of the flexibility it offers in this ever-changing market.

    When we talk about Agile project management, there are a number of methodologies to pick from. Scrum, Kanban, Scrumban, Extreme Programming (XP), Dynamic Systems Development Method (DSDM), Crystal, and Feature Driven Development (FDD) are some examples.

    Technology

    Technology is the third element of DevOps and is often regarded as the most important. It is true in a sense that without automation, we cannot possibly achieve the fast results that I have shared earlier through some statistics. The number of tools that claim to support DevOps activities is enormous—too many to count.

    DevOps Practices

    DevOps has become synonymous with certain practices such as continuous integration, continuous delivery, and continuous deployment.
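A minimal sketch of what continuous integration looks like in practice, expressed as a pipeline config (GitHub Actions syntax here; the repository layout, dependency file, and test command are assumptions):

```yaml
# Hypothetical CI workflow: every push triggers a build and the test suite.
# Continuous delivery/deployment would add a deploy job gated on this one.
name: ci
on: [push]
jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install -r requirements.txt   # assumed dependency file
      - run: pytest                            # assumed test command
```

The distinction between the three practices is where the pipeline stops: continuous integration ends at tested builds, continuous delivery produces a release artifact ready to ship, and continuous deployment pushes every passing build to production automatically.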

  • Value Articles

    Family

    https://www.romper.com/life/family-traditions?utm_source=pocket-newtab-intl-en

    There is no reason to risk what you have and need for what you don’t have and don’t need.