<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Open Source Dev]]></title><description><![CDATA[Open Source Dev shares insights, tutorials, and real-world experiences from the world of open-source, cloud, and backend development. Built by devs, for devs.]]></description><link>https://blog.craftedbrain.com</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1656465986563/uUxdMX5jR.png</url><title>Open Source Dev</title><link>https://blog.craftedbrain.com</link></image><generator>RSS for Node</generator><lastBuildDate>Wed, 15 Apr 2026 16:53:39 GMT</lastBuildDate><atom:link href="https://blog.craftedbrain.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Permanent fix for Let's Encrypt SSL certificate renewal issues]]></title><description><![CDATA[Let’s Encrypt is a certificate authority that provides free SSL certificates.
SSL Certificates are small data files that digitally bind a cryptographic key to an organization’s details. When installed on a web server, it activates the padlock and the...]]></description><link>https://blog.craftedbrain.com/permanent-fix-for-lets-encrypt-ssl-certificate-renewal-issues</link><guid isPermaLink="true">https://blog.craftedbrain.com/permanent-fix-for-lets-encrypt-ssl-certificate-renewal-issues</guid><category><![CDATA[SSL Certificate]]></category><category><![CDATA[cloudflare]]></category><category><![CDATA[permanent-ssl]]></category><category><![CDATA[nginx-certificates]]></category><dc:creator><![CDATA[Divakar]]></dc:creator><pubDate>Mon, 16 Jan 2023 02:25:33 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1673834260181/ce195279-8188-44c5-b3e4-4698669fb1c1.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><a target="_blank" href="https://letsencrypt.org/">Let’s Encrypt</a> is a certificate authority that provides free SSL certificates.</p>
<p><strong>SSL Certificates</strong> are small data files that digitally bind a cryptographic key to an organization’s details. When installed on a web server, it activates the padlock and the HTTPS protocol and allows secure connections from a web server to a browser and vice-versa.</p>
<h1 id="heading-how-to-install-the-certificate-using-lets-encrypt"><strong>How to install the certificate using Let’s Encrypt?</strong></h1>
<p>You can follow the official <a target="_blank" href="https://letsencrypt.org/getting-started/">docs</a> or directly use <a target="_blank" href="https://certbot.eff.org/">Certbot</a> to install the certificates for your server (like Apache, Nginx, etc.) and OS (Linux, CentOS, etc.).</p>
<h1 id="heading-what-is-the-problem"><strong>What is the problem?</strong></h1>
<p>Let’s Encrypt certificates need to be renewed every 90 days, and once configured on your server, Let’s Encrypt starts sending you emails a few weeks before the actual expiry date. That is helpful, but also kind of annoying if you like to keep your mailbox clean.</p>
<p>So we need to manually trigger the <code>certbot renew</code> command on each server to renew SSL certificates, which is okay to some extent.</p>
<p>Let’s imagine a scenario where you have to renew multiple SSL certificates whose expiry dates are very close together. Isn’t it tedious to log in to each server just to trigger <strong>certbot</strong> to renew your certificates?</p>
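<p>The usual stop-gap is to automate renewal with a scheduler. A minimal sketch follows (the schedule and reload hook are illustrative assumptions; many certbot packages already install a systemd timer that does this for you):</p>

```shell
# Crontab entry (add with `crontab -e` on each server).
# `certbot renew` is a no-op unless a certificate is within
# 30 days of expiry, so running it twice a day is safe.
0 0,12 * * * certbot renew --quiet --post-hook "systemctl reload nginx"
```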
<p><strong>So, can’t we have a permanent solution?</strong> The obvious answer is, <strong>YES WE CAN</strong>!</p>
<h1 id="heading-whats-the-solution"><strong>What's the solution?</strong></h1>
<h3 id="heading-step-1-obtain-permanent-keys-from-cloudflare">Step 1: Obtain Permanent Keys from Cloudflare</h3>
<p>The first step in setting up SSL with permanent keys from Cloudflare is to obtain the necessary keys. This can be done by navigating to the "<strong>SSL/TLS</strong>" tab in your Cloudflare account, and selecting "Create Certificate" under the "Origin Certificates" section. You will then need to enter the domain name for which you want to generate the certificate and click "Next".</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1673835365493/f384b6ce-c57f-458b-86b5-5e81700b65b4.png" alt class="image--center mx-auto" /></p>
<p>Click on <strong>Create Certificate</strong>, select the default <strong>RSA (2048)</strong> key type, and choose your certificate validity period <em>(you will not need to renew your certificates during this period)</em>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1673835507301/e248face-b51f-4aee-a7e5-a7fcb6de2b6a.png" alt class="image--center mx-auto" /></p>
<p>Make sure to copy the keys and store them in secure vault storage.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1673835621985/47cb012a-f1e3-4d35-995e-f87f9fd148e4.png" alt="All these keys will be pruned while publishing the article" class="image--center mx-auto" /></p>
<h3 id="heading-step-2-configure-nginx-reverse-proxy">Step 2: Configure Nginx (Reverse Proxy)</h3>
<p>Once you have obtained the permanent keys, you can configure Nginx to use them by adding the following code to your Nginx configuration file, typically located at <code>/etc/nginx/nginx.conf</code>:</p>
<pre><code class="lang-nginx"><span class="hljs-section">server</span> {
    <span class="hljs-attribute">listen</span> <span class="hljs-number">443</span> ssl;
    <span class="hljs-attribute">server_name</span> example.com;

    <span class="hljs-attribute">ssl_certificate</span> /path/to/your_certificate.crt;
    <span class="hljs-attribute">ssl_certificate_key</span> /path/to/your_private_key.key;
}
</code></pre>
<p>This code configures Nginx to listen on port 443 (the default port for HTTPS) and use the specified SSL certificate and private key for the domain <a target="_blank" href="http://example.com">example.com</a>. Replace the path to your certificate and private key and domain name with the actual values from your Cloudflare account.</p>
<h3 id="heading-step-3-test-the-configuration">Step 3: Test the Configuration</h3>
<p>Once you have configured Nginx to use the permanent keys from Cloudflare, you can test the configuration by visiting your website using https://. If the configuration is set up correctly, you should see the padlock icon in the browser, indicating that the connection is secure.</p>
<p>In conclusion, using permanent keys for SSL from Cloudflare in Nginx is a simple process that can be done by obtaining the necessary keys from Cloudflare, configuring Nginx to use them, and testing the configuration. This will provide an additional layer of security for your website and ensure that sensitive information is protected.</p>
<p><strong><em>Note</em></strong>: You will need to have Nginx installed on your server, and this is just a basic example. You should also consider configuring additional security features like HSTS (HTTP Strict Transport Security) and OCSP stapling (Online Certificate Status Protocol) to further secure your website.</p>
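<p>As a sketch of those extras, the server block above could be extended like this (the header value and protocol choices are illustrative, not prescriptive; note that OCSP stapling mainly applies to publicly trusted certificates, so it may be moot for Cloudflare Origin CA certificates, which only Cloudflare's edge trusts):</p>

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /path/to/your_certificate.crt;
    ssl_certificate_key /path/to/your_private_key.key;

    # Restrict connections to modern TLS versions
    ssl_protocols TLSv1.2 TLSv1.3;

    # HSTS: instruct browsers to always use HTTPS for this host
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
}
```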
]]></content:encoded></item><item><title><![CDATA[SSO Support & Authentication with Portainer using Microsoft OAuth provider]]></title><description><![CDATA[So what is OAuth?
Many of us come into contact with OAuth when browsing around the Web, and most of us aren’t even aware of its existence. OAuth (Open Authorization) is a system that grants third-party websites limited access to user accounts, for ex...]]></description><link>https://blog.craftedbrain.com/sso-portainer</link><guid isPermaLink="true">https://blog.craftedbrain.com/sso-portainer</guid><category><![CDATA[oauth]]></category><category><![CDATA[SSO]]></category><category><![CDATA[Portainer]]></category><category><![CDATA[azure_oauth]]></category><category><![CDATA[azure_SSO]]></category><dc:creator><![CDATA[Divakar]]></dc:creator><pubDate>Mon, 16 Jan 2023 01:42:10 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1673831364205/4da56c60-3292-4fa9-8148-1f61558c24f2.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-so-what-is-oauth">So what is OAuth?</h2>
<p>Many of us come into contact with OAuth when browsing around the Web, and most of us aren’t even aware of its existence. OAuth (Open Authorization) is a system that grants third-party websites limited access to user accounts, for example, your Twitter or Facebook accounts.</p>
<p>It lets visitors interact within the site without requiring new account registration or releasing your username and password to third parties.</p>
<p>In this guide, I’d like to introduce the concept of OAuth and how it applies to developers. There are a lot of technical details involved in implementing an OAuth application.</p>
<p><img src="https://user-images.githubusercontent.com/290496/48670041-e5803e00-eb53-11e8-91a9-3776276d6bf6.png" alt="Introduce OAuth 2.0 — Authlib 1.2.0 documentation" class="image--center mx-auto" /></p>
<p>OAuth is an open-standard authorization protocol that allows users to share their private resources (e.g. data, files) stored on one site with another site without having to give away their credentials, typically a password. In this article, we will show you how to integrate Microsoft OAuth into Portainer, a popular open-source tool for managing containerized applications.</p>
<h4 id="heading-prerequisites-you-must-have-deployed-and-exposed-your-portainer-application-over-https">Prerequisites: You must have deployed and exposed your Portainer application over HTTPS.</h4>
<h3 id="heading-step-1-register-your-application-with-azure-active-directory-aad">Step 1: Register your application with Azure Active Directory (AAD)</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1673831892361/c35fbe2c-1784-4a20-916f-10dc43edb4de.png" alt class="image--center mx-auto" /></p>
<p>Log in to the <a target="_blank" href="https://portal.azure.com/#view/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/~/RegisteredApps">Azure Portal</a>, choose App registrations, click on New registration to create a new app, and update your application URL.</p>
<p>Once done, click on the API permissions menu and choose the permissions below to grant access to the OAuth app (enable only the access that is actually needed), then hit save.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1673832433282/80966f92-eb91-455e-8ee1-3f9f0c233a96.png" alt class="image--center mx-auto" /></p>
<p>Finally, click on the Certificates &amp; Secrets menu to create a new client secret, then copy and store its value securely.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1673832578938/e6baa0d6-12c0-4a28-aa91-71cba7c4c0b4.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-step-2-configure-portainer-with-your-aad-application">Step 2: Configure Portainer with your AAD application</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1673832699460/d4655942-e131-4741-a20f-6533d5e7562e.png" alt class="image--center mx-auto" /></p>
<p>After configuring Portainer with your AAD application, you can now enable OAuth in the Portainer settings. This can be done by navigating to the "Authentication" tab and selecting "Microsoft" as the authentication method.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1673832909343/02063e0e-6ae9-40e4-95b4-7d8376336277.png" alt class="image--center mx-auto" /></p>
<p>Enter your directory (tenant) ID in the Tenant ID field and the client secret in the Application key field. For the Application ID, head over to the Azure portal and click on your app registration to view its Application (client) ID.</p>
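<p>For reference, the Microsoft identity platform endpoints behind these fields follow a standard pattern (shown here as a typical v2.0 setup, an assumption rather than Portainer-specific guidance; replace <code>{tenant-id}</code> with your own directory ID):</p>

```
Authorization URL: https://login.microsoftonline.com/{tenant-id}/oauth2/v2.0/authorize
Access token URL:  https://login.microsoftonline.com/{tenant-id}/oauth2/v2.0/token
Resource URL:      https://graph.microsoft.com/v1.0/me
Redirect URL:      your public Portainer URL (must match the redirect URI registered in AAD)
```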
<h3 id="heading-step-3-test-the-integration">Step 3: Test the integration</h3>
<p>To test that the integration is working correctly, you can log in to Portainer with your Microsoft account. If everything is set up correctly, you should be able to access the Portainer dashboard without any issues.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1673833169979/523b690a-0c3a-4023-8abd-a9b751e66d5b.png" alt class="image--center mx-auto" /></p>
<p>In summary, integrating Microsoft OAuth into Portainer is a straightforward process: registering your application with Azure Active Directory, configuring Portainer with your AAD application, enabling OAuth in the Portainer settings, and testing the integration. This will provide an added level of security and ease of access for users.</p>
]]></content:encoded></item><item><title><![CDATA[Microsoft Azure: Cloud Compute Services]]></title><description><![CDATA[What do we mean by “cloud?”
"The cloud" refers to servers that are accessed over the Internet, and the software and databases that run on those servers. Cloud servers are located in data centers all over the world.

Characteristics of cloud computing...]]></description><link>https://blog.craftedbrain.com/microsoft-azure-cloud-compute-services</link><guid isPermaLink="true">https://blog.craftedbrain.com/microsoft-azure-cloud-compute-services</guid><category><![CDATA[Azure]]></category><category><![CDATA[microsoft azure certification]]></category><category><![CDATA[azure-devops]]></category><category><![CDATA[microsoft azure training]]></category><category><![CDATA[azure-compute]]></category><dc:creator><![CDATA[Divakar]]></dc:creator><pubDate>Mon, 08 Aug 2022 03:32:37 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1659925041281/Toy3UOAgs.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-what-do-we-mean-by-cloud">What do we mean by “cloud?”</h1>
<p>"The cloud" refers to servers that are accessed over the Internet, and the software and databases that run on those servers. Cloud servers are located in data centers all over the world.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1659925228488/GM49IokVo.png" alt="image.png" class="image--center mx-auto" /></p>
<h2 id="heading-characteristics-of-cloud-computing">Characteristics of cloud computing</h2>
<ul>
<li>Network access to cloud services</li>
<li>Pay only what you need from a measured service</li>
<li>Multi-tenancy – many customers in the same space</li>
<li>On-demand self-service to scalable resources</li>
<li>High bandwidth links to and between datacenters</li>
</ul>
<p><strong>Who is using cloud computing?</strong> Organizations of every type, size, and industry are using the cloud for a wide variety of use cases, such as data backup, disaster recovery, email, virtual desktops, software development and testing, big data analytics, and customer-facing web applications.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1659925349472/6m-p4owQR.png" alt="image.png" class="image--center mx-auto" /></p>
<p>The Microsoft Azure platform provides many of the features below by default, out of the box.</p>
<h4 id="heading-scalability">Scalability</h4>
<ul>
<li>The ability of a system to scale by adding or removing resources.</li>
<li>Resources can be anything, including VMs, database storage, and more.</li>
</ul>
<h4 id="heading-elasticity">Elasticity</h4>
<ul>
<li>Elasticity is the ability of a system to scale dynamically.</li>
</ul>
<h4 id="heading-agility">Agility</h4>
<ul>
<li>The ability to react quickly and allocate and deallocate resources in a very short time.</li>
<li>In the on-premises world, requesting and allocating a resource might take weeks to months, depending on the resource.</li>
<li>In the cloud, resources spin up in minutes; at most, heavy resources take a few hours.</li>
</ul>
<h4 id="heading-fault-tolerance">Fault Tolerance</h4>
<ul>
<li>The ability of a system to remain up and running during component and service failures.</li>
<li>Major Azure cloud services have built-in fault tolerance.</li>
</ul>
<h4 id="heading-disaster-recovery">Disaster recovery</h4>
<ul>
<li>Disaster recovery is the ability of a system to recover from an event that has taken down the service.</li>
<li>Disaster recovery can easily be set up by configuring replication across different regions.</li>
</ul>
<h4 id="heading-high-availability">High Availability</h4>
<ul>
<li>Availability is a measure of system uptime. </li>
</ul>
<h2 id="heading-large-public-cloud-services-have-near-global-reach">Large public cloud services have near-global reach</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1659925498226/iX-K5w_OA.png" alt="image.png" class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1659925441267/SB2kYFDbL.png" alt="image.png" class="image--center mx-auto" /></p>
<h2 id="heading-service-model">Service Model</h2>
<p>Azure offers three service models:</p>
<ul>
<li>IaaS (Infrastructure as a service)</li>
<li>PaaS (Platform as a service)</li>
<li>SaaS (Software as a service)</li>
</ul>
<h2 id="heading-infrastructure-as-a-service-iaas">Infrastructure as a Service (IaaS)</h2>
<p>With Infrastructure as a Service customers access raw computing resources in the form of storage space, various sizes of virtual machine, networking services, and other related management tools. </p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1659926000501/d7KgxI75P.PNG" alt="1.PNG" class="image--center mx-auto" /></p>
<ul>
<li>Customers pay for time and space on one or more servers.</li>
<li>Customers are responsible for installing and managing their own operating system and software.</li>
</ul>
<p><em>Examples:</em> Azure Stack, ExpressRoute</p>
<h2 id="heading-platform-as-a-service-paas">Platform as a Service (PaaS)</h2>
<p>Platform as a Service offers customers direct access to services rather than to raw computing resources for application design and deployment.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1659926085319/SWuBZ7mK5.PNG" alt="2.PNG" class="image--center mx-auto" /></p>
<ul>
<li>The PaaS model provides metered (pay as you go) access to services. </li>
<li>The cloud provider is responsible for the individual virtual machines and for managing basic resources.</li>
</ul>
<p><em>Examples:</em> Azure App Service &amp; IoT device analytics</p>
<h2 id="heading-cloud-service-models">Cloud service models</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1659926159032/4k7cqJ0y5.PNG" alt="3.PNG" class="image--center mx-auto" /></p>
<h2 id="heading-azure-compute-service">Azure Compute service</h2>
<p><em>Let's first understand what a compute service is...</em> Compute resources are infrastructure resources that provide processing capabilities in the cloud. For example, virtual clusters, virtual resource pools, and physical servers are all compute resources.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1659926854812/XjOALn_zR.png" alt="image.png" class="image--center mx-auto" /></p>
<p>Azure compute provides the infrastructure you need to run your apps. Tap into compute capacity in the cloud and scale on demand. Containerize your applications, deploy Windows and Linux virtual machines (VMs), and take advantage of flexible options for migrating VMs to Azure. With comprehensive support for hybrid environments, deploy how and where you want to. Azure compute also includes a full-fledged identity solution, so you gain managed endpoint protection and Active Directory support, which helps secure access to on-premises and cloud apps.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1659926608805/WzCF1wpUv.png" alt="image.png" class="image--center mx-auto" /></p>
<h2 id="heading-how-to-choose-an-azure-compute-service">How to choose an Azure compute service</h2>
<p>Azure offers a number of ways to host your application code. The term compute refers to the hosting model for the computing resources that your application runs on. If your application consists of multiple workloads, evaluate each workload separately. A complete solution may incorporate two or more compute services.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1659927100497/CIhDF4kcS.png" alt="image.png" class="image--center mx-auto" /></p>
<ul>
<li>“Lift and shift” is a strategy for migrating a workload to the cloud without redesigning the application or making code changes (this is also known as re-hosting). If you lift and shift without re-architecting, you should reserve your compute instances to reduce cost whilst you look to rearchitect later, as you’re already aware of the resource utilization on your workloads.</li>
<li>Cloud optimized is a strategy for migrating to the cloud by refactoring an application to take advantage of cloud-native features and capabilities.</li>
</ul>
<h3 id="heading-azure-virtual-machines">Azure Virtual machines</h3>
<p>The virtual machine is the initial IaaS stage among Azure compute options. It is the most common compute service and is widely used on every cloud platform. When we create a virtual machine in the Azure portal, we have to deal with some important configuration parameters:</p>
<ul>
<li>The network interface gets a public and a private IP address.</li>
<li>The virtual machine can have multiple disks mounted as needed.</li>
</ul>
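<p>These parameters map directly onto an Azure CLI sketch of creating a VM (resource names are hypothetical, and image aliases vary with CLI versions):</p>

```shell
# Create a resource group, then a small Ubuntu VM in it.
az group create --name myResourceGroup --location eastus

az vm create \
  --resource-group myResourceGroup \
  --name myVM \
  --image Ubuntu2204 \
  --size Standard_B1s \
  --admin-username azureuser \
  --generate-ssh-keys   # creates a key pair if none exists
```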
<h4 id="heading-some-important-characteristics-of-the-virtual-machine-are">Some important characteristics of the virtual machine are:</h4>
<ul>
<li>You don’t have to manage the underlying physical servers.</li>
<li>Deploy any type of workload</li>
<li>You can stop the virtual machine whenever you don’t want the virtual machine to run</li>
<li>You can also control the traffic flow using network security groups</li>
<li>You can also monitor different underlying metrics like CPU Utilization and Network Utilization
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1659927125504/ZBsEyH7Df.png" alt="image.png" class="image--center mx-auto" /></li>
</ul>
<h3 id="heading-pros-of-azure-virtual-machines">Pros of Azure Virtual machines</h3>
<ul>
<li>Scalability</li>
<li>Data Security/Compliance</li>
<li>High Availability</li>
<li>Cost-Effective</li>
</ul>
<h3 id="heading-cons-of-azure-virtual-machines">Cons of Azure Virtual machines</h3>
<ul>
<li>Requires Management</li>
<li>Requires Platform Expertise</li>
</ul>
<p>Also, read <a target="_blank" href="https://docs.microsoft.com/en-us/azure/virtual-machine-scale-sets/overview">Azure scale sets</a> (Virtual Machine Scale Set) that play an important role in cloud computing which provides more elasticity and scalability.</p>
<h2 id="heading-azure-app-services">Azure App Services</h2>
<p>Setting up a web application presents many challenges: <strong>Scaling, Load-Balancing, Patch Management, Configuration Management, Security/Compliance</strong>, to name a few. For applications to run without issues and without downtime, it is important to deploy them while keeping a minimum capacity in service. In addition, it is important to keep the OS and platform versions up to date. However, that is a cumbersome task requiring a lot of operational overhead and expertise.</p>
<p>To make an application highly available in Microsoft Azure, a number of cloud services must be implemented. An application gateway, for instance, is important for distributing traffic. Virtual Machines that scale based on demand are also important. Resources must be provisioned individually, yet integrating them can take a large amount of time.</p>
<p><strong>Azure App Service</strong> helps solve these issues and reduces operational overhead so that developers can concentrate on web development instead of spending more time on infrastructure setup. </p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1659927350863/NBLD2xj22.png" alt="image.png" class="image--center mx-auto" /></p>
<p>When we choose a service to deploy our application, it's often a choice between control, flexibility, and ease of use. Cloud services offer greater control over the apps but increase developer responsibility.</p>
<p>Azure App Service, on the other hand, is a Platform as a Service (PaaS) that makes it quick to build, deploy, and scale applications. It helps build enterprise-ready applications quickly, accelerating time-to-go-live, all while reducing the overall day-to-day responsibility of managing the platform.</p>
<h3 id="heading-pros-of-azure-app-service">Pros of Azure App Service</h3>
<ul>
<li>Built-in HTTPS support</li>
<li>Multiple languages and frameworks</li>
<li>Production Ready Environment</li>
<li>DevOps integration</li>
<li>Security and compliance</li>
</ul>
<h3 id="heading-cons-of-azure-app-service">Cons of Azure App Service</h3>
<ul>
<li>Pricing is High</li>
<li>Fixed Domain Name (apps deploy under the default azurewebsites.net domain unless you bring a custom one)</li>
<li>No Remote Desktop</li>
<li>No Performance Counters</li>
</ul>
<h2 id="heading-azure-batch-service">Azure Batch Service</h2>
<p>Most enterprise and large applications run lots of automated tasks in the background, which can include anything like processing data, producing new output, running calculations, processing billing, testing software, etc. In such applications, designing for high-performance computing (HPC) and running processes in parallel is equally important to get a job done.</p>
<p>The Azure Batch service provides the facility to run large-scale parallel and high-performance computing (HPC) batch jobs efficiently in Azure without capital investment. It is recommended for large-scale execution of parallel tasks like image analysis, data ingestion, data processing, software test-case execution, and much more.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1659927549017/GguGlNYSJ.png" alt="image.png" class="image--center mx-auto" /></p>
<p><strong>How does it work?</strong>
You need a Batch account to use the Batch service. Most Batch solutions also use an associated Azure Storage account for file storage and retrieval.</p>
<ul>
<li>Upload the input files, along with the applications that will process them, to an Azure Storage account. These input files can be anything containing data to process.</li>
<li>Create a pool of nodes (virtual machines) to execute the processes. This also covers the configuration of the machines, such as OS, node size, etc.</li>
</ul>
<p>Then, create a job and its associated tasks to perform the actions. Azure batch service automatically schedules the job in the pool of nodes to execute it.</p>
<ul>
<li>Before executing the tasks, the service downloads the files and applications from Azure Storage onto the nodes.</li>
<li>Once execution of the tasks has started, the application can connect to the Batch service and monitor the progress of execution over HTTPS.</li>
<li>As the job completes, the Azure Batch service uploads the output to Azure Storage. We can also fetch the output files directly from the node's file system.</li>
<li>Then our client application can download the output files from Azure storage.</li>
</ul>
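<p>The workflow above maps onto the Azure CLI roughly as follows (a hedged sketch with hypothetical account, pool, and job names; VM sizes, images, and node-agent SKUs are examples and vary by region and CLI version):</p>

```shell
# Create a Batch account and log the CLI in to it.
az batch account create --name mybatchaccount \
  --resource-group myResourceGroup --location eastus
az batch account login --name mybatchaccount \
  --resource-group myResourceGroup

# Create a pool of compute nodes (image/SKU values are examples).
az batch pool create --id mypool --vm-size Standard_A1_v2 \
  --target-dedicated-nodes 2 \
  --image "canonical:0001-com-ubuntu-server-jammy:22_04-lts" \
  --node-agent-sku-id "batch.node.ubuntu 22.04"

# Create a job on the pool and add a task to it.
az batch job create --id myjob --pool-id mypool
az batch task create --job-id myjob --task-id task1 \
  --command-line "/bin/bash -c 'echo hello from batch'"
```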
<h3 id="heading-pros-of-azure-batch">Pros of Azure Batch</h3>
<ul>
<li>Cost-effective – with proper pool management you only pay for the time the workload is executed. </li>
<li>Elastic – you can easily match VM configuration and workflow to your demands. You can choose anything from general-purpose A-series VMs to N-series VMs with hundreds of gigabytes of RAM and dozens of processors.</li>
</ul>
<h3 id="heading-cons-of-azure-batch">Cons of Azure Batch</h3>
<ul>
<li>Complex – Azure Batch system might be difficult to manage. Setting up the whole system: pools, jobs, and tasks might be confusing for beginners and it needs some experience to make Azure batch reliable and cost-effective.</li>
<li>Limited support – lack of in-depth documentation and poor developer support. As Azure Batch is still under active development by Microsoft and the service is used mostly by big companies, it has not yet accumulated thorough documentation and community support.</li>
</ul>
<h2 id="heading-azure-functions">Azure Functions</h2>
<p>Azure Functions is an event-driven serverless compute platform that lets you implement code triggered by events that happen in Azure or in third-party services. With Azure Functions you don't need to explicitly provision or manage infrastructure in order to run the event-triggered code.</p>
<p>Azure Functions can be used to achieve decoupling, high throughput, reusability, and sharing. Being reliable, it can also be used in production environments.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1659928048746/gEddnsVbN.png" alt="image.png" class="image--center mx-auto" /></p>
<h3 id="heading-how-do-you-call-azure-function">How Do You Call Azure Function?</h3>
<p>Azure Functions are called when triggered by events from other services. Being event-driven, the platform can run code triggered by events occurring in any third-party service or on-premises system.</p>
<h3 id="heading-how-long-can-azure-functions-run">How Long Can Azure Functions Run?</h3>
<p>For any Azure Function, a single function execution has a default maximum of 5 minutes to execute (on the Consumption plan). If the function runs longer than the configured timeout, the Azure Functions runtime can end the process at any point after the timeout has been reached.</p>
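<p>If you need longer than the default, the timeout is configurable in the function app's <code>host.json</code>; a minimal config sketch (limits depend on the hosting plan, e.g. up to 10 minutes on the Consumption plan):</p>

```json
{
  "version": "2.0",
  "functionTimeout": "00:10:00"
}
```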
<h2 id="heading-azure-container-instances">Azure Container Instances</h2>
<p>Azure Container Instances allows you to run a container without provisioning virtual machines or having to use container orchestrators like Kubernetes or DC/OS. Container Instances are useful when you just want a container without orchestration.</p>
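<p>For example, running a single public container with the Azure CLI could look like this (resource names and the DNS label are hypothetical):</p>

```shell
# Launch a container group with one public container.
az container create \
  --resource-group myResourceGroup \
  --name mycontainer \
  --image mcr.microsoft.com/azuredocs/aci-helloworld \
  --ports 80 \
  --dns-name-label aci-demo-12345 \
  --location eastus

# Inspect its public FQDN and provisioning state.
az container show \
  --resource-group myResourceGroup \
  --name mycontainer \
  --query "{fqdn:ipAddress.fqdn,state:provisioningState}" \
  --output table
```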
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1659928506299/_HUqM12Ub.png" alt="image.png" class="image--center mx-auto" /></p>
<h1 id="heading-pros-of-using-azure-container-instances">Pros of using Azure Container Instances</h1>
<ul>
<li>Faster startup times</li>
<li>Full Container access</li>
<li>Compliant deployments<ul>
<li>Hypervisor-level security</li>
<li>Customer data (Protection)</li>
</ul>
</li>
</ul>
<h1 id="heading-cons-of-using-azure-container-instances">Cons of using Azure Container Instances</h1>
<ul>
<li>Hard to orchestrate multiple containers.</li>
<li>Hard to manage data flow or network access across them.</li>
</ul>
<p>Since managing multiple containers at runtime is a bit harder, we can use Azure Container Apps, which supports multiple containers and lets them integrate with each other and the platform more easily. To learn more about it, <a target="_blank" href="https://azure.microsoft.com/en-us/services/container-apps/">click here</a>.</p>
<h2 id="heading-azure-kubernetes-service">Azure Kubernetes Service</h2>
<p><strong>Kubernetes</strong> is a fast-growing platform for managing containerized applications, storage, and networking components. It allows developers and administrators to focus on application workloads, not infrastructure components. Kubernetes provides a convenient, declarative way to deploy large numbers of containers, with a powerful set of APIs for management tasks. </p>
<p>Kubernetes can be complex to install and maintain, especially when running in production and at an enterprise scale. To reduce the complexity of key management and deployment operations, such as scalability and Kubernetes updates, you can use Azure Kubernetes Service (AKS), which offers managed Kubernetes services. To simplify the process, Azure manages the AKS control plane, and customers pay only for the AKS nodes the application runs on. AKS is based on the Azure Kubernetes Service Engine, which was released by Microsoft as open source.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1659928733484/jN-dVR9sn.png" alt="image.png" class="image--center mx-auto" /></p>
<p>The reference architecture is composed of: </p>
<p><strong>Azure Kubernetes Service (AKS)</strong>—at the center of the architecture is AKS.</p>
<p><strong>Kubernetes cluster</strong>—a cluster running your workloads, deployed on AKS. With AKS you only manage agent nodes; AKS assumes responsibility for the Kubernetes control plane.</p>
<p><strong>Virtual network</strong>—AKS creates a virtual network in which agent nodes can be deployed. In advanced scenarios, you can create a virtual network first, to give you more control over configuration of subnets, local connections, IP addresses, etc.</p>
<p><strong>Ingress</strong>—the ingress provides an HTTP/HTTPS path to access cluster services. Behind it, you will typically deploy an API Gateway to manage authentication and authorization.</p>
<p><strong>Azure Load Balancer</strong>—created when the NGINX ingress controller is implemented. Used to route incoming traffic to the ingress.</p>
<p><strong>External data storage</strong>—microservices are usually stateless and save data to external data stores, such as relational databases like Azure SQL Database or NoSQL stores like Cosmos DB.</p>
<p><strong>Azure Active Directory (AD)</strong>—AKS has its own Azure AD identity, used to generate and control Azure resources for Kubernetes deployments. In addition to these mechanisms, Microsoft recommends using Azure AD to establish user authentication in client applications that use the Kubernetes cluster.</p>
<p><strong>Azure Container Registry (ACR)</strong>—used to store your organization’s Docker images and use them to deploy containers to the cluster. ACR can also leverage authentication by Azure AD. Another option is to store Docker images in a third party registry, like Docker Hub.</p>
<h3 id="heading-azure-kubernetes-service-use-cases">Azure Kubernetes Service Use Cases:</h3>
<p>We’ll take a look at different use cases where AKS can be used.</p>
<ul>
<li><strong>Migration of existing applications:</strong> You can easily migrate existing apps to containers and run them with Azure Kubernetes Service. You can also control access via Azure AD integration and SLA-based Azure Services like Azure Database using Open Service Broker for Azure (OSBA).</li>
<li><strong>Simplifying the configuration and management of microservices-based Apps:</strong> You can also simplify the development and management of microservices-based apps as well as streamline load balancing, horizontal scaling, self-healing, and secret management with AKS. </li>
<li><strong>Bringing DevOps and Kubernetes together:</strong> AKS is also a reliable resource to bring Kubernetes and DevOps together for securing DevOps implementation with Kubernetes. Bringing both together, it improves the security and speed of the development process with Continuous Integration and Continuous Delivery (CI/CD) with dynamic policy controls.</li>
<li><strong>Ease of scaling:</strong> AKS can also be applied in many other use cases, such as easy scaling by combining Azure Container Instances (ACI) with AKS. You can use AKS virtual nodes to provision pods inside ACI that start within a few seconds, enabling AKS to run with the required resources. If your AKS cluster runs out of resources, it will scale out additional pods automatically, without any additional servers to manage in the Kubernetes environment.</li>
<li><strong>Data streaming:</strong> AKS can also be used to ingest and process real-time data streams with data points via sensors and perform quick analysis.</li>
</ul>
<h2 id="heading-azure-spring-service">Azure Spring Service</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1659929109973/61xkPrfR2.png" alt="image.png" class="image--center mx-auto" /></p>
<p>Spring Cloud Azure is an open-source project that provides seamless Spring integration with Azure services. It gives developers a Spring-idiomatic way to connect to and consume Azure services, needing only a few lines of configuration and minimal code changes. Once you’re ready to run your Spring app in the cloud, we recommend Azure Spring Cloud. Azure Spring Cloud is a fully managed Spring Cloud service, built and supported by the same team as Spring Cloud Azure.</p>
<p>It is the result of a joint effort by Microsoft and VMware to provide an easy development experience when building cloud-native applications with Spring Boot and Spring Cloud while integrating with Azure cloud components.</p>
<h3 id="heading-why-use-azure-spring-cloud">Why use Azure Spring Cloud?</h3>
<p>Being able to view your data in a single UI makes troubleshooting errors and issues much easier. Now, Spring Boot developers can enjoy that benefit in New Relic One. With Microsoft Azure’s latest integration, you can simply send your application data directly to New Relic One.</p>
<p>Deployment of applications to Azure Spring Cloud has so many benefits, such as:</p>
<ul>
<li>Efficiently migrate existing Spring apps and manage cloud scaling and costs.</li>
<li>Modernize apps with Spring Cloud patterns to improve agility and speed of delivery.</li>
<li>Run Java at cloud scale and drive higher usage without complicated infrastructure.</li>
<li>Develop and deploy rapidly without containerization dependencies.</li>
<li>Monitor production workloads efficiently and effortlessly.</li>
</ul>
<h2 id="heading-azure-service-fabric">Azure Service Fabric</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1659929030147/kNKDNzTIT.png" alt="image.png" class="image--center mx-auto" /></p>
<p>Azure Service Fabric handles infrastructure needs, deployment, and scaling, allowing developers to spend more time on features. Service Fabric powers core Azure infrastructure and other Microsoft services, and you can use this technology in your own software solutions to achieve high availability, better reliability, scalability, and performance. As a distributed microservices platform, Service Fabric provides its own application model and development cycle: you can create Service Fabric clusters from the Azure Portal or the CLI and develop container-based Service Fabric microservices solutions.</p>
<h2 id="heading-security-responsibilities">Security responsibilities</h2>
<p><strong>Security</strong> is one of the most important aspects of any architecture. Good security provides confidentiality, integrity, and availability assurances against deliberate attacks and abuse of your valuable data and systems. Losing these assurances can harm your business operations and revenue, and your organization's reputation.</p>
<p>The various cloud services require different levels of customer engagement and <strong>responsibility for security</strong>. </p>
<p>Here is a link to the security best practices to be followed in Azure: <a target="_blank" href="https://docs.microsoft.com/en-us/azure/architecture/guide/security/security-start-here#best-practices">click here</a></p>
<p>For an overview of Azure security, <a target="_blank" href="https://azure.microsoft.com/en-in/explore/security/">click here</a>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1659926361757/llcxhTo6u.png" alt="image.png" class="image--center mx-auto" /></p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>With more than 200 services and numerous benefits, Microsoft Azure is undoubtedly one of the fastest-growing cloud computing platforms being adopted by businesses. Microsoft Azure’s total revenue was expected to surpass $19 billion by 2020. This growth in the adoption of Azure by businesses is creating various opportunities for professionals well versed in this technology.</p>
<p>So, if you are interested in a career in Azure, this is the right time to jump in. The best way to start your career in Azure is by getting certified with Azure.</p>
<p><em>I hope you enjoyed this post, and that you'll come back for the next one!</em></p>
<p>Feel free to subscribe to my email newsletter for future updates and connect with me on <a target="_blank" href="https://github.com/rexdivakar">GitHub</a> and <a target="_blank" href="https://twitter.com/rex_divakar">Twitter</a>.</p>
]]></content:encoded></item><item><title><![CDATA[Rust Vs Python]]></title><description><![CDATA[Python and Rust are two popular programming languages used to write code and develop applications. While Python is an established and almost ubiquitous programming language, Rust is more of an up-and-coming language which is quickly growing popular i...]]></description><link>https://blog.craftedbrain.com/rust-vs-python</link><guid isPermaLink="true">https://blog.craftedbrain.com/rust-vs-python</guid><category><![CDATA[Rust]]></category><category><![CDATA[Python]]></category><category><![CDATA[rust-vs-python]]></category><dc:creator><![CDATA[Divakar]]></dc:creator><pubDate>Sat, 02 Jul 2022 15:30:31 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1656774052757/n9LRGo2NO.PNG" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>Python and Rust are two popular programming languages used to write code and develop applications. While Python is an established and almost ubiquitous programming language, Rust is more of an up-and-coming language which is quickly growing popular in the software developer community.</strong></p>
<p><em>This article will compare the features of Python and Rust, as well as the pros and cons of each, so you can decide which one will work best for your next project. First, let’s define each of these languages.</em></p>
<h2 id="heading-what-is-rust">What is Rust?</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1656774311438/OmXwxn0bk.png" alt="rust.png" class="image--center mx-auto" /></p>
<p>Rust, first released in 2010, was designed as a safer alternative to C++. The language is open source. Rust's performance is comparable to C and C++. Rust uses curly brackets to delimit blocks, so indentation is not significant at all. Memory management is done through the RAII convention in Rust. The Rust compiler is able to infer the type of a variable, argument, or function result from the context or syntax in which it appears. Typestate has since been removed from Rust; similar guarantees can be achieved through the branding pattern.</p>
<p>There is a builder pattern in Rust that allows encoding the current state of an object into the type of that object. Rust does not have classes as such; it works with type structures (structs) and implementations (impl blocks). There were many breaking changes as early versions of Rust were updated, which made Rust less popular with some developers. Polymorphism is supported in Rust through traits and generics rather than classical inheritance. There is no automated garbage collection in Rust. The split between safe Rust and unsafe Rust lets users who choose Rust for their development stay on the safer side.</p>
<pre><code class="lang-rust">fn main() {
    println!("Hello World!");
}
</code></pre>
<p><code>println!</code> is the macro in this program.</p>
<p>Even though Rust is a newer language compared to Python, it has quickly gained popularity within the developer community and is the most loved technology, according to the 2021 StackOverflow developer survey. Rust can also be used in many different domains such as:</p>
<ul>
<li>System developments</li>
<li>Web applications</li>
<li>Embedded systems</li>
<li>Blockchain</li>
<li>Game engines</li>
</ul>
<h3 id="heading-advantages-of-rust">Advantages of Rust</h3>
<ul>
<li>Rust is performance-oriented compared to other languages with its fast and memory-efficient architecture with no runtime or garbage collection.</li>
<li>Enforces strict safe memory allocations and secure coding practices.</li>
<li>Direct safe control over low-level resources. (Comparable to C/C++)</li>
</ul>
<h3 id="heading-disadvantages-of-rust">Disadvantages of Rust</h3>
<ul>
<li>Relatively higher learning curve compared to languages like Python. A higher degree of coding knowledge is required to use Rust efficiently.</li>
<li>Low level of monkey patching support.</li>
<li>The compiler can be slow compared to other languages.</li>
</ul>
<h2 id="heading-what-is-python">What is Python?</h2>
<p>Python is a programming language designed to help developers work more efficiently and integrate systems more effectively. Like Rust, Python is multiparadigm and designed to be extensible. You can drop down to lower-level API calls, such as the CPython C API, if speed is paramount.</p>
<p>Python, which dates all the way back to 1991 when it was introduced by Guido van Rossum, is notable for its code readability and its elimination of semicolons and curly brackets.</p>
<p>Besides its extensible nature, Python is an interpreted language, which makes it slower than most compiled languages. As you might expect given its maturity, Python has a large ecosystem of libraries and a large, dedicated community.</p>
<h3 id="heading-advantages-of-python">Advantages of Python</h3>
<ul>
<li><p><strong>Python has a relatively smaller learning curve compared to other languages.</strong> It can provide a simpler development experience without compromising functionality. The asynchronous coding style allows developers to easily handle complex coding requirements.</p>
</li>
<li><p><strong>A massive collection of libraries and frameworks is available</strong>. Python has gained an impressive number of libraries and frameworks due to its maturity and popularity. As a developer, there is a high chance that you can find a library or framework for any kind of functionality.</p>
</li>
<li><strong>Python integrates with a wide variety of software</strong>, including enterprise applications and databases. It can be easily integrated with other languages like PHP and .NET.</li>
</ul>
<h3 id="heading-disadvantages-of-python">Disadvantages of Python</h3>
<ul>
<li>Python is slower compared to compiled options such as C++ and Java since it is an interpreted language.</li>
<li>While Python is easy to debug, some errors won’t be shown until runtime.</li>
</ul>
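To make that last point concrete, here is a minimal sketch (the function name is purely illustrative): the type bug below passes every syntax check and only surfaces when its branch actually executes.

```python
def describe(n):
    if n > 100:
        # Bug: concatenating str and int raises TypeError, but Python only
        # notices when this branch actually runs -- not before execution.
        return "big: " + n
    return "small: " + str(n)

print(describe(5))  # fine: the buggy branch is never reached

try:
    describe(500)   # the latent bug finally surfaces here
except TypeError as exc:
    print("caught at runtime:", exc)
```

A statically typed language such as Rust would reject the equivalent code at compile time instead.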
<p>Read our comparison of Python to Go: <a target="_blank" href="https://rexdivakar.hashnode.dev/go-vs-python">click here</a></p>
<h1 id="heading-rust-vs-python-all-essential-differences">Rust vs Python: All Essential Differences</h1>
<p>Here is the list of essential differences between Rust and Python</p>
<h4 id="heading-rust-vs-python-performance">Rust vs Python Performance</h4>
<p>Rust provides better performance than Python. Rust offers developers a solid balance of high performance and security, as well as faster processing. Rust is about twelve times faster, with performance comparable to C and C++, whereas Python is slower. Yes, Python is known for being “slow” in some situations, but this doesn’t matter in most cases: it is a minor factor that will not affect the majority of projects.</p>
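The interpreter overhead behind these comparisons can be glimpsed with Python's own <code>timeit</code> module. A rough sketch (exact timings are machine-dependent, and this only contrasts interpreted bytecode with a C-implemented builtin, not with Rust itself):

```python
import timeit

# Time a pure-Python loop against the C-implemented sum() builtin.
loop_time = timeit.timeit(
    "total = 0\nfor i in range(10_000): total += i", number=200
)
builtin_time = timeit.timeit("sum(range(10_000))", number=200)

print(f"python loop: {loop_time:.4f}s  builtin sum: {builtin_time:.4f}s")
```

The builtin is typically several times faster because it runs in compiled C, hinting at the kind of gap a compiled language like Rust can exploit in hot loops.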
<h4 id="heading-rust-vs-python-security">Rust vs Python Security</h4>
<p>Managing computer memory safely and efficiently is one of the most difficult tasks for any programming language. Security is one of the best aspects of Rust. Python, by contrast, has a garbage collector that looks for and cleans up unused memory as the program runs.</p>
<p>Rust is extremely safe: it places a strong emphasis on preventing memory leaks and other memory-related security issues, and avoiding them is baked into many of its key principles, such as ownership and borrowing.</p>
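A small sketch of the Python side of this contrast, using the standard <code>gc</code> module (the behaviour shown is specific to CPython):

```python
import gc

class Node:
    def __init__(self):
        self.ref = None

# Build a reference cycle, then drop the only external references to it.
a, b = Node(), Node()
a.ref, b.ref = b, a
del a, b

# Plain reference counting cannot reclaim the cycle; the cyclic
# garbage collector finds and frees it on a collection pass.
unreachable = gc.collect()
print("objects reclaimed:", unreachable)
```

Rust has no equivalent runtime pass: ownership rules determine at compile time exactly when each value is freed.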
<h4 id="heading-rust-vs-python-low-level-language">Rust vs Python Low-level language</h4>
<p>One of the primary differences is that Rust is a low-level programming language. It is a great option for embedded and bare-metal development due to its direct access to hardware and memory.</p>
<p>Rust is best for developers who have limited resources and need to ensure that their software does not fail, whereas the high-level language Python is better suited to fast prototyping.</p>
<h4 id="heading-rust-vs-python-dynamic-and-static-typing">Rust vs Python Dynamic And Static Typing</h4>
<p>Python has a dynamic type system, which makes creating software easier for programmers. Rust, by contrast, has a static type system that requires programmers to declare types for items such as constants and function arguments, although the compiler infers types within function bodies, which can feel almost Python-like. Rust’s <code>Option</code> type, with its “None” variant, forces programmers to deal with the possible absence of a value at build time, ensuring that the program executes smoothly for the user.</p>
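A short sketch of the dynamic side of that contrast: in Python, names can be rebound to values of different types, and type annotations are hints only, unchecked at runtime.

```python
# Dynamic typing: the same name may hold values of different types.
value = 42           # an int
value = "forty-two"  # now a str; nothing objects

def double(x: int) -> int:
    # The annotation is documentation, not enforcement.
    return x * 2

print(double(3))     # 6
print(double("ab"))  # "abab" -- the int annotation did not stop this
```

In Rust, the equivalent rebinding would require explicit shadowing, and passing a string to a function declared over integers would fail to compile.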
<h4 id="heading-rust-vs-python-easiness-to-code-andamp-learn">Rust vs Python Easiness to Code &amp; Learn</h4>
<p>The most significant, but also the most subjective, part of this comparison is learning and coding experience. Everyone wants their first programming language to be simple to learn but versatile to pursue various programming careers. </p>
<p>Beginners generally take one to two weeks to start building projects in Rust, whereas it only takes a few days for them to start building projects in other, more approachable languages.</p>
<p>In comparison to Rust, Python is much easier to learn. Python is a great language for beginners because of its extremely short learning curve. The syntax of Python is extremely simple to read, understand, and code, even for beginners.</p>
<h3 id="heading-so-which-one-is-right-for-me">So, which one is right for me?</h3>
<p>As we have seen, the Python vs Rust debate is not a simple one to solve. Both of them have advantages and disadvantages but are overall great, versatile and powerful programming languages which are rightly popular in the developer community.</p>
<p>In general, Python provides a simpler development experience and is easier to get started with. It also has a bigger community and wider resource base to choose from, so offers better extensibility for potentially larger projects. Python can be used across many disciplines, from web application development to DevOps, scientific scripting, machine learning and enterprise apps. This versatility, combined with the ease of use, makes it easy to see why Python is so popular.</p>
<p>Rust, meanwhile, should be the preferred option if speed and security are your priorities. Its performance-orientation and memory safety make it ideal for projects such as system development, file systems, game engine development, virtual reality (VR) and embedded integrations. These options make it clear that Rust will only continue to gain in popularity, and as it matures, its documentation and extensibility will improve too.</p>
<h4 id="heading-i-hope-you-enjoyed-this-post-and-that-youll-come-back-for-the-next-one">I hope you enjoyed this post, and that you'll come back for the next one!</h4>
<p><em>Feel free to subscribe to my email newsletter and
connect with me on <a target="_blank" href="https://github.com/rexdivakar">GitHub</a> or <a target="_blank" href="https://twitter.com/rex_divakar">Twitter</a></em></p>
]]></content:encoded></item><item><title><![CDATA[Go vs Python 🐍]]></title><description><![CDATA[Both Python and Go are different, generally serving different purposes. Python is the primary language among data scientists, where Go is the language for server-side commands. Go is the language to use to run software. It is the faster language, per...]]></description><link>https://blog.craftedbrain.com/go-vs-python</link><guid isPermaLink="true">https://blog.craftedbrain.com/go-vs-python</guid><category><![CDATA[golang]]></category><category><![CDATA[Go Language]]></category><category><![CDATA[Python]]></category><category><![CDATA[go vs python]]></category><category><![CDATA[comparision]]></category><dc:creator><![CDATA[Divakar]]></dc:creator><pubDate>Fri, 01 Jul 2022 04:45:01 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1656229061104/2Kk7RgZ0u.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Both Python and Go are different, generally serving different purposes. Python is the primary language among data scientists, while Go is the language for server-side work. Go is the language to use to run software: it is the faster language, performing at Java and C++ speeds.</p>
<p>Python is the language to use for readable, shareable code—hence the large community around it.</p>
<p>Technically, Go is a procedural, functional language built for speed, and Python is an object-oriented, imperative, functional, and procedural language. Go has concurrency built into the language through goroutines and channels, letting a program run independent tasks at the same time; Python supports concurrency only through libraries such as <code>threading</code> and <code>asyncio</code>, and its global interpreter lock (GIL) limits true parallelism for CPU-bound work.</p>
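As a sketch of what library-level concurrency looks like on the Python side (Go's goroutines are built into the language instead), the four simulated I/O waits below overlap in a thread pool:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(n):
    # Simulate an I/O-bound task such as a network call.
    time.sleep(0.1)
    return n * n

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(fetch, range(4)))  # preserves input order
elapsed = time.perf_counter() - start

print(results)                 # [0, 1, 4, 9]
print(f"took {elapsed:.2f}s")  # ~0.1s: the four 0.1s waits overlap
```

Threads work well here because the tasks spend their time waiting; for CPU-bound work the GIL serializes Python bytecode, which is where Go's model pulls ahead.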
<p>If you’re choosing between Go and Python, it’s likely because both of these languages are quickly growing and extremely marketable. While Go trails behind Python in terms of its raw community code base, it’s rapidly becoming one of the most important languages in the market.</p>
<p>Python is two decades older than Go. But that doesn’t mean that Go is going to knock Python off of its throne. Today, we’re going to take a deeper look at Go language, Python, and when you should use one language over the other.</p>
<p>In short, if you are working with data and your audience is people, use Python. If you are working with servers, use Go.</p>
<h2 id="heading-a-pythons-tale">A Python’s tale</h2>
<p>Ask some developers and you will hear that there was nothing before Python and there will be nothing after it. Over the years, it has managed to gain a cult-like following because it is a very good programming language. The internet is filled with wonders created using Python.</p>
<p>Python is old, in terms of programming years. It was first conceptualized back in 1991. And with age come certain advantages. It has a wide following, which translates to it being stable and well documented. In most cases, you will find libraries for almost everything and code samples for just about anything you can think of. What this means for devs and businesses is that the choice of using Python brings with it a wealth of experience and code just waiting to be accessed.</p>
<p>There are plenty of open-source projects that use Python as a base, so in most cases, you are not starting from scratch. It is well integrated into enterprise applications and can be used in machine language and AI applications. But it does have its downsides. For one, it is not ideal for memory-intensive tasks, a bit on the slow side for executions and unsuitable for mobile application development.</p>
<h2 id="heading-speaking-of-golanggo">Speaking of Golang(Go)</h2>
<p>Developed at Google back in 2009, Go was a solution to a problem. Its aim was to create a language that took away all the baggage and excesses found in languages such as C++. This gives it a performance and speed boost that makes working with it a delight. Plus, most developers picking up Go for the first time are not going to be left adrift, as the familiar elements and ease of use come as a pleasant surprise.</p>
<p>This is not to say the language is perfect in all cases. While it takes speed and elegance to the next level, it does, however, leave a few things to be desired. For one, it does not have an extensive library, nor support for inheritance. In addition, there is no GUI library and only limited object-oriented programming support. What it does have going for it are lightweight threads (goroutines), a smart standard library, strong built-in security, and ease of coding with minimal syntax.</p>
<h2 id="heading-why-use-go-language">Why use GO language?</h2>
<p>Here, are important reasons for using Go language:</p>
<ul>
<li>It allows you to use static linking to combine all dependency libraries and modules into one single binary file for the target OS and architecture.</li>
<li>Go performs more efficiently because of its CPU scalability and concurrency model.</li>
<li>Go ships with support for many libraries and tools, so it often does not require any third-party library.</li>
<li>It’s a statically, strongly typed programming language with a great way to handle errors.</li>
</ul>
<h2 id="heading-why-use-python-language">Why use Python language?</h2>
<p>Here, are reasons for using Python language:</p>
<ul>
<li>Python is a powerful object-oriented programming language.</li>
<li>Uses an elegant syntax, making the program you write easier to read.</li>
<li>Python comes with a large standard library, so it supports many common programming tasks.</li>
<li>Runs on various types of computers and operating systems: Windows, macOS, Unix, OS/2, etc.</li>
<li>Very simple syntax compared to Java, C, and C++ languages.</li>
<li>Extensive library and handy tools for developers</li>
<li>Python ships with its own interactive shell out of the box</li>
<li>Compared with the code of other languages, Python code is easy to write and debug. Therefore, its source code is relatively easy to maintain.</li>
<li>Python is a portable language so that it can run on a wide variety of operating systems and platforms.</li>
<li>Python comes with many prebuilt libraries, which makes your development task easy.</li>
<li>Python helps you to make complex programming simpler, as it internally deals with memory addresses and garbage collection.</li>
<li>Python provides an interactive shell that helps you to test things before their actual implementation.</li>
<li>Python offers database interfaces to all major commercial DBMS systems.</li>
</ul>
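Several of the points above (the large standard library, the simple syntax) can be seen in one short sketch that counts word frequencies and serializes the result using only built-in modules:

```python
import json
from collections import Counter

text = "go and python both have their strengths and python is easy"
counts = Counter(text.split())

# Serialize the two most common words to JSON -- no third-party packages needed.
print(json.dumps(counts.most_common(2)))
```

Tokenizing, counting, and JSON output each take a single line because the standard library already covers the common tasks.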
<h2 id="heading-key-differences">Key Differences</h2>
<p>Below mentioned are the key differences between Python and Golang:</p>
<ul>
<li>As Python is a scripting language, it has to be interpreted at runtime, while Golang is quicker most of the time because it is compiled ahead of time and does not depend on an interpreter at runtime.</li>
<li>Python is an ideal language that comes equipped with an easy-to-understand syntax, making it more readable and flexible. Golang is also in the prime league when it comes to clear syntax, which holds zero unnecessary components.</li>
<li>Python doesn’t come with a built-in concurrency mechanism, whereas Go is packed with a built-in concurrency mechanism.</li>
<li>Talking about safety, Python is strongly typed, which adds a layer of security, while Golang is very solid here because every variable must have a type connected with it. This means a programmer cannot skip over details that would otherwise lead to flaws.</li>
<li>Python is less verbose compared to Go to accomplish the same functionality.</li>
<li>Python comes with dozens of libraries as opposed to Go, but gradually Go is also improving in this area.</li>
<li>Python still earns the upper hand when it comes to syntax, making it very user-friendly.</li>
<li>Python is considered the best when you have to solve data science problems, while Go is best suited for system programming.</li>
<li>Python is a dynamically typed language, while Golang is a statically typed language, which helps you to detect flaws at compile-time, further reducing serious glitches later in the production.</li>
<li>Python is the best choice for basic programming, but it can become complex if one prefers to build complicated systems. With Go, the same task can be executed quickly without getting into the subtleties of the programming language.</li>
<li>Python is more compact than Golang.</li>
</ul>
<h2 id="heading-which-to-choose">Which to choose?</h2>
<p>While Python has remained a community favorite, retaining the #2 spot in the first quarter of 2019 for the fastest-growing programming language on GitHub in terms of pull requests (+17%), Golang isn’t so far behind and is hot on its heels at #4 (+8%). The choice between Golang vs Python becomes even more blurry. Regardless, there are a few things to be considered when selecting which might be right for you.</p>
<p><strong>Scalability:</strong> Golang was created with scalability in mind. It comes with inbuilt concurrency to handle multiple tasks at the same time. Python uses concurrency but it is not inbuilt; it implements parallelism through threads. This means that if you are going to work with large data sets, then Golang would seem to be a more suitable choice.</p>
<p><strong>Performance:</strong> Python is not known to be memory- or CPU-friendly, but with its huge number of libraries, Python performs efficiently for basic development tasks. Golang comes with built-in features and is more suitable for microservices software architectures.</p>
<p><strong>Applications:</strong> Python shines when used to write codes for artificial intelligence, data analytics, deep learning, and web development. Whereas Golang has been used for system programming, it is loved by developers who use it for cloud computing and cluster computing applications.</p>
<p><strong>Community &amp; Library:</strong> As mentioned earlier, Python’s age gives it certain advantages. One of which is the number of libraries it has and the large community that supports it. Golang, on the other hand, is still a growing language and does not have the number of libraries and community support that Python commands. Yet we should not count Go out just yet. Its rate of growth and adoption is incredible and it is expanding every day.</p>
<p><strong>Execution: </strong>If speed is the name of the game, then Golang wins by a mile.</p>
<p>After taking all these into account, your use case will be the determining factor in which language to adopt. Given a scenario where you are setting up a development team to create microservices, Golang would be the more reasonable choice here, as it’s both fast and easy to code with, and can scale excellently well. Python, on the other hand, is more geared towards AI, ML and data analysis.</p>
<p>So going head to head, Go would come out on top in most cases and is considered to be a valid alternative to using Python. Developers need to choose a programming language considering their nature and size of the development project as well as the skill set of those involved.</p>
<p>The good news, however, is that regardless of choice, both languages are ever-evolving. While Golang might seem like an obvious choice in most cases, the Python community isn’t just sitting back and doing nothing. Both languages are expanding and growing. This means that we will be seeing more functionality and improvement in the future.</p>
]]></content:encoded></item><item><title><![CDATA[Proxy Servers explained...!]]></title><description><![CDATA[So, what are proxies ?
Proxy is a server that sits in between a client and an actual server. Word "proxy" defines, someone or something acting on behalf of something else. In computer science proxy means one server acting on behalf of other servers. ...]]></description><link>https://blog.craftedbrain.com/proxy-servers-explained</link><guid isPermaLink="true">https://blog.craftedbrain.com/proxy-servers-explained</guid><category><![CDATA[Reverse Proxy]]></category><category><![CDATA[proxy]]></category><category><![CDATA[vpn]]></category><category><![CDATA[Proxy Server]]></category><category><![CDATA[forward-proxy]]></category><dc:creator><![CDATA[Divakar]]></dc:creator><pubDate>Mon, 27 Jun 2022 03:48:19 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1656301682414/zWAuuL_sZ.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-so-what-are-proxies">So, what are proxies ?</h2>
<p>A proxy is a server that sits in between a client and an actual server. The word "proxy" means someone or something acting on behalf of something else. In computer science, a proxy means one server acting on behalf of other servers. A proxy server is a server (or computer system, or an application) that acts as an intermediary for requests from clients seeking resources from other servers.</p>
<p>A client connects to the proxy server, requesting some service (a file, connection, or web page) available from a different server, and the proxy server evaluates the request as a way to simplify and control its complexity. Proxies were invented to add structure and encapsulation to distributed systems.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1656299854182/tiqTsAXMA.png" alt="proxy.png" class="image--center mx-auto" /></p>
<p>Without an online proxy, your device communicates directly with web servers. Every website that communicates with your browser can see your device and speak with it directly; in other words, your IP address is public knowledge and exposed. But what if you want to get rid of all that public exposure? <strong>A web proxy or other proxy server</strong> sits in front of a client or a network of clients and handles this traffic on their behalf. The proxy server is another device, connected to both the internet and your device, with its own IP address. Your device speaks only to the proxy server, and the proxy forwards all communication onward to the internet at large, keeping your IP address hidden.</p>
<h2 id="heading-what-does-a-proxy-server-do-exactly">What does a proxy server do, exactly?</h2>
<p>A proxy server plays many vital roles in managing the traffic and nodes across a network. Here are a few of its primary uses:</p>
<p><strong>Firewalls:</strong> A firewall is a type of network security system that acts as a barrier between a network and the wider internet. Security professionals configure firewalls to block unwanted access to the networks they are trying to protect, often as an anti-malware or anti-hacking countermeasure. A proxy server between a trusted network and the internet is the perfect place to host a firewall designed to intercept and either approve or block incoming traffic before it reaches the network.</p>
<p><strong>Content filters:</strong> Just as online proxies can regulate incoming connection requests with a firewall, they can also act as content filters by blocking undesired outgoing traffic. Companies may configure proxy servers as content filters to prevent employees from accessing certain websites while at work.</p>
<p><strong>Bypassing content filters:</strong> That's right - you can outsmart a web proxy with another proxy. If your company's proxy has blocked your favorite website, but it hasn't blocked access to your personal proxy server or favorite web proxy, you can access your proxy and use it to reach the websites you want.</p>
<p><strong>Caching:</strong> Caching refers to the temporary storage of frequently accessed data, which makes it easier and faster to access it again in the future. Internet proxies can cache websites so that they'll load faster than if you were to send your traffic all the way through the internet to the website's server. This reduces latency - the time it takes for data to travel through the internet. </p>
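<p>As a rough sketch of that caching behavior (the URL, TTL, and response body here are purely illustrative), a proxy's cache can be modeled as a small store of responses that expire after a time-to-live:</p>

```python
import time

class ProxyCache:
    """A toy response cache keyed by URL, with per-entry expiry (TTL)."""
    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self._store = {}  # url -> (response, expiry_timestamp)

    def get(self, url):
        entry = self._store.get(url)
        if entry is None:
            return None           # cache miss: proxy must fetch upstream
        response, expires_at = entry
        if time.time() >= expires_at:
            del self._store[url]  # entry is stale: evict and refetch
            return None
        return response           # cache hit: served without touching origin

    def put(self, url, response):
        self._store[url] = (response, time.time() + self.ttl)

cache = ProxyCache(ttl_seconds=60)
cache.put("https://example.com/", "<html>...</html>")
assert cache.get("https://example.com/") == "<html>...</html>"
```

<p>A real proxy would also honor the origin's Cache-Control and Expires headers rather than a single fixed TTL.</p>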
<p><strong>Sharing internet connections:</strong> Businesses or even homes with a single internet connection can use a proxy server to funnel all their devices through that one connection. Using a Wi-Fi router and wireless-capable devices is another solution to this issue.</p>
<h2 id="heading-wait-isnt-that-the-same-as-a-vpn">Wait - isn't that the same as a VPN?</h2>
<p>Proxies and VPNs both connect you to the internet via an intermediary server, but that's
where the similarities end. While an online proxy simply forwards your traffic to its destination, a VPN encrypts all traffic between your device and the VPN server.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1656300041074/1yOuQkf4p.PNG" alt="1.PNG" class="image--center mx-auto" /></p>
<h2 id="heading-types-of-proxy-server">Types of Proxy Server</h2>
<p><strong>Reverse Proxy Server:</strong> The job of a reverse proxy server is to listen to requests made by clients and redirect them to web servers hosted on different machines.</p>
<p><strong>Web Proxy Server: </strong>A web proxy forwards HTTP requests; the client passes a full URL instead of just a path, and the request goes to the proxy, which responds on the client's behalf. Examples: Apache, HAProxy.</p>
<p><strong>Anonymous Proxy Server:</strong> This type of proxy server does not reveal the client's original IP address. These servers are detectable as proxies but still provide reasonable anonymity to the client device.</p>
<p><strong>Transparent Proxy:</strong> This type of proxy server provides no anonymity to the client; the original IP address can easily be detected. It is mainly used to act as a cache for websites. A transparent proxy combined with a gateway results in a setup where client connection requests are redirected to the proxy without any configuration on the client side. The redirection can easily be detected from HTTP headers on the server side.</p>
<p><strong>CGI Proxy:</strong> A CGI proxy server was developed to make websites more accessible. It accepts requests for target URLs through a web form, processes them, and returns the result to the web browser. It is less popular than alternatives such as VPNs due to privacy concerns, though it still receives plenty of requests. Its usage declined because the excess traffic that slips past local filtering can cause damage to the organization hosting it.</p>
<p><strong>Suffix Proxy:</strong> A suffix proxy server appends the name of the proxy to the URL. This type of proxy does not preserve a high level of anonymity. It is used for bypassing web filters, and while it is easy to use and implement, it is used less now because more web filters detect it.</p>
<p><strong>Tor Onion Proxy:</strong> This software aims to give users online anonymity for their personal information. It routes traffic through various networks worldwide to make tracking the user's address difficult and to guard against traffic analysis. The information is encrypted in multiple layers; each hop decrypts one layer, so the original content is only recovered at the destination. The software is open-source and free of cost to use.</p>
<p><strong>DNS Proxy:</strong> A DNS proxy takes requests in the form of DNS queries and forwards them to the domain server, where responses can also be cached. The flow of requests can be redirected as well, and such proxies are often hosted internally within an organization.</p>
<p>Apart from the above types, the commonly used proxies are <strong>forward proxy server</strong> and <strong>reverse proxy server</strong> which are built to manage web services and client services on demand.</p>
<h2 id="heading-forward-proxy-server">Forward Proxy Server:</h2>
<p>A forward proxy acts on behalf of the requestor, encapsulating the requestor's original identity. Clients can use a forward proxy to bypass restrictions and visit websites blocked by a government, a company, and so on. A forward proxy can also act as a cache server: if content is downloaded multiple times, the proxy can cache it, so that the next time someone requests the same content, the proxy serves the previously stored copy.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1656300365283/btiUsorhF.jpg" alt="forward-proxy-01-1.jpg" class="image--center mx-auto" /></p>
<h2 id="heading-reverse-proxy-server">Reverse Proxy Server:</h2>
<p>A proxy server is a go-between or intermediary server that forwards requests for content from multiple clients to different servers across the Internet. A reverse proxy server is a type of proxy server that typically sits behind the firewall in a private network and directs client requests to the appropriate backend server. A reverse proxy provides an additional level of abstraction and control to ensure the smooth flow of network traffic between clients and servers.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1656300546311/sB49_RyDK.PNG" alt="reverse_proxy.PNG" class="image--center mx-auto" /></p>
<p>A reverse proxy operates by:</p>
<ul>
<li>Receiving a user connection request.</li>
<li>Completing a TCP three-way handshake, terminating the initial connection.</li>
<li>Connecting with the origin server and forwarding the original request.</li>
</ul>
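<p>The steps above can be sketched in miniature (a toy, single-threaded demo with both the "origin" and the reverse proxy running on localhost; nothing here reflects any particular production proxy):</p>

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Backend(BaseHTTPRequestHandler):
    """The origin server sitting behind the reverse proxy."""
    def do_GET(self):
        body = b"hello from the origin server"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):  # keep the demo quiet
        pass

def make_proxy(origin_port):
    class ReverseProxy(BaseHTTPRequestHandler):
        def do_GET(self):
            # Terminate the client's connection here, then open our own
            # connection to the origin and relay its response back.
            with urllib.request.urlopen(
                    f"http://127.0.0.1:{origin_port}{self.path}") as upstream:
                body = upstream.read()
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        def log_message(self, *args):
            pass
    return ReverseProxy

def serve(handler):
    server = HTTPServer(("127.0.0.1", 0), handler)  # port 0 = any free port
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server.server_address[1]

origin_port = serve(Backend)
proxy_port = serve(make_proxy(origin_port))

# The client only ever talks to the proxy's address.
with urllib.request.urlopen(f"http://127.0.0.1:{proxy_port}/") as resp:
    print(resp.read().decode())  # -> hello from the origin server
```

<p>The client never learns the origin's port: it sees only the proxy, which is exactly the abstraction a reverse proxy provides.</p>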
<h3 id="heading-common-use-cases-of-reverse-proxy-scenarios">Common use cases of Reverse Proxy scenarios:</h3>
<p>There is a multitude of scenarios and use cases in which having a reverse proxy
can make all the difference to the speed and security of your corporate network. By providing you with a point at which you can inspect traffic and route it to the appropriate server, or even transform the request, a reverse proxy can be used to achieve a variety of different goals.</p>
<p><strong>Load balancing:</strong> A reverse proxy server can act as a traffic cop that sits in front of your backend servers and distributes client requests across a group of servers in a manner that maximizes speed and capacity utilization while ensuring no server is overloaded, which can degrade performance. If a server goes down, the load balancer redirects traffic to the remaining online servers.</p>
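<p>A minimal model of that traffic-cop behavior might look like this (the backend names are invented; real load balancers also do health checks, weighting, and connection draining):</p>

```python
import itertools

class RoundRobinBalancer:
    """Toy load balancer: rotate through backends, skipping ones marked down."""
    def __init__(self, backends):
        self.backends = list(backends)
        self.down = set()
        self._cycle = itertools.cycle(self.backends)

    def mark_down(self, backend):
        self.down.add(backend)

    def mark_up(self, backend):
        self.down.discard(backend)

    def pick(self):
        # Try each backend at most once per pick; skip unhealthy ones.
        for _ in range(len(self.backends)):
            candidate = next(self._cycle)
            if candidate not in self.down:
                return candidate
        raise RuntimeError("all backends are down")

lb = RoundRobinBalancer(["app1:8080", "app2:8080", "app3:8080"])
print([lb.pick() for _ in range(3)])  # each backend gets one request in turn
lb.mark_down("app2:8080")
print([lb.pick() for _ in range(3)])  # traffic now flows only to app1 and app3
```
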
<p><strong>Web acceleration:</strong> Reverse proxies can compress inbound and outbound data, as well as cache commonly requested content, both of which speed up the flow of traffic between clients and servers. They can also perform additional tasks such as SSL termination to take load off your web servers, thereby boosting their performance.</p>
<p><strong>Security and anonymity:</strong> By intercepting requests headed for your backend servers, a reverse proxy server protects their identities and acts as an additional defense against security attacks. It also ensures that multiple servers can be accessed from a single record locator or URL regardless of the structure of your local area network.</p>
<h2 id="heading-what-are-the-benefits-of-a-reverse-proxy-server">What are the Benefits of a Reverse Proxy Server?</h2>
<p>Benefits of reverse proxy servers include:</p>
<ul>
<li>Load balancing</li>
<li>Global server load balancing (GSLB)</li>
<li>Caching content and web acceleration for improved performance</li>
<li>More efficient and secure SSL encryption, and</li>
<li>Protection from DoS attacks and related security issues.</li>
</ul>
<p><strong>Load balancing</strong> is the process of distributing network traffic across multiple servers. This ensures no single server bears too much demand. By spreading the work evenly, load balancing improves application responsiveness. It also increases the availability of applications and websites for users. Modern applications cannot run without load balancers.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1656300759861/8_CVviKXq.PNG" alt="load_balancer.PNG" class="image--center mx-auto" /></p>
<h2 id="heading-reverse-proxy-vs-forward-proxy">Reverse Proxy vs Forward Proxy</h2>
<p>In contrast, a forward proxy server is also positioned at your network's edge but regulates outbound traffic according to preset policies in shared networks. Additionally, it disguises a client's IP address and blocks malicious incoming traffic.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1656301018756/UmflZK7LA.png" alt="fowardvsreverse.png" class="image--center mx-auto" /></p>
<p>Forward proxies are typically used internally by large organizations, such as universities and enterprises, to:</p>
<ul>
<li>Block employees from visiting certain websites.</li>
<li>Monitor employee online activity.</li>
<li>Block malicious traffic from reaching an origin server.</li>
<li>Improve the user experience by caching external site content.</li>
</ul>
<h2 id="heading-benefits-and-risks">Benefits and Risks</h2>
<p>Now that you know everything about proxies, here's a list of some of the benefits and risks associated with using them.</p>
<h3 id="heading-benefits">Benefits</h3>
<ul>
<li>Secure and private internet browsing.</li>
<li>Ability to get around geo-location restrictions.</li>
<li>Better network performance.</li>
<li>Ability to control what websites clients have access to.</li>
<li>Many types to choose from to suit specific needs.</li>
</ul>
<h3 id="heading-risks">Risks</h3>
<ul>
<li>Your requests might return very slowly.</li>
<li>Free or cheap proxies could be set up by hackers or government agencies.</li>
</ul>
<p>There are plenty more benefits and risks to using any of the proxy server types, which is why it is important to only connect to proxy servers you trust. When you are connected to a trusted proxy, the risks should have been taken into account in its configuration, so you have less to worry about.</p>
<p><strong>Proxy Server Risks:</strong> A free proxy operator does not invest much in backend hardware or encryption, which results in performance issues and potential data security issues. If you use a "free" proxy server, tread very carefully: some of them are set up to steal your credit card numbers.</p>
<p><strong>Browsing history log:</strong> The proxy server stores your original IP address and web request information, possibly in unencrypted form, saved locally. Always check whether your proxy server logs and saves that data - and what kind of retention or law-enforcement cooperation policies it follows when saving data.</p>
<p><strong>No encryption:</strong> No encryption means you are sending your requests as plain text. Anyone listening can easily pull usernames, passwords, and account information. Make sure any proxy you use provides full encryption.</p>
<h2 id="heading-conclusion">Conclusion:</h2>
<p>Proxy and reverse proxy may sound similar but differ in their use cases and benefits. Both add an element of anonymity: a forward proxy hides the identity of the client, whereas a reverse proxy conceals the identity of the server. So, if you want to protect clients in your internal network, put them behind a forward proxy. On the other hand, if you intend to protect servers, put them behind a reverse proxy with proper firewall rules enabled.</p>
]]></content:encoded></item><item><title><![CDATA[DNS Explained..!]]></title><description><![CDATA[What Exactly Is DNS?
DNS or Domain Name Systems is a translation system that allows the users to
browse the internet with a language we're comfortable with. Instead of typing and memorizing long sequences of numbers we can use names and letters (host...]]></description><link>https://blog.craftedbrain.com/dns-explained</link><guid isPermaLink="true">https://blog.craftedbrain.com/dns-explained</guid><category><![CDATA[dns]]></category><category><![CDATA[dnssec]]></category><category><![CDATA[dns-records]]></category><category><![CDATA[dns-poisoning]]></category><category><![CDATA[dns-hijacking]]></category><dc:creator><![CDATA[Divakar]]></dc:creator><pubDate>Sun, 26 Jun 2022 04:03:05 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1656212960088/u0Yf8zc1f.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-what-exactly-is-dns">What Exactly Is DNS?</h2>
<p><em>DNS, or the Domain Name System,</em> is a translation system that allows users to browse the internet with a language we're comfortable with. Instead of typing and memorizing long sequences of numbers, we can use names and letters (hostnames). This "address book" is distributed around the world, stored on domain name servers that all communicate with each other. This distribution helps speed things up, since there are multiple locations for the directory, which reduces load and travel time.</p>
<blockquote>
<p>Example: dns.google <br />
Actual IP: 8.8.8.8</p>
</blockquote>
<h2 id="heading-how-exactly-does-the-dns-work">How Exactly Does the DNS Work?</h2>
<p>When a user tries to visit a website, the computer first searches for the site's IP address in its local DNS cache. If successful, it will display the website immediately to the user. If it's unable to locate the IP address, the query then goes to the recursive server, which is maintained by the ISP (Internet Service Provider), to try and fetch the IP address. If the IP address is located there, the website is displayed to the user. </p>
<p>The ISP will usually maintain a cache of IP addresses frequently accessed by their customers <em>(e.g. commonly viewed websites such as Facebook or YouTube)</em>, so the chances of locating the requested websites are rather high. If the IP address can't be located, the recursive server directs the query to the root nameserver. If it's found here, the website is displayed. If not, the user query goes to the TLD nameserver, and if it's still not found, the destination the query goes to would be the authoritative server. This is where the IP address will be located, and the website will be retrieved and displayed to the user.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1656213174778/WWo5wQhQr.webp" alt="dns-record-request-sequence-1.webp" class="image--center mx-auto" /></p>
<p>So, essentially, there are 4 levels the user query can go through to resolve a domain name into a computer-friendly IP address and display a website to a user. The recursive server will usually cache the IP address by extracting it from the authoritative server, so it's readily accessible the next time users request it. There is a period the recursive server will cache the IP address for before it refreshes, called the 'time to live' (TTL). The TTL directs the recursive server on how long to cache the IP address when the communication between servers occurs.</p>
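<p>The chain of lookups and the recursive server's cache can be simulated in a few lines (the hostname, IP address, and TTL below are illustrative only):</p>

```python
import time

# Toy model of the lookup chain: local cache -> recursive resolver -> authoritative.
AUTHORITATIVE = {"www.example.com": "93.184.216.34"}  # the "real" records

class RecursiveResolver:
    def __init__(self, ttl=30):
        self.cache = {}   # hostname -> (ip, expiry timestamp)
        self.ttl = ttl

    def resolve(self, hostname):
        hit = self.cache.get(hostname)
        if hit and time.time() < hit[1]:
            return hit[0], "cache"             # answered from cache
        ip = AUTHORITATIVE.get(hostname)       # stands in for the root/TLD/authoritative walk
        if ip is None:
            raise LookupError(f"NXDOMAIN: {hostname}")
        self.cache[hostname] = (ip, time.time() + self.ttl)
        return ip, "authoritative"

resolver = RecursiveResolver()
print(resolver.resolve("www.example.com"))  # first lookup walks the chain
print(resolver.resolve("www.example.com"))  # repeat is served from cache
```

<p>The second lookup never leaves the resolver, which is exactly the speed-up the recursive server's cache provides.</p>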
<p>To put it simply, one user search query leads to a whole series of queries and responses, almost like a chain reaction, to find and display a website in the blink of an eye!</p>
<h2 id="heading-dns-resolution">DNS Resolution</h2>
<p>DNS resolution is the process of converting a hostname into an IP address. Four kinds of DNS server cooperate to load a webpage. </p>
<p><strong>The recursor</strong> is a server designed to receive queries from clients. It'll act as a middleman between the client and the DNS name server. It'll return data from its cache if it exists or will directly lookup the root name server.</p>
<p><strong>The root name server</strong> contains information that makes up the root zone, which is the global list of top-level domains. The root zone contains:</p>
<blockquote>
<p>Generic top-level domains such as .com, .net, and .org
Country-code top-level domains such as .in for India.</p>
</blockquote>
<h2 id="heading-top-level-domain-name-server">Top-level domain name server</h2>
<p>A top-level domain (TLD) is the highest level of domain name in the root zone of the DNS of the internet. The Internet Corporation for Assigned Names and Numbers (ICANN) looks after most top-level domains.</p>
<h2 id="heading-authoritative-name-server">Authoritative name server</h2>
<p>The authoritative name server is the last operation in the name-server query. If the
authoritative name server has access to the requested record, it'll return the IP address for the requested hostname back to the DNS recursor. This server holds the actual DNS records (A, CNAME, etc.) for a particular domain.</p>
<h1 id="heading-an-overview-of-dns-querying">An Overview of DNS Querying:</h1>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1656213418667/rGZVpmPQz.png" alt="dns-lookup-diagram.png" class="image--center mx-auto" /></p>
<p>A <strong>Recursive Name Server</strong> is a DNS server that receives DNS queries and resolves them on behalf of clients. These servers do not store authoritative DNS records. When a DNS query is received, the server searches its cache for the address tied to the queried hostname. If the recursive name server has the information, it returns a response to the query sender. If it does not have the record, the query is forwarded to other name servers until it reaches an authoritative DNS server that can supply the IP address.</p>
<p>An <strong>Authoritative DNS Server</strong> is a DNS server that stores DNS records (A, CNAME, MX, TXT, etc.) for domain names. These servers will only respond to DNS queries for locally stored DNS zone files.</p>
<h2 id="heading-what-is-a-dns-record">What is a DNS record?</h2>
<p><strong>DNS records</strong> <em>(aka zone files)</em> are instructions that live in authoritative DNS servers and provide information about a domain including what IP address is associated with that domain and how to handle requests for that domain. These records consist of a series of text files written in what is known as DNS syntax. DNS syntax is just a string of characters used as commands that tell the DNS server what to do. All DNS records also have a 'TTL', which stands for time-to-live, and indicates how often a DNS server will refresh that record.</p>
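<p>To illustrate what such records look like in DNS syntax, here is a toy parser for zone-file-style lines (the domain names and values are invented):</p>

```python
# A minimal parser for zone-file-style lines: name, TTL, class, type, value.
ZONE_TEXT = """
example.com.      3600  IN  A      93.184.216.34
example.com.      3600  IN  MX     10 mail.example.com.
www.example.com.  300   IN  CNAME  example.com.
"""

def parse_zone(text):
    records = []
    for line in text.strip().splitlines():
        name, ttl, rclass, rtype, *value = line.split()
        records.append({
            "name": name,
            "ttl": int(ttl),           # seconds the record may be cached
            "class": rclass,           # IN = internet
            "type": rtype,             # A, MX, CNAME, ...
            "value": " ".join(value),  # MX keeps its priority prefix ("10 ...")
        })
    return records

for record in parse_zone(ZONE_TEXT):
    print(record["type"], record["name"], "->", record["value"])
```

<p>Real zone files also allow comments, `$TTL` defaults, multi-line records, and relative names, which this sketch ignores.</p>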
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1656213559859/Y6om-7THX.png" alt="dns-records.png" class="image--center mx-auto" /></p>
<h3 id="heading-what-are-the-most-common-types-of-dns-record">What are the most common types of DNS record?</h3>
<blockquote>
<p><strong>A record</strong> - The record that holds the IP address of a domain.</p>
<p><strong>AAAA record</strong> - The record that contains the IPv6 address for a domain (as opposed to A records, which list the IPv4 address).</p>
<p><strong>CNAME record</strong> - Forwards one domain or subdomain to another domain; does NOT provide an IP address.</p>
<p><strong>MX record</strong> - Directs mail to an email server.</p>
<p><strong>TXT record</strong>- Lets an admin store text notes in the record. These records are often used for email security.</p>
<p><strong>NS record</strong> - Stores the name server for a DNS entry. </p>
<p><strong>SOA record</strong> - Stores admin information about a domain. </p>
<p><strong>SRV record</strong> - Specifies a port for specific services. </p>
<p><strong>PTR record</strong> - Provides a domain name in reverse-lookups.</p>
</blockquote>
<h3 id="heading-what-are-some-of-the-less-commonly-used-dns-records">What are some of the less commonly used DNS records?</h3>
<blockquote>
<p><strong>AFSDB record</strong> - This record is used for clients of the Andrew File System (AFS) developed by Carnegie Mellon. The AFSDB record functions to find other AFS cells.</p>
<p><strong>APL record</strong> - The 'address prefix list' is an experimental record that specifies lists of address ranges.</p>
<p><strong>CAA record</strong> - This is the 'certification authority authorization' record, it allows domain owners state which certificate authorities can issue certificates for that domain. If no CAA record exists, then anyone can issue a certificate for the domain. These records are also inherited by subdomains.</p>
<p><strong>DNSKEY record</strong> - The 'DNS Key Record' contains a public key used to verify Domain Name System Security Extension (DNSSEC) signatures.</p>
<p><strong>CDNSKEY record</strong> - This is a child copy of the DNSKEY record, meant to be transferred to a parent.</p>
<p><strong>CERT record</strong> - The 'certificate record' stores public key certificates.</p>
<p><strong>DHCID record</strong> - The 'DHCP Identifier' stores info for the Dynamic Host Configuration Protocol (DHCP), a standardized network protocol used on IP networks.</p>
<p><strong>DNAME record</strong> - The 'delegation name' record creates a domain alias, just like CNAME, but this alias will redirect all subdomains as well. For instance, if the owner of 'example.com' bought the domain 'website.net' and gave it a DNAME record that points to 'example.com', then that pointer would also extend to 'blog.website.net' and any other subdomains.</p>
<p><strong>HIP record</strong> - This record uses 'Host identity protocol', a way to separate the roles of an IP address, this record is used most often in mobile computing.</p>
<p><strong>IPSECKEY record</strong> - The <em>'IPSEC key'</em> record works with the Internet Protocol 
Security (IPSEC), an end-to-end security protocol framework and part of the Internet Protocol Suite (TCP/IP).</p>
<p><strong>LOC record</strong> - The <em>location record</em> contains geographical information for a domain in the form of longitude and latitude coordinates.</p>
<p><strong>NAPTR record</strong> - The <em>'name authority pointer'</em> record can be combined with an SRV record to dynamically create URIs to point to, based on a regular expression.</p>
<p><strong>NSEC record</strong> - The <em>'next secure record'</em> is part of DNSSEC, and it's used to prove that a requested DNS resource record does not exist.</p>
<p><strong>RRSIG record</strong> - The <em>'resource record signature' </em>is a record to store digital signatures used to authenticate records in accordance with DNSSEC.</p>
<p><strong>RP record</strong> - This is the <em>'responsible person'</em> record, and it stores the email address of the person responsible for the domain.</p>
<p><strong>SSHFP record</strong> - This record stores the <em>'SSH public key fingerprints'</em>, SSH stands for Secure Shell and it's a cryptographic networking protocol for secure communication over an unsecure network.</p>
</blockquote>
<h2 id="heading-what-is-dnsmasq-and-how-it-can-be-used">What is dnsmasq and how can it be used?</h2>
<p><strong>Dnsmasq</strong> is free software providing DNS caching, DHCP, router advertisement, and network boot features. By providing a cache, it decreases CPU and network usage and helps avoid DNS resolution failures.</p>
<p>A DNS cache brings many advantages to your service &amp; system: </p>
<ul>
<li>Lower CPU usage.</li>
<li>Lower network usage (lower latency per transaction for the service).</li>
<li>Fewer DNS resolution failures (thanks to the cache).</li>
</ul>
<p>Now it's the time to talk about what you need to be cautious about when you are using dnsmasq.</p>
<h2 id="heading-dnsmasq-vulnerabilities">Dnsmasq Vulnerabilities</h2>
<p><strong>dnsmasq</strong> is a widely used open-source DNS resolver, deployed in many projects and hardware firmware worldwide, from Kubernetes (kube-dns) to routers and other products. Because <strong>dnsmasq</strong> is the service providing the cache, it will always be a target for DNS cache poisoning.</p>
<h3 id="heading-what-should-i-do-for-dnsmasq-vulnerabilities">What should I do for dnsmasq vulnerabilities?</h3>
<p>Many devices likely remain unpatched because their dnsmasq vulnerabilities have been overlooked. dnsmasq has been growing more popular and is responsive with security vulnerability updates, so if you update your dnsmasq software on time, your system should remain safe.</p>
<h2 id="heading-dns-poisoning">DNS Poisoning</h2>
<p><em>DNS cache poisoning, also known as DNS spoofing</em>, is the act of an attacker entering false information into a DNS cache, so that DNS queries return an incorrect response and users are directed to the wrong websites. An attacker can use this to redirect online traffic or to steal user credentials and other sensitive information.</p>
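<p>One concrete defense a poisoning attacker has to beat is the query's random transaction ID: a resolver only accepts a response whose ID matches its outstanding query, so an off-path attacker flooding forged answers must guess a 16-bit value. A simplified sketch (the hostnames and addresses are made up):</p>

```python
import secrets

def send_query(hostname):
    # Real resolvers attach a random 16-bit transaction ID to each query.
    txid = secrets.randbelow(2**16)
    return {"id": txid, "question": hostname}

def accept_response(query, response):
    # A response is accepted only if it matches the outstanding query.
    return (response["id"] == query["id"]
            and response["question"] == query["question"])

query = send_query("bank.example")
genuine = {"id": query["id"], "question": "bank.example", "answer": "203.0.113.7"}
forged  = {"id": 0x1337,      "question": "bank.example", "answer": "198.51.100.66"}

print(accept_response(query, genuine))  # True
print(accept_response(query, forged))   # almost certainly False
```

<p>Real resolvers layer further checks on top of this (source-port randomization, bailiwick checks, and ultimately DNSSEC validation).</p>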
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1656214496551/lSTk1ySQx.jpg" alt="DNS-spoofing.jpg" class="image--center mx-auto" /></p>
<h2 id="heading-risks-of-dns-poisoning">Risks of DNS Poisoning</h2>
<ol>
<li><strong>Data theft </strong>- An attacker may steal personal information (login credentials, account details, credit card numbers).</li>
<li><strong>Malware infection</strong> - When users are directed to the attacker's fake website, the attacker may install viruses or malware on their devices, harming the user.</li>
</ol>
<h3 id="heading-how-to-prevent-from-dns-poisoning">How to prevent DNS Poisoning</h3>
<p>When it comes to prevention, user-end protections against DNS spoofing are limited. Website owners and server providers are somewhat more empowered to protect themselves and their users. To keep everyone safe, both parties must try to avoid spoofing.</p>
<p>The prevention measures for server providers and website owners are as follows:</p>
<ol>
<li>Use tools for DNS spoofing detection.</li>
<li>End-to-end encryption.</li>
<li>Use the Domain Name System Security Extensions (DNSSEC).</li>
</ol>
<p>The prevention measures for endpoint users are as follows:</p>
<ol>
<li>Never click on an unrecognized link.</li>
<li>Regularly scan your system for viruses or malware.</li>
<li>Fix poisoning by flushing your DNS cache.</li>
<li>Use a VPN (virtual private network).</li>
</ol>
<h2 id="heading-dnssec-domain-name-system-security-extensions-overview">DNSSEC (Domain Name System Security Extensions) overview</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1656214677008/QDJXSuRCQ.jpeg" alt="jj200221.722eac10-df4f-41c4-a5e8-abc04cc49fab(ws.11).jpeg" class="image--center mx-auto" /></p>
<p>The <strong>Domain Name System Security Extensions (DNSSEC)</strong> is an Internet standard that adds security mechanisms to the <strong>Domain Name System (DNS)</strong>. It ensures both the authenticity and integrity of DNS data, which a resolver verifies by building a chain of trust from the root zone down to the queried domain.</p>
<p>A (local) DNS resolver can use DNSSEC to verify that the DNS zone data it receives has not been modified and is indeed identical to the authoritative zone. DNSSEC was developed mainly as a means against <strong>DNS cache poisoning</strong>. <em>It secures the transmission of resource records by means of digital signatures using asymmetric (public-key) cryptography</em>. If you are not familiar with the concept, think of it as a cleverly designed lock where one key locks and another key unlocks. In DNSSEC, the unlocking key is public, while the locking key is kept private. The owners of the authoritative server on which the zone to be secured is located sign each individual record using their private key. DNS clients can validate this signature with the owner's public key to verify authenticity and integrity.</p>
<p>A separate zone signing key (a pair consisting of a public and private key) is generated for each zone to be secured. The public part of the zone key is included in the zone file as a <strong>DNSKEY resource record</strong>. The private key is used to digitally sign each individual record of this zone. For this purpose, a new record type is provided, the RRSIG Resource Record, which contains the signature to the associated DNS record. For each transaction, the associated RRSIG record is supplied in addition to the actual resource record. The requesting resolver can then validate the signature using the public zone key.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1656215064636/1WQawONh9.jpeg" alt="jj200221.5fa34772-6719-49c1-8588-200ae6fcbc79(ws.11).jpeg" class="image--center mx-auto" /></p>
<p><strong>DNSKEY records</strong> are used to propagate public keys through DNS: the owner of the key stores it as a DNSKEY record on a publicly accessible DNS server. Anyone who needs this public key sends a corresponding DNSKEY request and receives the public key in response. The procedure thus corresponds to any other DNS request, for instance for ordinary IP addresses. In practice, however, this type of propagation alone is not sufficient, since a complete zone could be forged. The public key must therefore either be introduced manually into the resolver as a trusted key, or the associated DS resource record must be published in the parent zone. The trusted key of the root zone (the uppermost level of the DNS hierarchy) is typically hard-coded into validating resolvers.</p>
<p>For a complete picture, we also need DS (Delegation Signer) records. Those are used to chain <em>DNSSEC-signed zones</em>. This allows multiple DNS zones to be combined into a chain of trust and validated via a single public key. The basic idea is to chain all the zones involved and use only the topmost one as the secure entry point. The security-critical property is that the DS record can be calculated from the DNSKEY, but not vice versa, thanks to the one-way hash function used.</p>
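<p>That one-way relationship can be illustrated with a hash digest (the DNSKEY string below is a toy value, and real DS digests are computed over the wire-format owner name and DNSKEY RDATA, not a text line):</p>

```python
import hashlib

# A DS record carries a digest of the child zone's DNSKEY: easy to compute
# from the key, but the key cannot be recovered from the digest.
child_dnskey = b"example.com. IN DNSKEY 257 3 13 mdsswUyr3DPW132mOi8V9xESWE8jTo0d..."

ds_digest = hashlib.sha256(child_dnskey).hexdigest()
print("DS digest published in the parent zone:", ds_digest)

# The parent's side of validation: recompute the digest of the presented key
# and compare it against the published DS record.
assert hashlib.sha256(child_dnskey).hexdigest() == ds_digest
```

<p>Any change to the child's key changes the digest, so a forged DNSKEY cannot match the DS record published in the (signed) parent zone.</p>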
<p><strong>Digital signatures:</strong>
Signatures generated with DNSSEC are contained within the DNS zone itself, in new resource records called RRSIG (resource record signature) records. When a resolver issues a query for a name, the RRSIG record is returned in the response. A public cryptographic key called a DNSKEY is needed to verify the signature. The DNSKEY is retrieved by a DNS server during the validation process.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1656214893618/CaUvOo-n9.png" alt="download.png" class="image--center mx-auto" /></p>
<p><strong>Zone signing</strong>
Signing a zone with DNSSEC means individually signing all the records contained in the zone. This makes it possible to add, modify, or delete records in the zone without re-signing the entire zone; only the updated records need to be re-signed.</p>
<p><strong>DNSKEY</strong>
A DNSKEY resource record stores a public cryptographic key that is used to verify a signature. The DNSKEY record is used by a DNS server during the validation process. DNSKEY records can store public keys for a zone signing key (ZSK) or a key signing key (KSK).</p>
<p><strong>NSEC</strong>
If the DNS server responds that no record was found, this response also needs to be validated as authentic, but if there is no resource record, there is no RRSIG record either. The answer to this problem is the Next Secure (NSEC) record. NSEC records create a chain of links between signed resource records. When a query is submitted for a nonexistent record, the DNS server returns the NSEC record prior to where the nonexistent record would have been in the order. This allows for something called <em>authenticated denial of existence</em>.</p>
<p>NSEC3 is a replacement or alternative to NSEC that has the additional benefit of preventing "zone walking", the process of repeating NSEC queries to retrieve all the names in a zone. A zone can be signed with either NSEC or NSEC3, but not both.</p>
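<p>A rough sketch of how an NSEC chain proves a negative (a simplified model, not the wire format; the zone names below are invented):</p>
<pre><code class="lang-python">import bisect

def covering_nsec(existing_names, queried):
    # Toy authenticated denial of existence: given the sorted names
    # that DO exist in the zone, return the NSEC link (previous name,
    # next name) proving that `queried` falls into a gap.
    names = sorted(existing_names)
    if queried in names:
        raise ValueError("name exists, no denial needed")
    i = bisect.bisect_left(names, queried)
    prev_name = names[i - 1]           # i == 0 wraps to the last name
    next_name = names[i % len(names)]  # the chain is circular
    return (prev_name, next_name)

zone = ["alpha.example.", "delta.example.", "zulu.example."]
# A query for beta.example. is answered with the alpha-to-delta link,
# proving that nothing exists between those two names.
print(covering_nsec(zone, "beta.example."))
</code></pre>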
<p><strong>Delegation Signer</strong>
A DS record is a DNSSEC record type that is used to secure a delegation. DS records are used to build authentication chains to child zones.</p>
<p><strong>Trust anchors</strong>
DNSKEY and DS resource records are also called trust anchors or trust points. A trust anchor must be distributed to all nonauthoritative DNS servers that will perform DNSSEC validation of DNS responses for a signed zone. If the DNS server is running on a domain controller, trust anchors are stored in the forest directory partition in Active Directory Domain Services (AD DS) and can be replicated to all domain controllers in the forest. On standalone DNS servers, trust anchors are stored in a file named <em>TrustAnchors.dns</em>.</p>
<p><strong>DNSSEC Key Management</strong></p>
<p>DNSSEC key management strategy includes planning for key generation, key storage, key expiration, and key replacement. Together, key expiration and replacement in DNSSEC is called key rollover.</p>
<h2 id="heading-how-dnssec-works">How does DNSSEC work?</h2>
<p>DNSSEC uses digital signatures and cryptographic keys to validate that DNS responses are authentic. A DNS zone can be secured with DNSSEC using a process called zone signing. Signing a zone with DNSSEC adds validation support to a zone without changing the basic mechanism of a DNS query and response. Validation of DNS responses occurs using digital signatures that are included with DNS responses. These digital signatures are contained in new, DNSSEC-related resource records that are generated and added to the zone during zone signing.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1656215380588/KwznH3pde.png" alt="1_GLQ4YeaOBFvV6u-sfhm17Q.png" class="image--center mx-auto" /></p>
<p>When a DNSSEC-aware recursive or forwarding DNS server receives a query from a DNS client for a DNSSEC-signed zone, it will request that the authoritative DNS server also send DNSSEC records and then attempt to validate the DNS response using these records. A recursive or forwarding DNS server recognizes that the zone supports DNSSEC if it has a DNSKEY, also called a trust anchor, for that zone.</p>
<p><strong>DNSSEC validation</strong>
A recursive DNS server uses the DNSKEY resource record to validate responses from the authoritative DNS server by decrypting digital signatures that are contained in DNSSEC-related resource records, and then by computing and comparing hash values. If the hash values are the same, it replies to the DNS client with the DNS data that was requested, such as a host (A) resource record. If the hash values differ, it replies with a SERVFAIL message. In this way, a DNSSEC-capable resolving DNS server with a valid trust anchor installed protects against DNS spoofing attacks whether or not the DNS clients themselves are DNSSEC-aware.</p>
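<p>The accept-or-SERVFAIL decision can be modeled in a few lines (a deliberately simplified sketch: real DNSSEC verifies a public-key signature over the record set, not a bare hash, and the addresses here are invented):</p>
<pre><code class="lang-python">import hashlib

def validate_response(record_data, hash_from_rrsig):
    # Toy model of the resolver's decision described above: recompute
    # the hash of the record, compare it with the hash recovered from
    # the RRSIG, answer with the data on a match and SERVFAIL otherwise.
    if hashlib.sha256(record_data).digest() == hash_from_rrsig:
        return record_data      # hand the A record to the client
    return "SERVFAIL"           # validation failed

good_hash = hashlib.sha256(b"93.184.216.34").digest()
assert validate_response(b"93.184.216.34", good_hash) == b"93.184.216.34"
assert validate_response(b"6.6.6.6", good_hash) == "SERVFAIL"
</code></pre>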
<h2 id="heading-dns-server-vulnerabilities">DNS Server Vulnerabilities</h2>
<p>DNS is too important to do without, but it's difficult to defend. In fact, DNS services are an excellent target for attack. Taking out an organization's DNS service renders it unreachable to the rest of the world except by IP address. If "f5.com" failed to be published online, every single Internet site and service we ran would be invisible: web servers, VPNs, mail services, file transfer sites, everything. Even worse, if hackers could change the DNS records, they could redirect everyone to sites they controlled. Imagine going to "www.f5.com" and landing on a page full of banner ads. Since DNS is built upon cooperation between millions of servers and clients over insecure and unreliable protocols, it is uniquely vulnerable to disruption, subversion, and hijacking. Here's a quick rundown of the known major DNS attacks.</p>
<h3 id="heading-denial-of-service">Denial of Service</h3>
<p><em>Denial-of-service</em> attacks are not limited to DNS, but taking out DNS decapitates an organization. Why bother flooding thousands of web sites when killing a single service does it all for you? The most famous DoS attacks against DNS are the Dyn, Inc. DDoS attacks, which blared over 40 gigabytes of noise at their DNS services. Dyn was running DNS services for many major organizations, so when they were drowned by a flood of illegitimate packets, so were companies like Amazon, Reddit, FiveThirtyEight, and Visa.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1656215573444/6LgSum3uM.png" alt="dos-attack.png" class="image--center mx-auto" /></p>
<p>There are many ways to knock out DNS service, the simplest being a stream of garbage from thousands of compromised hosts (bots) in a DDoS attack. Instead of clogging up the pipe, attackers can also overwork the server with DNS query flood attacks from thousands of bots.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1656215668018/yJbsjQzzI.webp" alt="ddos-attacks1.webp" class="image--center mx-auto" /></p>
<p>DNS can also be subverted for use as a denial-of-service weapon against other sites by way of DNS Amplification/Reflection. This works because DNS almost always returns a larger set of data than what was queried. A simple DNS query asking for F5.com only amounts to a few hundred bytes at most, while the response will be several orders of magnitude larger. This way an attacker can amplify network traffic through DNS servers, building up a tsunami from a ripple. Since DNS runs over UDP, it's a simple matter for attackers to craft fake packets spoofing a query source, so if they can fake thousands of queries from the victim's IP address, that tsunami of responses will return to overwhelm the victim. A bonus for the attacker is that, to the victim, it will appear as if a
huge number of DNS servers are attacking it. All the while, the attacker stays safely hidden.</p>
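<p>The amplification arithmetic is easy to sketch (the packet sizes below are invented but of a representative order of magnitude):</p>
<pre><code class="lang-python"># Back-of-the-envelope view of DNS amplification: a small spoofed
# query elicits a much larger response aimed at the victim.
query_bytes = 60          # a minimal UDP DNS query
response_bytes = 3000     # a large, DNSSEC-laden response

amplification = response_bytes / query_bytes
print("amplification factor:", amplification)   # 50.0

# 10 Mbit/s of spoofed queries becomes roughly 500 Mbit/s of
# responses arriving at the spoofed (victim) address.
victim_mbps = 10 * amplification
</code></pre>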
<h2 id="heading-dns-hijacking">DNS Hijacking</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1656215801333/HVMCZfr-P.webp" alt="dns-hijacking.webp" class="image--center mx-auto" /></p>
<p>Who owns what domain name, and which DNS servers are designated to answer queries for it, is managed by domain registrars. These are commercial services, such as GoDaddy, Nom, and Network Solutions Inc., where registered accounts store this information. If attackers can hack those accounts, they can repoint a domain to a DNS server they control. Attacks like this have affected the New York Times, LinkedIn, Dell, Harvard University, Coca-Cola, and many others.</p>
<h2 id="heading-dns-server-vulnerabilities">DNS Software Vulnerabilities</h2>
<p>Because DNS services are software, they are likely to contain bugs. It's possible that some of these bugs will create software vulnerabilities that attackers can exploit. That's just the way it is with all software written by imperfect carbon-based life forms. Luckily, DNS is old (so we've had time to find most of the bugs) and simple (so bugs are easy to spot), but problems have cropped up. In 2015, there was a rather significant hole found in BIND, an open-source DNS server running much of the Internet. Called <strong>CVE-2015-5477</strong> (no cute name, thank you), it allowed an attacker to crash a DNS server with a single crafted query.</p>
<p>Another software vulnerability in DNS servers is recursive cache poisoning (DNS spoofing), in which an attacker can temporarily change cached DNS entries by issuing specifically crafted queries and forged responses.</p>
<h2 id="heading-dns-data-leakage">DNS Data Leakage</h2>
<p>You can't run an unauthenticated Internet database full of important information without
the occasional risk of leaking out something important. Attackers will often repeatedly query DNS servers as a prelude to an attack, looking for interesting Internet services that may not be widely known. For example, an organization may have a site called vpn.example.com which it doesn't advertise to anyone except its employees. If an attacker discovers this site, they've just found a new potential target in an attack. DNS records can also aid phishing expeditions by using known server names in their phony baloney emails.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1656216015683/BI-YsX-FL.png" alt="DNS-Leak-Diagram-2.png" class="image--center mx-auto" /></p>
<p>Many organizations run DNS on the inside of the network, advertising local area network (LAN) resources for workstations. Some smaller organizations run split-horizon DNS servers that offer up <strong>Internet DNS services</strong> to the world as well as these LAN-based DNS services on the same box. A wrong configuration on that DNS server can lead to some devastating DNS data leakages as internal names and addresses are shared with attackers. Even giants can be tripped up by this seemingly simple vulnerability.</p>
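<p>One common mitigation is to keep internal and external answers strictly separated. As a sketch, a split-horizon setup in BIND 9 might look like the fragment below (the networks and zone file names are placeholders):</p>
<pre><code class="lang-text">acl internal-nets { 192.168.0.0/16; 10.0.0.0/8; };

view "internal" {
    match-clients { internal-nets; };
    zone "example.com" { type master; file "zones/example.com.internal"; };
};

view "external" {
    match-clients { any; };
    zone "example.com" { type master; file "zones/example.com.public"; };
};
</code></pre>
<p>Internal clients see the full zone, while the rest of the world only ever receives the public records.</p>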
<h2 id="heading-final-thoughts"><em>Final thoughts:</em></h2>
<p>So, there you have it! It is clear how critical the DNS is to the functioning of the internet. As the internet grows, so too does the DNS and the number of domain names and IP addresses registered. Knowing how the DNS works and keeping some of the best practices in mind is pivotal to a positive user experience as well as the success of your own website.</p>
]]></content:encoded></item><item><title><![CDATA[Docker Cheatsheet 🐳]]></title><description><![CDATA[Manage images
docker build
Create an image from a Dockerfile.
docker build [options] .
  -t "app/container_name"    # name
  --build-arg APP_HOME=$APP_HOME    # Set build-time variables

docker run
Deploys the container from docker image.
docker run ...]]></description><link>https://blog.craftedbrain.com/docker-cheetsheet</link><guid isPermaLink="true">https://blog.craftedbrain.com/docker-cheetsheet</guid><category><![CDATA[2Articles1Week]]></category><category><![CDATA[Docker]]></category><category><![CDATA[containers]]></category><category><![CDATA[cheatsheet]]></category><dc:creator><![CDATA[Divakar]]></dc:creator><pubDate>Sat, 25 Jun 2022 13:56:25 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1656165345804/JCzpd7PuK.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-manage-images">Manage images</h2>
<h2 id="heading-docker-build">docker build</h2>
<p>Create an <code>image</code> from a Dockerfile.</p>
<pre><code class="lang-shell">docker build [options] .
  -t "app/container_name"    # name
  --build-arg APP_HOME=$APP_HOME    # Set build-time variables
</code></pre>
<h2 id="heading-docker-run">docker run</h2>
<p>Runs a container from a <code>docker image</code>.</p>
<pre><code class="lang-shell">docker run [options] IMAGE
  # see `docker create` for options
</code></pre>
<h3 id="heading-example">Example</h3>
<p>Run a command in an <code>image</code>.</p>
<pre><code class="lang-shell">docker run -it debian:buster /bin/bash
</code></pre>
<h2 id="heading-manage-containers">Manage containers</h2>
<h2 id="heading-docker-create">docker create</h2>
<pre><code class="lang-yml"><span class="hljs-string">docker</span> <span class="hljs-string">create</span> [<span class="hljs-string">options</span>] <span class="hljs-string">IMAGE</span>
  <span class="hljs-string">-a,</span> <span class="hljs-string">--attach</span>               <span class="hljs-comment"># attach stdout/err</span>
  <span class="hljs-string">-i,</span> <span class="hljs-string">--interactive</span>          <span class="hljs-comment"># attach stdin (interactive)</span>
  <span class="hljs-string">-t,</span> <span class="hljs-string">--tty</span>                  <span class="hljs-comment"># pseudo-tty</span>
      <span class="hljs-string">--name</span> <span class="hljs-string">NAME</span>            <span class="hljs-comment"># name the container</span>
  <span class="hljs-string">-p,</span> <span class="hljs-string">--publish</span> <span class="hljs-number">5000</span><span class="hljs-string">:5000</span>    <span class="hljs-comment"># port map (host:container)</span>
      <span class="hljs-string">--expose</span> <span class="hljs-number">5432</span>          <span class="hljs-comment"># expose a port to linked containers</span>
  <span class="hljs-string">-P,</span> <span class="hljs-string">--publish-all</span>          <span class="hljs-comment"># publish all ports</span>
      <span class="hljs-string">--link</span> <span class="hljs-string">container:alias</span> <span class="hljs-comment"># linking</span>
  <span class="hljs-string">-v,</span> <span class="hljs-string">--volume</span> <span class="hljs-string">`pwd`:/app</span>    <span class="hljs-comment"># mount (absolute paths needed)</span>
  <span class="hljs-string">-e,</span> <span class="hljs-string">--env</span> <span class="hljs-string">NAME=hello</span>       <span class="hljs-comment"># env vars</span>
</code></pre>
<h4 id="heading-example">Example</h4>
<p>Create a <code>container</code> from an <code>image</code>.</p>
<pre><code class="lang-shell">$ docker create --name app_redis_1 \
  --expose 6379 \
  redis:3.0.2
</code></pre>
<h3 id="heading-docker-exec">docker exec</h3>
<p>Run a command inside a running container.</p>
<pre><code class="lang-shell">docker exec [options] CONTAINER COMMAND
  -d, --detach        # run in background
  -i, --interactive   # stdin
  -t, --tty           # interactive
</code></pre>
<h4 id="heading-example">Example</h4>
<p>Run commands in a <code>container</code>.</p>
<pre><code class="lang-shell">docker exec app_web_1 tail logs/development.log
docker exec -t -i app_web_1 rails c
</code></pre>
<h3 id="heading-docker-startstop">docker start/stop</h3>
<p>Start/stop a <code>container</code>.</p>
<pre><code class="lang-shell">docker start [options] CONTAINER
  -a, --attach        # attach stdout/err
  -i, --interactive   # attach stdin

docker stop [options] CONTAINER
</code></pre>
<h3 id="heading-docker-ps">docker ps</h3>
<p>Manage <code>container</code>s using ps/kill.</p>
<pre><code class="lang-shell">docker ps
docker ps -a
docker kill $ID
</code></pre>
<h3 id="heading-docker-logs">docker logs</h3>
<p>See what's being logged in a <code>container</code>.</p>
<pre><code class="lang-shell">docker logs $ID
docker logs $ID 2&gt;&amp;1 | less
docker logs -f $ID # Follow log output
</code></pre>
<h2 id="heading-images">Images</h2>
<h3 id="heading-docker-images">docker images</h3>
<p>Lists <code>image</code>s.</p>
<pre><code class="lang-shell">$ docker images
  REPOSITORY   TAG        ID
  ubuntu       12.10      b750fe78269d
  me/myapp     latest     7b2431a8d968
</code></pre>
<pre><code class="lang-shell">docker images -a   # also show intermediate
</code></pre>
<h3 id="heading-docker-rmi">docker rmi</h3>
<p>Deletes <code>image</code>s.</p>
<pre><code class="lang-shell">docker rmi b750fe78269d
</code></pre>
<h2 id="heading-clean-up">Clean up</h2>
<h3 id="heading-clean-all">Clean all</h3>
<p>Cleans up dangling images, containers, volumes, and networks (i.e., those not associated with a container).</p>
<pre><code class="lang-shell">docker system prune
</code></pre>
<p>Additionally removes any stopped containers and all unused images (not just dangling images).</p>
<pre><code class="lang-shell">docker system prune -a
</code></pre>
<h3 id="heading-containers">Containers</h3>
<pre><code class="lang-shell"># Stop all running containers
docker stop $(docker ps -a -q)

# Delete stopped containers
docker container prune
</code></pre>
<h3 id="heading-images">Images</h3>
<p>Delete dangling images (add <code>-a</code> to delete all unused images)</p>
<pre><code class="lang-shell">docker image prune [-a]
</code></pre>
<h3 id="heading-volumes">Volumes</h3>
<p>Delete all unused volumes</p>
<pre><code class="lang-shell">docker volume prune
</code></pre>
<h2 id="heading-sevices">Services</h2>
<p>To list all the services running in a swarm</p>
<pre><code class="lang-shell">docker service ls
</code></pre>
<p>To list the services in a specific stack</p>
<pre><code class="lang-shell">docker stack services stack_name
</code></pre>
<p>To see the logs of a service</p>
<pre><code class="lang-shell">docker service logs stack_name_service_name
</code></pre>
<p>To scale a service quickly across qualified nodes</p>
<pre><code class="lang-shell">docker service scale stack_name_service_name=replicas
</code></pre>
<h3 id="heading-clean-up">Clean up</h3>
<p>To clean or prune unused (dangling) images</p>
<pre><code class="lang-shell">docker image prune
</code></pre>
<p>To remove all images which are not used by containers, add <code>-a</code></p>
<pre><code class="lang-shell">docker image prune -a
</code></pre>
<p>To prune your entire system</p>
<pre><code class="lang-shell">docker system prune
</code></pre>
<p>To leave swarm</p>
<pre><code class="lang-shell">docker swarm leave
</code></pre>
<p>To remove a stack (deletes all the services in that stack)</p>
<pre><code class="lang-shell">docker stack rm stack_name
</code></pre>
<p>To kill all running containers</p>
<pre><code class="lang-shell">docker kill $(docker ps -q)
</code></pre>
<h2 id="heading-also-see">Also see</h2>
<ul>
<li><a target="_blank" href="http://www.docker.io/gettingstarted/">Getting Started</a> <em>(docker.io)</em></li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Containerization vs. Virtualization]]></title><description><![CDATA[What's the Difference ?

Containerization and virtualization, both, are methods of deploying many isolated services on the same platform and they are both prominent tools within the hosting world. Both are a means for storing data within hosting plat...]]></description><link>https://blog.craftedbrain.com/containerization-vs-virtualization</link><guid isPermaLink="true">https://blog.craftedbrain.com/containerization-vs-virtualization</guid><category><![CDATA[2Articles1Week]]></category><category><![CDATA[Docker]]></category><category><![CDATA[containers]]></category><category><![CDATA[Application Virtualization]]></category><category><![CDATA[Containerization vs. Virtualization]]></category><dc:creator><![CDATA[Divakar]]></dc:creator><pubDate>Sat, 25 Jun 2022 13:49:44 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1656164621412/H60M5Ci8g.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-whats-the-difference">What's the Difference ?</h1>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1656164647663/sdNqkMZpG.png" alt="vc.png" class="image--center mx-auto" /></p>
<p>Containerization and virtualization are both methods of deploying many isolated services on the same platform, and both are prominent tools within the hosting world. Both are a means of packaging and running workloads within hosting platforms. And although both terms are increasingly referenced, they are often confused.</p>
<p>Which is the better option? That topic is frequently up for debate and is unfortunately not easily answered. The truth is that the right option depends on each user’s needs. This article will first provide a rundown of both technologies to answer this question. It discusses their uses, the situations where they perform best, and compares the advantages and disadvantages of virtualization vs containerization.</p>
<p>Let’s understand both the concepts before diving into the differences between them.</p>
<h2 id="heading-what-is-virtualization">What is Virtualization ?</h2>
<p>Virtualization refers to the process of creating many instances of operating systems on the same computer.</p>
<p>These instances are called virtual machines. For the applications running over these virtual machines, it appears as if they are working on a system dedicated to that particular application.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1656164675913/PY9daOrPO.png" alt="vir.png" class="image--center mx-auto" /></p>
<p>Virtualization is not possible without the hypervisor. A hypervisor, or virtual machine monitor, is the software or firmware layer that enables multiple operating systems to run side-by-side, all with access to the same physical server resources. The hypervisor orchestrates and separates the available resources (computing power, memory, storage, etc.), aligning a portion to each virtual machine as needed.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1656164702385/80bRHoBrV.png" alt="hyper.png" class="image--center mx-auto" /></p>
<p><em>Examples of virtualization tools include VirtualBox, Hyper-V, VMWare Workstation Player, amongst many others.</em></p>
<p>Virtualization also has some shortcomings. Running multiple VMs at the same time on a host OS leads to performance degradation. This is because each guest OS runs on top of the host OS with its own kernel and dependencies, which takes up a large share of system resources such as disk, processor, and RAM.</p>
<h2 id="heading-what-is-containerization">What is Containerization ?</h2>
<p>Containers are a lighter-weight, more agile way of handling virtualization — since they don't use a hypervisor, you can enjoy faster resource provisioning and speedier availability of new applications.</p>
<p>Rather than spinning up an entire virtual machine, containerization packages together everything needed to run a single application or microservice (along with runtime libraries they need to run). The container includes all the code, its dependencies and even the operating system itself. This enables applications to run almost anywhere — a desktop computer, a traditional IT infrastructure or the cloud.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1656164833747/2VqM9Jjae.jpg" alt="cc.jpg" /></p>
<p>Containers use a form of operating system (OS) virtualization. Put simply, they leverage features of the host operating system to isolate processes and control the processes’ access to CPUs, memory and disk space.</p>
<p>Many containers share the same operating system even though they run different isolated applications. Containerization allows developers to create applications faster without having to worry about bugs when the application runs in a computing environment different from the one on which it was developed.</p>
<p><em>Containerization tools include Kubernetes, Docker, rkt (Rocket), Podman, and others.</em></p>
<h2 id="heading-containers-vs-vms-what-are-the-differences">Containers vs. VMs: What are the differences?</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1656164864501/_QIifnJxl.png" alt="vdr.png" class="image--center mx-auto" /></p>
<p>In traditional virtualization, a hypervisor virtualizes physical hardware. The result is that each virtual machine contains a guest OS, a virtual copy of the hardware that the OS requires to run, and an application with its associated libraries and dependencies. VMs with different operating systems can run on the same physical server: for example, a Windows VM can run next to a Linux VM, which runs next to another Windows VM, and so on.</p>
<p>Instead of virtualizing the underlying hardware, containers virtualize the operating system (typically Linux or Windows) so each individual container contains only the application and its libraries and dependencies. Containers are small, fast, and portable because, unlike a virtual machine, containers do not need to include a guest OS in every instance and can, instead, simply leverage the features and resources of the host OS.</p>
<h2 id="heading-which-is-better-virtualization-or-containerization">Which Is Better: Virtualization or Containerization?</h2>
<p>In comparing virtualization vs containerization, we see that each technology serves a different purpose. Determining the better option relies heavily on the user’s application needs and required server capacity. Virtualization and containerization are both data storage methods that create self-contained virtual packages. But, when comparing virtualization vs containerization, it will help to consider the following factors before deciding which one is right for your needs.</p>
<ol>
<li>Speed</li>
<li>Resources</li>
<li>Security and isolation</li>
<li>Portability and application sharing</li>
<li>Operating system requirements</li>
<li>Application lifecycle</li>
</ol>
<p>Choosing one method over the other is a big decision. IT managers should consider all of the significant differences before taking the plunge. To help you decide more efficiently, we’ve created a quick overview in the table below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1656164928189/_S6IhrDmR.png" alt="dp.png" /></p>
<h2 id="heading-wrapping-up">Wrapping Up</h2>
<p>Considering all the differences above, it is quite safe to say that containers and virtual machines can’t necessarily be used interchangeably.</p>
<p>Each has its benefits and specific scenarios where the other does not provide practical application. It is dependent on the user as to which system works best for them in the current scenario, and then, they can choose between Containerization and Virtualization.</p>
]]></content:encoded></item><item><title><![CDATA[A beginner’s guide to Docker  —  How to create your first Docker application 🐳]]></title><description><![CDATA[Before starting to build the app, let's first install the pre-requisites that are required for building the docker containers,
1. Install Docker on your machine

For Linux (Ubuntu/Debian Based distros):

sudo apt install docker.io

For MacOSX: you ca...]]></description><link>https://blog.craftedbrain.com/a-beginners-guide-to-docker-how-to-create-your-first-docker-application</link><guid isPermaLink="true">https://blog.craftedbrain.com/a-beginners-guide-to-docker-how-to-create-your-first-docker-application</guid><category><![CDATA[Docker]]></category><category><![CDATA[Dockerfile]]></category><category><![CDATA[docker-application]]></category><category><![CDATA[docker-beginner]]></category><dc:creator><![CDATA[Divakar]]></dc:creator><pubDate>Mon, 06 Jun 2022 18:30:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1656166029150/8yRpC6jO3.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h4 id="heading-before-starting-to-build-the-app-lets-first-install-the-pre-requisites-that-are-required-for-building-the-docker-containers">Before starting to build the app, let's first install the pre-requisites that are required for building the docker containers,</h4>
<h2 id="heading-1-install-docker-on-your-machine">1. Install Docker on your machine</h2>
<blockquote>
<p>For Linux (Ubuntu/Debian Based distros):</p>
</blockquote>
<pre><code class="lang-shell">sudo apt install docker.io
</code></pre>
<p>For MacOSX: you can follow this <a target="_blank" href="https://docs.docker.com/desktop/mac/install/">link</a> <br />
For Windows: you can follow this <a target="_blank" href="https://docs.docker.com/desktop/windows/install/">link</a> <br /></p>
<blockquote>
<p>For Windows (Enable WSL2 backend for optimized performance)</p>
</blockquote>
<h3 id="heading-finally-verify-that-docker-is-installed-correctly">Finally, verify that Docker is installed correctly</h3>
<pre><code class="lang-shell">sudo docker run hello-world
</code></pre>
<h2 id="heading-2-create-your-project">2. Create your project</h2>
<p>Create a new directory and three files: <code>main.py</code>, <code>Dockerfile</code>, and <code>requirements.txt</code>.</p>
<blockquote>
<p><code>main.py</code> -- contains the Python code for the web server.<br />
<code>Dockerfile</code> -- describes how the container image is built.<br />
<code>requirements.txt</code> -- the dependency file for the Python web server (Flask).<br /></p>
</blockquote>
<p>Add this code to your <code>main.py</code> to create the web server:</p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> flask <span class="hljs-keyword">import</span> Flask

app = Flask(__name__)

<span class="hljs-meta">@app.route('/')</span>
<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">hello_world</span>():</span>
 <span class="hljs-keyword">return</span> <span class="hljs-string">'Hello World'</span>


<span class="hljs-keyword">if</span> __name__ == <span class="hljs-string">'__main__'</span>:
    app.run(debug=<span class="hljs-literal">True</span>, host=<span class="hljs-string">'0.0.0.0'</span>, port=<span class="hljs-number">5000</span>)
</code></pre>
<p>Add the below dependency to the <code>requirements.txt</code> file:</p>
<pre><code class="lang-text">Flask==2.1.2
</code></pre>
<h2 id="heading-3-edit-the-docker-file">3. Edit the Docker file</h2>
<p>The first step when writing a Dockerfile is to browse the Docker Hub website. It hosts many pre-built images to save you time (for example, official images for Linux distributions and programming languages).</p>
<p>In our case, we will type ‘Python’ in the search bar. The first result is the official image created to execute Python. Perfect, we’ll use it!</p>
<pre><code class="lang-Dockerfile"># Start by pulling the python image
FROM python:3.8-alpine

# Copy every content from the local file to the image
COPY . /app

# switch working directory
WORKDIR /app

# install the dependencies and packages in the requirements file
RUN pip install -r /app/requirements.txt

# run the python code
CMD ["python", "main.py" ]
</code></pre>
<h3 id="heading-lets-go-over-the-instructions-in-this-dockerfile">Let’s go over the instructions in this Dockerfile</h3>
<p><strong>FROM python:3.8-alpine</strong>: Since Docker allows us to inherit existing images, we base our image on the official Python image. Alpine is a lightweight Linux distro that serves as the OS of our image.</p>
<p><strong>COPY . /app</strong>: This copies every file in the build context (our code and the requirements file) into the app folder of the image.</p>
<p><strong>WORKDIR /app</strong>: We set the working directory to /app, which will be the root directory of our application in the container.</p>
<p><strong>RUN pip install -r /app/requirements.txt</strong>: This command installs all the dependencies defined in the requirements.txt file into our application within the container.</p>
<p><strong>CMD ["python", "main.py"]</strong>: Finally, this is the command that runs the application when the container starts. It is similar to running the app on your terminal using the <code>python main.py</code> command.</p>
<h2 id="heading-4-build-the-docker-image">4. Build the Docker image</h2>
<p>Let’s proceed to build the image with the command below:</p>
<pre><code class="lang-shell">docker image build -t flask_docker .
</code></pre>
<h2 id="heading-5-run-the-container">5. Run the container</h2>
<p>After successfully building the image, the next step is to run an instance of the image. Here is how to perform this:</p>
<pre><code class="lang-shell">docker run -p 5000:5000 -d --name new_app flask_docker
</code></pre>
<p>This command starts the container with the Python Flask server listening on port 5000. Here is the output of our application when we send a request to localhost:5000 in the browser:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1656166293125/GuS1E7_kI.png" alt="op.png" class="image--center mx-auto" /></p>
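<p>To double-check the deployment from a script rather than the browser, a quick smoke test can hit the published port. Below is a minimal sketch using only the Python standard library; the helper names are illustrative, not part of Flask or Docker:</p>
<pre><code class="lang-python">from urllib.request import urlopen

def endpoint(host="localhost", port=5000):
    # URL of the published port from `docker run -p 5000:5000`
    return f"http://{host}:{port}/"

def is_up(url, timeout=5):
    # True if the containerized Flask app answers with HTTP 200
    try:
        with urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

if __name__ == "__main__":
    print("app reachable:", is_up(endpoint()))
</code></pre>
<p>Run it while the container is up; it reports whether the server responded.</p>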
<h3 id="heading-track-the-container-status">Track the container status</h3>
<p>The command below lists all containers (running and stopped); verify that your deployed container shows a running status.</p>
<pre><code class="lang-shell">docker ps -a
</code></pre>
<h2 id="heading-6-useful-commands-for-docker">6. Useful commands for Docker</h2>
<blockquote>
<h3 id="heading-list-your-images">List your images</h3>
</blockquote>
<pre><code class="lang-shell">docker images
</code></pre>
<blockquote>
<h3 id="heading-delete-a-specific-image">Delete a specific image</h3>
</blockquote>
<pre><code class="lang-shell">docker image rm [image name]
</code></pre>
<blockquote>
<h3 id="heading-list-all-existing-containers-running-and-not-running">List all existing containers (running and not running)</h3>
</blockquote>
<pre><code class="lang-shell">docker ps -a
</code></pre>
<blockquote>
<h3 id="heading-stop-a-specific-container">Stop a specific container</h3>
</blockquote>
<pre><code class="lang-shell">docker stop [container name]
</code></pre>
<blockquote>
<h3 id="heading-delete-a-specific-container">Delete a specific container</h3>
</blockquote>
<pre><code class="lang-shell">docker rm [container name]
</code></pre>
<blockquote>
<h3 id="heading-display-logs-of-a-container">Display logs of a container</h3>
</blockquote>
<pre><code class="lang-shell">docker logs [container name]
</code></pre>
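<p>If you find yourself running these commands from scripts, they can be wrapped with Python's <code>subprocess</code> module. A small sketch (the helper names are my own, not part of the Docker CLI):</p>
<pre><code class="lang-python">import subprocess

def docker_cmd(*args):
    # Assemble the argv list for a docker CLI invocation
    return ["docker", *args]

def docker(*args):
    # Run a docker command and return its stdout (requires Docker installed)
    out = subprocess.run(docker_cmd(*args), capture_output=True,
                         text=True, check=True)
    return out.stdout

# e.g. docker("ps", "-a"), docker("logs", "new_app"), docker("stop", "new_app")
</code></pre>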
<blockquote>
<p>Note: If you want to learn more about Dockerfiles, check out <a target="_blank" href="https://docs.docker.com/develop/develop-images/dockerfile_best-practices/">Best practices for writing Dockerfiles</a>.</p>
</blockquote>
<h2 id="heading-7-conclusion">7. Conclusion</h2>
<p>In this article, we built a simple Flask app and containerized it with Docker. You can refer to this post every time you need a simple and concrete example on how to create your first Docker application. If you have any questions or feedback, feel free to ask.</p>
]]></content:encoded></item><item><title><![CDATA[Introduction to Docker 🐳]]></title><description><![CDATA[In this article lets try to understand one of the most popular tools used to containerize and deploy applications i.e. Docker. It makes packaging & deploying applications extremely easy.
We will try to look at the things that make Docker so special a...]]></description><link>https://blog.craftedbrain.com/introduction-to-docker</link><guid isPermaLink="true">https://blog.craftedbrain.com/introduction-to-docker</guid><category><![CDATA[Docker]]></category><category><![CDATA[containers]]></category><category><![CDATA[Dockerfile]]></category><category><![CDATA[dockerhub]]></category><category><![CDATA[docker-engine]]></category><dc:creator><![CDATA[Divakar]]></dc:creator><pubDate>Wed, 05 Jan 2022 14:05:13 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1656165737154/EQrLp7Ccz.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In this article, let's try to understand one of the most popular tools used to containerize and deploy applications: Docker. It makes packaging &amp; deploying applications extremely easy.</p>
<p>We will look at what makes Docker so special and learn how to build, deploy, and fetch applications with Docker &amp; Docker Hub in just a few steps.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1656165698130/cpvswKjOI.png" alt="dlgo.png" /></p>
<p>Docker is an open source containerization platform. <br />
It enables developers to package applications into containers—standardized executable components combining application source code with the operating system (OS) libraries and dependencies required to run that code in any environment. Docker has been a game-changer since its release in 2013.</p>
<h2 id="heading-why-use-docker">Why use Docker?</h2>
<p>You have probably heard the iconic phrase "It works on my machine". Well, why don't we give that machine to the customer?</p>
<ul>
<li>Improved—and seamless—portability</li>
<li>Even lighter weight and more granular updates</li>
<li>Automated container creation</li>
<li>Container versioning</li>
<li>Container reuse</li>
<li>Shared container libraries</li>
</ul>
<h2 id="heading-docker-architecture">Docker architecture</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1656165776548/_Uqp81uct.png" alt="key.png" class="image--center mx-auto" /></p>
<p>Docker uses a client-server architecture. The client talks to the Docker daemon, which does the heavy lifting of building, running, and distributing your Docker containers. The Docker client and daemon can run on the same system, or you can connect a Docker client to a remote Docker daemon. The Docker client and daemon communicate using a REST API, over UNIX sockets or a network interface. Another Docker client is Docker Compose, which lets you work with applications consisting of a set of containers.</p>
<h2 id="heading-core-components-of-docker">Core components of Docker</h2>
<ol>
<li>Dockerfile</li>
<li>Docker Image</li>
<li>Docker Container</li>
<li>Docker Engine</li>
<li>Docker registry</li>
</ol>
<h3 id="heading-dockerfile">Dockerfile</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1656165799606/wdCIj5DtH.png" alt="dock_layers.png" class="image--center mx-auto" /></p>
<p>A Dockerfile is a script that consists of a set of instructions on how to build a Docker image. These instructions include specifying the operating system, languages, environment variables, file locations, network ports, and other components needed to run the image. All the commands in the file are grouped and executed automatically.</p>
<h3 id="heading-docker-image">Docker Image</h3>
<ul>
<li>It is a file, comprised of multiple layers, used to execute code in a Docker container.</li>
<li>They act as a set of instructions used to create Docker containers.</li>
</ul>
<h3 id="heading-docker-container">Docker Container</h3>
<p>It is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings.</p>
<h3 id="heading-docker-engine">Docker Engine</h3>
<p>The Docker Engine (DE) is installed on the host machine and represents the core of the Docker system. It is a lightweight runtime system and the underlying client-server technology that creates and manages containers.</p>
<p>Docker Engine consists of three components:</p>
<ul>
<li>Server - the Docker daemon (dockerd), which is responsible for creating and managing containers.</li>
<li>Rest API - establishes communication between programs and Docker and instructs dockerd what to do.</li>
<li>Command Line Interface (CLI) - used for running Docker commands.</li>
</ul>
<h3 id="heading-docker-registry">Docker Registry</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1656165825672/MZ225GCnY.png" alt="red.png" /></p>
<p>A Docker registry is a storage and distribution system for named Docker images. The same image might have multiple different versions, identified by their tags.</p>
<p>A Docker registry is organized into Docker repositories, where a repository holds all the versions of a specific image. The registry allows Docker users to pull images locally, as well as push new images to the registry.</p>
<h3 id="heading-dockerhub">DockerHub</h3>
<p>DockerHub is a hosted registry solution by Docker Inc. Besides public and private repositories, it also provides automated builds, organization accounts, and integration with source control solutions like Github and Bitbucket.</p>
<h2 id="heading-to-wrap-up">To Wrap Up</h2>
<blockquote>
<p>Docker is a game-changer. But it is not a one-size-fits-all solution.</p>
</blockquote>
<p>Whether you like it or not, this technology has a future. There are some developers and development agencies that hate Docker and try to eliminate it from all their ongoing projects. At the same time, there are specialists who containerize everything they can because they see Docker as a panacea. Perhaps, you should not join either camp. Stay impartial, stay objective, and make a decision depending on a particular situation.</p>
]]></content:encoded></item><item><title><![CDATA[Common problems and solutions of TensorFlow GPU installation]]></title><description><![CDATA[When i first started using Tensorflow GPU setup, I often encounter problems. I have installed it several times and often encounter the same or similar problems. So I plan to record it and hope it can help others…
Inconsistent Libraries

Initially i u...]]></description><link>https://blog.craftedbrain.com/common-problems-and-solutions-of-tensorflow-gpu-installation</link><guid isPermaLink="true">https://blog.craftedbrain.com/common-problems-and-solutions-of-tensorflow-gpu-installation</guid><category><![CDATA[Deep Learning]]></category><category><![CDATA[tensorflow-gpu]]></category><category><![CDATA[Python 3]]></category><category><![CDATA[cuda]]></category><category><![CDATA[NVIDIA]]></category><dc:creator><![CDATA[Divakar]]></dc:creator><pubDate>Tue, 26 Jan 2021 13:38:23 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1656164261462/Ojt7YzBiC.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When I first started using the <a target="_blank" href="https://developpaper.com/tag/tensorflow/">TensorFlow</a> GPU setup, I often encountered problems. I have installed it several times and often hit the same or similar issues, so I decided to record them here in the hope that they help others…</p>
<h2 id="heading-inconsistent-libraries">Inconsistent Libraries</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1656164158119/ScKPMZsmr.png" alt="gpu_ver.png" /></p>
<p>Initially I used to install TensorFlow with improper versions of CUDA &amp; cuDNN, which led to several problems. Even a slight mismatch between library and binary versions is troublesome to fix, so I recommend everyone follow the chart above and install the right versions on your machine.</p>
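<p>To make the chart actionable in code, the tested pairs can be kept in a small lookup table. Below is a sketch with a few rows as I recall them from TensorFlow's tested build configurations; treat the exact values as illustrative and verify them against the official chart before installing:</p>
<pre><code class="lang-python"># TensorFlow release -> (CUDA, cuDNN) tested pairs (verify on tensorflow.org)
TESTED = {
    "2.4.0": ("11.0", "8.0"),
    "2.3.0": ("10.1", "7.6"),
    "2.1.0": ("10.1", "7.6"),
    "1.15.0": ("10.0", "7.4"),
}

def required_libs(tf_version):
    # Return the (CUDA, cuDNN) pair tested with a given TensorFlow release
    if tf_version not in TESTED:
        raise ValueError(f"no tested entry for TensorFlow {tf_version}")
    return TESTED[tf_version]
</code></pre>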
<h2 id="heading-microsoft-visual-c-2015-redistributable-update-3-is-not-installed">Microsoft Visual C++ 2015 Redistributable Update 3 is not installed</h2>
<p>The TensorFlow modules need the run-time components of the Visual C++ libraries to work properly, so download and install them from the link below in case you face issues.</p>
<p><a target="_blank" href="https://www.microsoft.com/en-us/download/details.aspx?id=52685">https://www.microsoft.com/en-us/download/details.aspx?id=52685</a></p>
<h2 id="heading-updating-environment-path-to-windows-set-your-path">Updating Environment Path to Windows (Set your PATH)</h2>
<p>After installing the CUDA and cuDNN libraries, don’t forget to add their paths to the system environment so that TensorFlow can find CUDA, as shown below.</p>
<pre><code class="lang-shell">export PATH="/c/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v9.0/bin:$PATH"
export PATH="/c/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v9.0/extras/CUPTI/libx64:$PATH"
export PATH="/c/tools/cuda/bin:$PATH"
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1656164197069/jCf_3g02V.png" alt="sysvar.png" /></p>
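<p>A quick way to catch a wrong or missing path before fighting TensorFlow errors is to check that the expected CUDA subdirectories actually exist. A small standard-library sketch, with the directory layout taken from the exports above:</p>
<pre><code class="lang-python">from pathlib import Path

def missing_cuda_dirs(cuda_root):
    # Return the expected CUDA subdirectories that are absent
    root = Path(cuda_root)
    expected = [root / "bin", root / "extras" / "CUPTI" / "libx64"]
    return [str(p) for p in expected if not p.is_dir()]

# e.g. missing_cuda_dirs(r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0")
# returns an empty list when the installation is complete
</code></pre>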
<h2 id="heading-cudart64-dll-error">Cudart64 dll Error</h2>
<p>When running TensorFlow code, you may initially get a cudart64 DLL error, which prevents GPU execution. I recommend extracting the DLL from the zip linked below and pasting it into,</p>
<blockquote>
<p>C:\Windows\System32</p>
</blockquote>
<p><a target="_blank" href="https://drive.google.com/file/d/10kKz9YRRmTtMj4vZHTt8fNrrrbgD2ooU/view">https://drive.google.com/file/d/10kKz9YRRmTtMj4vZHTt8fNrrrbgD2ooU/view</a></p>
<h2 id="heading-test-tensorflow-gpu-installation">Test Tensorflow GPU installation</h2>
<p>To verify a successful installation of TensorFlow, try running this on your machine; hopefully it completes without any errors.</p>
<pre><code class="lang-python">import tensorflow as tf
#Device Name
print('Device Name: '+tf.test.gpu_device_name())
# Version-check
print('Version: '+tf.__version__)
#CUDA Support
print('CUDA Support: '+str(tf.test.is_built_with_cuda()))
</code></pre>
]]></content:encoded></item><item><title><![CDATA[TensorFlow Deep learning Setup using GPU]]></title><description><![CDATA[The interest on deep-learning has been growing enormous in the past couple of months but in order to get started we need a stable development environment. I find many beginners facing problems while installing libraries and setting up environment. As...]]></description><link>https://blog.craftedbrain.com/tensorflow-deep-learning-setup-using-gpu</link><guid isPermaLink="true">https://blog.craftedbrain.com/tensorflow-deep-learning-setup-using-gpu</guid><category><![CDATA[TensorFlow]]></category><category><![CDATA[Python]]></category><category><![CDATA[GPU]]></category><category><![CDATA[tensorflow-gpu]]></category><category><![CDATA[cuda]]></category><dc:creator><![CDATA[Divakar]]></dc:creator><pubDate>Tue, 08 Sep 2020 13:32:26 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1656163665427/qEfYcclZ-.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The interest on deep-learning has been growing enormous in the past couple of months but in order to get started we need a stable development environment. I find many beginners facing problems while installing libraries and setting up environment. As i have faced first time when i was trying. So this guide is totally for beginners .</p>
<h1 id="heading-installation-setup">Installation Setup</h1>
<p>We will cover the following steps:</p>
<ol>
<li>Install Anaconda &amp; Python</li>
<li>Install/ Update GPU Drivers</li>
<li>Install CUDA Toolkit &amp; cuDNN</li>
<li>Add Environment Variables to the PATH in Windows</li>
<li>Install TensorFlow &amp; Keras</li>
<li>Verify the package run</li>
</ol>
<h3 id="heading-step-1-installation-of-anaconda">(Step-1) Installation of Anaconda</h3>
<p>In this step, kindly download the Anaconda Python package manager for your platform (Windows/Linux) and install it accordingly.</p>
<p><a target="_blank" href="https://www.anaconda.com/products/distribution">https://www.anaconda.com/products/distribution</a></p>
<h3 id="heading-step-2-install-gpu-drivers-cuda-101-requires-418x-or-higher">(Step-2) Install GPU Drivers — CUDA 10.1 requires 418.x or higher</h3>
<p>Now, Choose your appropriate graphics driver and install it, I recommend you to update to the latest version for better performance.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1656163826216/J3kXRZI_Y.png" alt="nvidia_driver.png" /></p>
<p>NVIDIA Drivers: <a target="_blank" href="https://www.nvidia.com/Download/index.aspx?lang=en-us">https://www.nvidia.com/Download/index.aspx?lang=en-us</a></p>
<h3 id="heading-step-3-install-cuda-toolkit-tensorflow-supports-cuda-101-tensorflow-andgt-210">(Step-3) Install CUDA Toolkit — TensorFlow supports CUDA 10.1 (TensorFlow &gt;= 2.1.0)</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1656163848299/Morz9XKfW.png" alt="cuda.png" /></p>
<blockquote>
<p><code>Note:</code> Kindly choose the CUDA version according to your Nvidia GPU version to avoid errors.</p>
</blockquote>
<ol>
<li>Choose the desired platform and download it  <a target="_blank" href="https://developer.nvidia.com/cuda-downloads?target_os=Windows&amp;target_arch=x86_64&amp;target_version=10"><strong>Cuda Toolkit</strong></a></li>
</ol>
<blockquote>
<p>Make sure you have the right CUDA version and drivers installed else the setup won't work!</p>
</blockquote>
<ol start="2">
<li>Install the CUDA Toolkit with default settings; it usually takes a while, so bear with it!</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1656163862419/7OOk8x7gw.png" alt="c_tool.png" /></p>
<h3 id="heading-step-4-adding-cudnn-libraries">(Step-4) Adding Cudnn libraries</h3>
<p>Cudnn libraries provide accelerated performance on GPU usage, so we need to add it in for smoother and efficient performance.</p>
<p><a target="_blank" href="https://developer.nvidia.com/cudnn-download-survey">https://developer.nvidia.com/cudnn-download-survey</a></p>
<p>It will prompt you to create an account; go ahead, sign up, and download the appropriate version for your platform.</p>
<p>Now extract the Cudnn libraries zip file and copy all the files to</p>
<blockquote>
<p>“C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.2”</p>
</blockquote>
<p>location and overwrite any existing files there (the version folder should match your installed CUDA toolkit).</p>
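<p>The copy-and-overwrite step can also be scripted. A hedged sketch using the standard library (Python 3.8+ for <code>dirs_exist_ok</code>; the function name and example paths are mine):</p>
<pre><code class="lang-python">import shutil

def install_cudnn(extracted_dir, cuda_dir):
    # Merge the extracted cuDNN folders (bin, include, lib) into the
    # CUDA toolkit directory, overwriting files that already exist
    shutil.copytree(extracted_dir, cuda_dir, dirs_exist_ok=True)

# e.g. install_cudnn(r"C:\Downloads\cudnn",
#                    r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.2")
</code></pre>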
<h3 id="heading-step-5-add-environment-variables-to-the-path-in-windows">(Step-5) Add Environment Variables to the PATH in Windows</h3>
<ol>
<li>Open Run using (Win + R) and type sysdm.cpl and press Enter</li>
<li>Under System Properties, please select the Tab Advanced.</li>
<li>In Environment Variables go to System variables</li>
<li>Click on New and add the variable below,</li>
<li>Click ok and Save it.</li>
</ol>
<blockquote>
<p>Variable name = CUDA_PATH</p>
<p>Variable value = C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1 (match the version you installed)</p>
</blockquote>
<h3 id="heading-step-6-install-tensorflow-andamp-keras">(Step-6) Install TensorFlow &amp; Keras</h3>
<p>Open command prompt and type in,</p>
<pre><code class="lang-shell">pip install tensorflow-gpu
</code></pre>
<p>After a successful installation, try running the program below to verify the setup.</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> tensorflow <span class="hljs-keyword">as</span> tf
<span class="hljs-comment"># Device Name</span>
print(<span class="hljs-string">'Device Name: '</span>+tf.test.gpu_device_name())
<span class="hljs-comment"># Version-check</span>
print(<span class="hljs-string">'Version: '</span>+tf.__version__)
<span class="hljs-comment"># CUDA Support</span>
print(<span class="hljs-string">'CUDA Support: '</span>+str(tf.test.is_built_with_cuda()))
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1656163747692/92BX4zwnu.png" alt="tf_setup.png" /></p>
<p>If you face any missing-library issue, kindly download the zip below, extract it, and paste the files into</p>
<p><a target="_blank" href="https://drive.google.com/file/d/10kKz9YRRmTtMj4vZHTt8fNrrrbgD2ooU/view?usp=sharing">https://drive.google.com/file/d/10kKz9YRRmTtMj4vZHTt8fNrrrbgD2ooU/view?usp=sharing</a></p>
<pre><code class="lang-commandprompt">C:\Windows\System32
</code></pre>
<p>If you need the packages and setup directly, you can refer to my GitHub repo:</p>
<p><a target="_blank" href="https://github.com/rexdivakar/Deep-Learning-Setup">https://github.com/rexdivakar/Deep-Learning-Setup</a></p>
<p>Congratulations! 😉 You have successfully created an environment for using TensorFlow, Keras (with Tensorflow backend) over GPU on Windows!</p>
]]></content:encoded></item><item><title><![CDATA[Track real-time metrics of TensorFlow Model during training using Notifly]]></title><description><![CDATA[Notifly  is a Pypi package designed to track the model metrics during real-time training using a wrapper over Tensorflow callbacks, which plots the accuracy and loss over each epoch. Notifly also tracks the system resources over the runtime thus prov...]]></description><link>https://blog.craftedbrain.com/track-real-time-metrics-of-tensorflow-model-during-training-using-notifly</link><guid isPermaLink="true">https://blog.craftedbrain.com/track-real-time-metrics-of-tensorflow-model-during-training-using-notifly</guid><category><![CDATA[Python]]></category><category><![CDATA[TensorFlow]]></category><category><![CDATA[Deep Learning]]></category><category><![CDATA[GPU]]></category><dc:creator><![CDATA[Divakar]]></dc:creator><pubDate>Sat, 18 Jan 2020 18:30:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1656160535299/aRqptMIYm.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>Notifly</strong> is a PyPI package designed to track model metrics during real-time training using a wrapper over TensorFlow callbacks, plotting the accuracy and loss at each epoch. Notifly also tracks system resources over the run, providing details about the training at runtime. It can also share the details over <em>Discord, Telegram, Teams and Slack</em>.</p>
<h4 id="heading-built-with-python-3httpswwwpythonorg">Built with <a target="_blank" href="https://www.python.org/">Python 3</a></h4>
<h3 id="heading-prerequisites">Prerequisites:</h3>
<ul>
<li>Python
It comes preinstalled on Ubuntu 20.04. To check the version, use:<pre><code class="lang-shell">python3 --version
</code></pre>
If it is not preinstalled for some reason, go <a target="_blank" href="https://www.python.org/">here</a> and download it as required.
Then run the following commands in a terminal to install the packages required to run the tool locally:</li>
<li>Using the requirements file:</li>
</ul>
<pre><code class="lang-shell">pip3 install -r requirements.txt
</code></pre>
<ul>
<li>Directly download packages:<pre><code class="lang-shell">pip3 install requests==2.24.0
pip3 install matplotlib==3.2.2
pip3 install slackclient==2.9.3
</code></pre>
</li>
</ul>
<h2 id="heading-install-the-package">Install the package</h2>
<p>Run the following terminal commands to install the package on the given distros.</p>
<ul>
<li>Termux:</li>
</ul>
<pre><code class="lang-shell">pkg install python3
</code></pre>
<pre><code class="lang-shell">pip3 install notifly
</code></pre>
<ul>
<li>Ubuntu/Debian</li>
</ul>
<pre><code class="lang-shell">sudo apt install python3-pip
</code></pre>
<pre><code class="lang-shell">pip3 install notifly
</code></pre>
<ul>
<li>Arch</li>
</ul>
<pre><code class="lang-shell">sudo pacman -S python3-pip
</code></pre>
<pre><code class="lang-shell">pip3 install notifly
</code></pre>
<p><strong><em>This may take a while depending on the network speed.</em></strong></p>
<h2 id="heading-working-of-the-tool">Working of the tool</h2>
<h3 id="heading-telegram">Telegram</h3>
<p>To see how the tool works,</p>
<ol>
<li>Create the <a target="_blank" href="https://telegram.org/blog/bot-revolution">telegram bot</a>.</li>
<li><p>Getting the bot API token</p>
<ol>
<li>Search for and open <code>@BotFather</code>.</li>
<li>Message <code>/mybots</code>.</li>
<li>Select the bot.</li>
<li>Select the <em>API token</em> displayed in message.</li>
<li>Copy and use in sample code.</li>
</ol>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> notifly <span class="hljs-keyword">import</span> telegram              <span class="hljs-comment">#import the package    </span>
x = telegram.Notifier(<span class="hljs-string">'bot API token'</span>)   <span class="hljs-comment">#create object of class Notifier</span>
x.send_message(<span class="hljs-string">'message'</span>)                <span class="hljs-comment">#send message</span>
x.send_image(<span class="hljs-string">"image address"</span>)            <span class="hljs-comment">#send image(.jpg or .png format)</span>
x.send_file(<span class="hljs-string">"file address"</span>)              <span class="hljs-comment">#send document</span>
x.session_dump()                         <span class="hljs-comment">#creates folder named 'downloads' in local folder, downloads/saves message,chat details for current session in 'sessio_dump.json' file</span>
</code></pre>
</li>
<li>Run sample code.</li>
</ol>
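<p>Under the hood, a call like <code>x.send_message('message')</code> boils down to an HTTP request against the Telegram Bot API (<code>https://api.telegram.org/bot&lt;token&gt;/sendMessage</code>). A standard-library sketch of the request construction, with a placeholder token and chat id:</p>
<pre><code class="lang-python">from urllib.parse import urlencode

API = "https://api.telegram.org"

def send_message_url(token, chat_id, text):
    # Build the Bot API request URL that sends `text` to `chat_id`
    query = urlencode({"chat_id": chat_id, "text": text})
    return f"{API}/bot{token}/sendMessage?{query}"

# to actually send: urllib.request.urlopen(send_message_url("YOUR_TOKEN", 123456, "hello"))
</code></pre>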
<h3 id="heading-discord">Discord</h3>
<p>To see how the tool works,</p>
<ol>
<li>Create server.</li>
<li><p>Create and copy server webhooks <a target="_blank" href="https://discordjs.guide/popular-topics/webhooks.html#creating-webhooks">instruction</a> and use in sample code.</p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> notifly <span class="hljs-keyword">import</span> discord
x = discord.Notifier(<span class="hljs-string">r'webhook'</span>)         <span class="hljs-comment">#create object of class Notifier</span>
x.send_message(<span class="hljs-string">'message'</span>)                <span class="hljs-comment">#send message</span>
x.send_file(<span class="hljs-string">"file address"</span>)              <span class="hljs-comment">#send file</span>
x.send_file(<span class="hljs-string">"image address"</span>)             <span class="hljs-comment">#send image</span>
</code></pre>
</li>
<li>Run sample code.</li>
</ol>
<h3 id="heading-slack">Slack</h3>
<p>To see how the tool works,</p>
<ol>
<li>Create app. Follow these steps,<ol>
<li>Go <a target="_blank" href="https://api.slack.com/">here</a> to create a new API for slack.</li>
<li>Choose to  <code>Create an App</code> .</li>
<li>Enter <em>App Name</em> and select workspace. Click <code>Create App</code>.</li>
<li>Under <strong>Add features and functionality</strong> select <code>Incoming Webhooks</code> and <strong>Activate Incoming Webhooks</strong>.</li>
<li>Scroll down, select <code>Add New Webhook to Workspace</code> and select a channel from the drop-down. This channel name is used as an argument in the sample code. Click <code>Allow</code>.</li>
<li>Select <strong>OAuth &amp; Permissions</strong> from left-sidebar.</li>
<li>Under <strong>Scopes</strong> &gt; <strong>Bot Token Scopes</strong>  click <code>Add an OAuth Scope</code> and add the following scopes,
<br /><code>chat:write</code>   <code>chat:write.public</code>   <code>files:write</code>   <code>users:write</code></li>
<li>Scroll up, under <strong>OAuth Tokens for Your Team</strong> copy the <em>Bot User OAuth Access Token</em> to use in sample code.</li>
<li>Click <code>Reinstall to Workspace</code>, select channel and click <code>Allow</code>.</li>
</ol>
</li>
<li><p>Write sample code.</p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> notifly <span class="hljs-keyword">import</span> slack
x = slack.Notifier(<span class="hljs-string">'token'</span>, channel=<span class="hljs-string">'channel-name'</span>)      <span class="hljs-comment">#create object of class Notifier</span>
x.send_message(<span class="hljs-string">'message'</span>)      <span class="hljs-comment">#send message</span>
x.send_file(<span class="hljs-string">"image or file address"</span>)      <span class="hljs-comment">#send image/file</span>
</code></pre>
</li>
<li>Run sample code.</li>
</ol>
<h3 id="heading-tensorflow-integration">Tensorflow Integration</h3>
<p>A plug-and-play feature for your TensorFlow callbacks:</p>
<pre><code class="lang-python"><span class="hljs-comment"># create your notifier using above methods</span>
<span class="hljs-keyword">import</span> tensorflow <span class="hljs-keyword">as</span> tf
<span class="hljs-keyword">from</span> notifly <span class="hljs-keyword">import</span> discord
notifier = discord.Notifier(<span class="hljs-string">r'webhook'</span>)
<span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">MyNotifierCallback</span>(tf.keras.callbacks.Callback):</span>

<span class="hljs-meta">    @notifier.notify_on_epoch_begin(epoch_interval=1, graph_interval=1, hardware_stats_interval=1)</span>
    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">on_epoch_begin</span>(<span class="hljs-params">self, epoch, logs=None</span>):</span>
        <span class="hljs-keyword">pass</span>

<span class="hljs-meta">    @notifier.notify_on_epoch_end(epoch_interval=1, graph_interval=1, hardware_stats_interval=1)</span>
    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">on_epoch_end</span>(<span class="hljs-params">self, epoch, logs=None</span>):</span>
        <span class="hljs-keyword">pass</span>

<span class="hljs-meta">    @notifier.notify_on_train_begin()</span>
    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">on_train_begin</span>(<span class="hljs-params">self, logs=None</span>):</span>
        <span class="hljs-keyword">pass</span>

<span class="hljs-meta">    @notifier.notify_on_train_end()</span>
    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">on_train_end</span>(<span class="hljs-params">self, logs=None</span>):</span>
        <span class="hljs-keyword">pass</span>

model.fit(callbacks=[MyNotifierCallback()])
</code></pre>
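<p>The decorator pattern above is easy to reason about in plain Python: the notifier wraps each callback hook, fires a notification at the configured interval, and then hands control back. A framework-free sketch with a stub notifier that records messages instead of posting them (all names here are illustrative, not notifly's internals):</p>
<pre><code class="lang-python">import functools

class StubNotifier:
    # Stand-in for a notifly Notifier: records messages instead of sending
    def __init__(self):
        self.sent = []

    def notify_on_epoch_end(self, epoch_interval=1):
        notifier = self  # captured by the wrapper below
        def decorate(fn):
            @functools.wraps(fn)
            def wrapper(cb, epoch, logs=None):
                if epoch % epoch_interval == 0:
                    notifier.sent.append((epoch, logs))
                return fn(cb, epoch, logs)
            return wrapper
        return decorate

notifier = StubNotifier()

class MyCallback:  # in real code, subclass tf.keras.callbacks.Callback
    @notifier.notify_on_epoch_end(epoch_interval=2)
    def on_epoch_end(self, epoch, logs=None):
        pass

cb = MyCallback()
for epoch in range(4):
    cb.on_epoch_end(epoch, logs={"loss": 0.1 * epoch})
# notifier.sent now holds entries for epochs 0 and 2 only
</code></pre>
<p>notifly layers the hardware stats and metric plotting on top of the same hook points.</p>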
<h2 id="heading-learn-more-about-notifly">Learn more about Notifly ✨</h2>
<p>Read the <a target="_blank" href="https://github.com/rexdivakar/Notifly/wiki">wiki pages</a> which has all the above steps in great detail with some examples as well 🤩🎉.</p>
<h2 id="heading-contributing">Contributing</h2>
<ol>
<li>Fork the Project</li>
<li>Create your Feature Branch<blockquote>
<p>git checkout -b feature/mybranch</p>
</blockquote>
</li>
<li>Commit your Changes<blockquote>
<p>git commit -m 'Add something'</p>
</blockquote>
</li>
<li>Push to the Branch<blockquote>
<p>git push origin feature/mybranch</p>
</blockquote>
</li>
<li>Open a Pull Request<br /><br />
Follow the given commands or use the amazing <strong><em>GitHub GUI</em></strong><br />
<strong>Happy Contributing</strong></li>
</ol>
]]></content:encoded></item></channel></rss>