<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Programming with Paulers]]></title><description><![CDATA[Coding the world one function at a time.]]></description><link>https://paulers.com</link><generator>RSS for Node</generator><lastBuildDate>Tue, 14 Apr 2026 23:48:33 GMT</lastBuildDate><atom:link href="https://paulers.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[NestJS Is Basically ASP.NET Core, And That's Great!]]></title><description><![CDATA[Not clickbait, I swear. Hear me out. Let's drop some dates first:

ASP.NET 5 was announced in mid-late 2015

ASP.NET Core was RC'ed in January 2016 and released in June 2016.

NestJS 1.0's first NPM publish date was May 14, 2017.


Now that we have t...]]></description><link>https://paulers.com/nestjs-is-basically-aspnet-core-and-thats-great</link><guid isPermaLink="true">https://paulers.com/nestjs-is-basically-aspnet-core-and-thats-great</guid><category><![CDATA[nestjs]]></category><category><![CDATA[asp.net core]]></category><category><![CDATA[REST API]]></category><dc:creator><![CDATA[Paul K]]></dc:creator><pubDate>Mon, 21 Aug 2023 15:30:09 GMT</pubDate><content:encoded><![CDATA[<p>Not clickbait, I swear. Hear me out. Let's drop some dates first:</p>
<ol>
<li><p>ASP.NET 5 was announced in mid-late 2015</p>
</li>
<li><p>ASP.NET Core was RC'ed in January 2016 and released in June 2016.</p>
</li>
<li><p>NestJS 1.0's first NPM publish date was May 14, 2017.</p>
</li>
</ol>
<p>Now that we have the chronological order out of the way, you see why I worded the title the way I did. It's not "ASP.NET Core Is Basically NestJS" because that would imply that NestJS came first -- it did not.</p>
<p>Now that we have the controversy handled, let's look at the similarities and some differences. At the end I'll offer my opinion on when to use one or the other.</p>
<h2 id="heading-similarities">Similarities</h2>
<div class="hn-table">
<table>
<thead>
<tr>
<td>ASP.NET Core</td><td>NestJS</td><td></td></tr>
</thead>
<tbody>
<tr>
<td>Controllers</td><td>Controllers</td><td>Accept incoming HTTP requests and route them to responsible services.</td></tr>
<tr>
<td>Services</td><td>Providers</td><td>In both frameworks, these can represent repositories, helpers, factories, etc. Services can be dependency-injected into other services and controllers.</td></tr>
<tr>
<td>Namespaces</td><td>Modules</td><td>In NestJS, modules organize a "closely related set of capabilities". In ASP.NET, namespaces usually organize by type of class. These are different, but both serve as a way to organize the codebase.</td></tr>
<tr>
<td>Middleware</td><td>Middleware</td><td>Process request and response objects.</td></tr>
<tr>
<td>Filters</td><td>Pipes</td><td>Filters in ASP.NET are used for a variety of things, including what NestJS uses pipes for.</td></tr>
<tr>
<td>Filters</td><td>Guards</td><td>Authorization Filters in ASP.NET are the same as Guards in NestJS.</td></tr>
<tr>
<td>Exception Handlers</td><td>Exception Filters</td><td>Essentially the same functionality: convert a thrown exception into a response the user can understand.</td></tr>
<tr>
<td>IConfiguration</td><td>ConfigModule</td><td>App configuration</td></tr>
<tr>
<td>ModelValidation</td><td>ValidationPipe</td><td>Incoming payload validation</td></tr>
<tr>
<td>ApiVersioning</td><td>Versioning</td><td>REST endpoint versioning</td></tr>
<tr>
<td>ILogger</td><td>Logger</td><td>Included logging</td></tr>
</tbody>
</table>
</div><p>Just looking at the above table, you can see the two frameworks cover all the bases. There are more similarities I have omitted, or this table would get out of control. For example, both frameworks support WebSockets, gRPC, unit testing, GraphQL, response caching and auth.</p>
<p>Both frameworks are built around dependency inversion (the D in SOLID) and inversion-of-control containers, and both are generally very object-oriented. Anyone coming from ASP.NET Core and strongly typed C# will feel right at home in NestJS with TypeScript. It's a natural transition.</p>
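<p>To make the parallel concrete, here's a minimal sketch of constructor injection, the pattern both frameworks automate with their DI containers. The <code>CatsService</code>/<code>CatsController</code> names are illustrative, not taken from either framework:</p>

```typescript
// A provider (NestJS) / service (ASP.NET Core): plain class with no framework coupling.
class CatsService {
  private cats: string[] = [];
  add(name: string): void { this.cats.push(name); }
  findAll(): string[] { return this.cats; }
}

// A controller declares its dependency in the constructor.
// In NestJS or ASP.NET Core the container supplies it; here we wire it by hand.
class CatsController {
  constructor(private readonly service: CatsService) {}
  getCats(): string[] { return this.service.findAll(); }
}

const controller = new CatsController(new CatsService());
```

<p>NestJS resolves the constructor argument from a module's providers; ASP.NET Core does the same from its service collection. Same idea, different registration syntax.</p>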
<h2 id="heading-differences">Differences</h2>
<ul>
<li><p>NestJS supports cron-based scheduled tasks. ASP.NET Core's BackgroundService does not support cron expressions out of the box.</p>
</li>
<li><p>NestJS has first-class integrations with ORMs like TypeORM and Sequelize. ASP.NET has Entity Framework Core, which is okay. They're not quite the same thing, but they share the same goal -- simplify data access.</p>
</li>
<li><p>NestJS has first-party transport implementations for microservices. ASP.NET can support different transports, but not out of the box.</p>
</li>
<li><p>NestJS supports delegation of long-running or CPU-intensive tasks to Redis-backed queues via Bull. It's cool that this is built in; I don't believe ASP.NET has anything comparable out of the box.</p>
</li>
<li><p>ASP.NET Core's <a target="_blank" href="https://www.techempower.com/benchmarks/#section=data-r21">benchmark performance</a> still trounces Node.js, even on Fastify.</p>
</li>
</ul>
<blockquote>
<p>Note: It's interesting that out of the popular REST API frameworks, ASP.NET Core is by far the fastest (14th overall), with Fiber (Go) coming in at 34th and nothing else in the top 100. Also, these benchmarks don't really matter.</p>
</blockquote>
<ul>
<li><p>ASP.NET Core's integration testing framework with TestServer is arguably better than anything Jest can offer.</p>
</li>
<li><p>ASP.NET Core has a giant corporation behind it and millions of dollars in resources. NestJS is run by a single guy (not really, but you get the picture).</p>
</li>
</ul>
<p>These differences are largely irrelevant, except, perhaps, for the cron task support in NestJS. There are NuGet packages which improve background tasks in ASP.NET Core, but they're not available out of the box.</p>
<h2 id="heading-im-starting-a-business-which-should-i-use">I'm Starting a Business, Which Should I Use?</h2>
<p>As with everything in life, it depends. Do you have a team? What is your team's competency? Do you want enterprise-level support when the time comes? Where will you host?</p>
<p>You can write anything with JavaScript and almost everything with C#. C# developers are going to be a bit more expensive than JavaScript developers. Hosting C# code is going to be a bit more labor-intensive than anything running on Node and potentially also more expensive. You can deploy a Node app to the edge on Cloudflare or Railway for free in a matter of minutes. When you scale and you're running on Node, you'll be piecing together disparate services like Railway, Upstash, Axiom and Clerk. If you're running ASP.NET in Azure, you'll get everything you need in one convenient place and have just one bill to pay (although a higher bill).</p>
<h3 id="heading-recommendation">Recommendation</h3>
<p>If you're bootstrapping a hipster world-altering startup out of your garage, NestJS is the better choice, assuming you don't need any kind of audit compliance any time soon and you're okay with refactoring later when you join the big leagues.</p>
<p>If you're building something in a regulated industry (aerospace, health, finance), and you're planning to bring in more seasoned and serious engineers who take security and performance seriously, I recommend ASP.NET. There's a reason these huge industries run on enterprise-supported languages like Java and C# and not Node.</p>
<p>You may also consider Gin or Fiber (Golang) if you're writing microservices. If you're writing anything real-time like a game server, you won't be using Node, ASP.NET or Go anyway.</p>
<p>In the end, this is not a very high-stakes choice. Look at the big picture and make the call. You won't be disappointed with either choice!</p>
]]></content:encoded></item><item><title><![CDATA[Frontend Framework Analysis - Aug 2023]]></title><description><![CDATA[If someone asked me "If you were starting a project from scratch today, which front-end framework would you choose", my answer would be Svelte. Actually... Vue. Just kidding, it must be React, right? What about Angular?
If you read my REST API articl...]]></description><link>https://paulers.com/frontend-framework-analysis-aug-2023</link><guid isPermaLink="true">https://paulers.com/frontend-framework-analysis-aug-2023</guid><category><![CDATA[Frontend Development]]></category><category><![CDATA[frontend]]></category><category><![CDATA[React]]></category><category><![CDATA[Svelte]]></category><category><![CDATA[Vue.js]]></category><dc:creator><![CDATA[Paul K]]></dc:creator><pubDate>Mon, 14 Aug 2023 15:00:09 GMT</pubDate><content:encoded><![CDATA[<p>If someone asked me "If you were starting a project from scratch today, which front-end framework would you choose", my answer would be Svelte. Actually... Vue. Just kidding, it must be React, right? What about Angular?</p>
<p>If you read my REST API article, you might already know what's coming: <strong>it doesn't matter.</strong> That, there, is a bit misleading though, and I'll explain why a bit later in this article. First, let's look at the <a target="_blank" href="https://survey.stackoverflow.co/2023/#most-popular-technologies-webframe-prof">StackOverflow 2023 Survey</a>.</p>
<h3 id="heading-data-data-everywhere">Data, data everywhere!</h3>
<p>The most popular front-end framework is React. This should be a non-controversial statement. From the survey:</p>
<ol>
<li><p>React - 42.87%</p>
</li>
<li><p>jQuery - 22.85%</p>
</li>
<li><p>Angular - 19.89%</p>
</li>
<li><p>Express - 19.51%</p>
</li>
<li><p>Vue - 17.64%</p>
</li>
<li><p>Next.js - 17.3%</p>
</li>
</ol>
<p>Svelte is way down the list at 6.01%, Blazor at 5.41% and Nuxt.js at 3.89%. We'll need these numbers a bit later, so they're here for reference.</p>
<p>Scrolling down a bit we find the <a target="_blank" href="https://survey.stackoverflow.co/2023/#technology-admired-and-desired">admired/desired section</a> where we can see that raw React is not especially useful and most people prefer to use its meta cousin, Next.js. Even further down is the <a target="_blank" href="https://survey.stackoverflow.co/2023/#technology-worked-with-vs-want-to-work-with">"worked with/want to work with" section</a> where we learn that React developers want to work with Next.js, and are curious about Angular, Svelte and Vue.</p>
<p>Q: Why are we reviewing all this?<br />A: We can see what the current market looks like and trends away or towards some technologies.</p>
<h3 id="heading-data-review-amp-derivatives">Data review &amp; derivatives</h3>
<ul>
<li><p>React devs want to check out Vue, Svelte and Angular</p>
</li>
<li><p>Vue devs don't want to work with anything other than Vue</p>
</li>
<li><p>Svelte devs don't want to work with anything other than Svelte</p>
</li>
<li><p>Angular devs want to check out React</p>
</li>
</ul>
<p>When considering all the above SO survey data, we can derive several assumptions:</p>
<ul>
<li><p>React and Next.js make up 60% of the market, but React by itself is not liked much. Devs who use React want to use Next.js. This is because Next.js is a batteries-included framework that makes building full apps significantly easier compared to piecing together various React packages.</p>
</li>
<li><p>Vue and Svelte devs either started with those frameworks or came from React and don't want to go back. This makes sense, because both Vue and Svelte have a better developer experience than React.</p>
</li>
<li><p>Angular devs are interested in React. Here, I'm puzzled. If anything, they should be interested in Next.js, since that's closer to Angular than React is. Perhaps professional curiosity?</p>
</li>
<li><p>Nobody cares about ASP.NET Core or Blazor.</p>
</li>
<li><p>People learning to code just focus on what all the YouTubers tell them to - Node/Express, React/NextJS. This is why adoption of arguably better frameworks like Svelte and Vue is low. When you already learned React and it's 60% of the market, why learn anything else?</p>
</li>
</ul>
<h3 id="heading-it-doesnt-matter">It doesn't matter</h3>
<p>Let's answer the question I posed at the start of this article and explain why it doesn't matter which framework you choose.</p>
<p>If I were starting a greenfield project today, which framework would I choose? Svelte.</p>
<p>Why doesn't it matter? Because if you know JavaScript well, you can pick up any of these frameworks with ease. They all have phenomenal documentation (yay react.dev) and going from Vue to React, React to Svelte or Svelte to Vue is maybe a week or two of getting used to new syntax and some framework-specific quirks. Everything else is just JavaScript.</p>
<blockquote>
<p>My front-end journey: Backbone (2013) -&gt; Knockout (2015) -&gt; Angular (2016) -&gt; React (2017) -&gt; Vue (2018) -&gt; Svelte (2023). I still have a KnockoutJS project I support, and recently dusted off React. No problem at all.</p>
</blockquote>
<p>React has JSX, but you're mostly writing JavaScript. It's verbose, and hooks are not amazing. Vue has a lot of "magic" resulting in fast dev-to-customer cycles, but there's a bit of ramp-up for new developers learning said "magic". Svelte is super lightweight, and there's a fair bit of magic too, but the syntax is a lot simpler. Svelte is also different in that it's a compiler. You never ship the framework to the customer, resulting in a smaller bundle size.</p>
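<p>To illustrate how lightweight Svelte's syntax is, here's a complete counter component in standard Svelte 3/4 syntax -- state is a plain variable, and reassignment triggers the update:</p>

```svelte
<script>
  let count = 0;
</script>

<button on:click={() => count += 1}>
  Clicked {count} times
</button>
```

<p>No hooks, no setters, no boilerplate; the compiler turns the reassignment into targeted DOM updates at build time.</p>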
<h3 id="heading-closing-thoughts">Closing Thoughts</h3>
<p>If you're a CTO or an entrepreneur thinking about which framework you should go with, ask yourself these questions:</p>
<ol>
<li><p>Will you hire junior/mid-level front-end developers?</p>
<ol>
<li><p>If yes, they probably learned React on YouTube, so if your workforce is proficient in React but not JavaScript, go with React.</p>
</li>
<li><p>If not, you'll likely hire a senior full-stack or front-end engineer, who should be able to pick up any framework you choose. I recommend Svelte.</p>
</li>
</ol>
</li>
<li><p>Are you thinking about building a mobile app to go along with your web app?</p>
<ol>
<li><p>If yes, go with React. React Native is a thing and it's pretty good when you're bootstrapping a business. React skills are 60% transferable to React Native.</p>
</li>
<li><p>If not, go with Svelte.</p>
</li>
</ol>
</li>
<li><p>Are you going to have an API product along with your web app?</p>
<ol>
<li>If yes, consider a meta framework like NextJS, NuxtJS or SvelteKit. These come with a backend you can just reuse for your API without needing to build a separate project.</li>
</ol>
</li>
<li><p>Do you want your developers to be super productive?</p>
<ol>
<li>Svelte</li>
</ol>
</li>
<li><p>Do you already know Angular and plan on building everything yourself?</p>
<ol>
<li>Svel... just kidding, go with what you know best: Angular</li>
</ol>
</li>
</ol>
<p>Above are my opinions, based on my own experience with React, Vue and Svelte.</p>
<p>There are many other things to consider. Mobile-first businesses may consider React Native, but also Flutter and .NET MAUI. If you're going with Flutter, you're in Google's ecosystem, so Angular may be a viable choice, especially if you're using Firebase as your backend. Anyone building with .NET MAUI, good luck! You're in the Microsoft ecosystem, so you may want to consider Blazor -- especially since Blazor Hybrid makes component transfer between projects a breeze.</p>
<p>Let me leave you with this:</p>
<p>If you hire strong front-end developers with solid JavaScript basics, they will be able to jump into any framework you throw at them. Ramp-up time is 1-2 weeks, so it honestly doesn't matter which framework you choose.</p>
<p>Happy building!</p>
]]></content:encoded></item><item><title><![CDATA[Building REST APIs with C#, Go, Python, JavaScript]]></title><description><![CDATA[Browsing around job postings with the query "Senior Software Engineer", a common responsibility that comes up is "Build out high-quality APIs and web services providing a scalable, efficient and tailored set of interfaces." This generic requirement, ...]]></description><link>https://paulers.com/building-rest-apis-with-c-go-python-javascript</link><guid isPermaLink="true">https://paulers.com/building-rest-apis-with-c-go-python-javascript</guid><category><![CDATA[REST API]]></category><category><![CDATA[cloud architecture]]></category><category><![CDATA[Startups]]></category><dc:creator><![CDATA[Paul K]]></dc:creator><pubDate>Wed, 09 Aug 2023 06:49:49 GMT</pubDate><content:encoded><![CDATA[<p>Browsing around job postings with the query "Senior Software Engineer", a common responsibility that comes up is "Build out high-quality APIs and web services providing a scalable, efficient and tailored set of interfaces." This generic requirement, translated, is asking "can you build a REST API?" The "scalable, efficient and tailored set of interfaces" likely refers to the endpoints not exploding when many concurrent users hit them.</p>
<p>Most popular programming languages have their REST API frameworks:</p>
<ul>
<li><p>Node has Express, Fastify and Nest</p>
</li>
<li><p>.NET has ASP.NET Core</p>
</li>
<li><p>Python has FastAPI, Flask and Django</p>
</li>
<li><p>Go has Echo, Fiber and Gin</p>
</li>
<li><p>Rust has Rocket and Rustless</p>
</li>
</ul>
<p>I didn't list other languages here as I am less familiar with those, but out of the 5 listed above, I've tried most and there is one definite truth about them: <strong>They are all essentially the same.</strong></p>
<p>There are differences between how Rust and Go, or Python and .NET handle stuff behind the scenes, but my hot take is not meant to compare lab benchmarks of all these frameworks. Instead, I'm talking about the dev and customer experience. Let's narrow it down a bit.</p>
<h3 id="heading-dev-experience">Dev Experience</h3>
<p>Some of the above frameworks are a bit more complicated than others. ASP and Django are "batteries-included" frameworks, NestJS is more serious than Express and Fastify, while Echo and Fiber are overall less capable than Gin, but still have everything you'd ever need to write an API. At the core of these frameworks is their promise to serve REST requests. Here are some examples:</p>
<pre><code class="lang-go">app := fiber.New()

app.Get(<span class="hljs-string">"/"</span>, <span class="hljs-function"><span class="hljs-keyword">func</span><span class="hljs-params">(c *fiber.Ctx)</span> <span class="hljs-title">error</span></span> {
    <span class="hljs-keyword">return</span> c.SendString(<span class="hljs-string">"Hello, World 👋!"</span>)
})

app.Listen(<span class="hljs-string">":3000"</span>)
</code></pre>
<p>Another:</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">const</span> express = <span class="hljs-built_in">require</span>(<span class="hljs-string">'express'</span>)
<span class="hljs-keyword">const</span> app = express()

app.get(<span class="hljs-string">'/'</span>, <span class="hljs-function">(<span class="hljs-params">req, res</span>) =&gt;</span> {
  res.send(<span class="hljs-string">'Hello World!'</span>)
})

app.listen(<span class="hljs-number">3000</span>)
</code></pre>
<p>And another:</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">var</span> builder = WebApplication.CreateBuilder(args);
<span class="hljs-keyword">var</span> app = builder.Build();

app.MapGet(<span class="hljs-string">"/"</span>, () =&gt; <span class="hljs-string">"Hello World!"</span>);

app.Run();
</code></pre>
<p>One more:</p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> flask <span class="hljs-keyword">import</span> Flask
app = Flask(__name__)

<span class="hljs-meta">@app.route("/")</span>
<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">hello</span>():</span>
    <span class="hljs-keyword">return</span> <span class="hljs-string">"Hello World!"</span>

<span class="hljs-keyword">if</span> __name__ == <span class="hljs-string">"__main__"</span>:
    app.run()
</code></pre>
<p>And last one...</p>
<pre><code class="lang-rust"><span class="hljs-meta">#![feature(proc_macro_hygiene, decl_macro)]</span>

<span class="hljs-meta">#[macro_use]</span> <span class="hljs-keyword">extern</span> <span class="hljs-keyword">crate</span> rocket;

<span class="hljs-meta">#[get(<span class="hljs-meta-string">"/"</span>)]</span>
<span class="hljs-function"><span class="hljs-keyword">fn</span> <span class="hljs-title">index</span></span>() -&gt; &amp;<span class="hljs-symbol">'static</span> <span class="hljs-built_in">str</span> {
    <span class="hljs-string">"Hello, world!"</span>
}

<span class="hljs-function"><span class="hljs-keyword">fn</span> <span class="hljs-title">main</span></span>() {
    rocket::ignite().mount(<span class="hljs-string">"/"</span>, routes![index]).launch();
}
</code></pre>
<p>As you can see in the above examples, these are all conceptually the same, simply different syntax since they're all written in different languages.</p>
<ol>
<li><p>Instantiate a server object</p>
</li>
<li><p>Define a route, then add a handler to it.</p>
</li>
</ol>
<blockquote>
<p>Note: I'm not familiar with Rust, but wanted to include it since it's been all the rage recently.</p>
</blockquote>
<p>ASP.NET is neat because it can get as simple or as complicated as you need it to be without needing additional 3rd party packages. The syntax above is called "minimal", but the framework supports the MVC pattern, dependency injection and more.</p>
<p>Node users will usually start with Express, then be told they're fools for using it and be shown the light of Fastify. Once they switch to Fastify, they'll discover nobody actually uses it in serious production apps, because NestJS exists. Coincidentally, NestJS is built on top of Express/Fastify. Ah, the world of Node is a doozy.</p>
<p>Python has several good options too, with Django being the 1000 pound gorilla... but FastAPI and Flask are popular too. If you're a Go dev, you may first find Gin and then learn about Fiber.</p>
<p>If you're an entrepreneurial spirit thinking about starting your own SaaS, you may look at this list and collapse in a heap. But wait! Good news! It doesn't matter which one you choose, because the customer cares about the experience, not how that experience is delivered.</p>
<h3 id="heading-customer-experience">Customer Experience</h3>
<p>Whether you go with ASP.NET, NestJS, Gin or Flask to deliver your API, the experience will be the same. The customer will get a Bearer token somewhere, toss it into the header, then send a GET or POST request to your API. They'll get a JSON response and be pleased. Sure, sure... in some cases, they'll send a GraphQL or a gRPC request (in which case you will want to make sure your framework supports these), but 98.4% of the time you'll serve a JSON response either directly to the customer or to your mobile or web app.</p>
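<p>In code, "toss it into the header" amounts to this (the endpoint and token below are hypothetical, purely for illustration):</p>

```typescript
// Build the request a typical API customer sends.
// Pass the result to fetch/axios/your HTTP client of choice.
function buildRequest(token: string) {
  return {
    method: 'GET',
    url: 'https://api.example.com/v1/widgets', // hypothetical endpoint
    headers: {
      Authorization: `Bearer ${token}`,
      Accept: 'application/json',
    },
  };
}
```

<p>Nothing in that request reveals -- or depends on -- which framework serves it.</p>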
<p>As I mentioned earlier, I'm not considering minute differences in performance when serializing huge data sets, or any kind of data transformation that might occur in the controllers/handlers of these frameworks. In most cases, data transformation should not be happening in the API anyway -- ideally, it shouldn't be happening "live" at all. And if JSON serialization speed is a real concern, you're designing your API incorrectly anyway (or you're Salesforce/Oracle and don't care about customer experience).</p>
<p>Customers make requests and receive responses. Regardless of the framework you use, your kickass new SaaS will be successful.</p>
<h3 id="heading-parting-thoughts">Parting Thoughts</h3>
<p>If you've designed your cloud architecture correctly, the framework does not matter. If you're consciously building a monolith, you may opt for Django. If you're gonna be running several or several hundred microservices, Go, Flask, Fastify and Rustless are awesome choices. If you're writing a robust customer-facing API, NestJS and ASP.NET are great choices. You can even write everything in just one language and framework like ASP.NET, Express or Flask.</p>
<ul>
<li><p>You won't need Rust's garbage collector-less performance advantages if you're just starting out. Don't sweat the details -- pick whatever is the quickest to be productive with.</p>
</li>
<li><p>Any decent software engineer (especially Sr level) can join your company and be productive with any of the above frameworks even if they have never used them or written any code in the language.</p>
<blockquote>
<p>I've been coding primarily in C# and JavaScript, but I picked up Go/Fiber in one week and then Python/Flask the week after. Once you're an expert in one language, other languages are a cakewalk. Truly.</p>
</blockquote>
</li>
<li><p>I didn't mention meta frameworks like NextJS, NuxtJS and SvelteKit. These come with their own REST APIs that marry the backend and frontend technologies in one tech stack. These are all JavaScript/TypeScript and should be strongly considered, especially if you're building a web app.</p>
</li>
<li><p>Before making a technology decision, take a step back and look at the big picture. Are you going to need to scale quickly? Do you have a mobile app and a web app? Are there microservices? Think about cron jobs or any ETLs you're gonna write. Is your product AI based (where you need to train your own models) or do you just plug into OpenAI or Bard? How about community and support?</p>
</li>
</ul>
<p>At the end of the day, the API framework you choose for your customer-facing product is irrelevant. The API framework you choose for backing microservices is mostly irrelevant (there are some less than optimal choices, but nothing that would sink your business). What you build your front-end in -- well, that may be a bit relevant... more about that in the next article!</p>
]]></content:encoded></item><item><title><![CDATA[Designing a SaaS in 2023 - Frontend]]></title><description><![CDATA[Our chosen stack is Nuxt 3, which is Vue 3 + Typescript. This is a solid and future-proof tech stack. The project has been around for a while, there's a moderately sized community and several big-name backers. We're also going to use Bulma, but we're...]]></description><link>https://paulers.com/designing-a-saas-in-2023-frontend</link><guid isPermaLink="true">https://paulers.com/designing-a-saas-in-2023-frontend</guid><category><![CDATA[SaaS]]></category><category><![CDATA[advice]]></category><category><![CDATA[frontend]]></category><category><![CDATA[architecture]]></category><dc:creator><![CDATA[Paul K]]></dc:creator><pubDate>Tue, 21 Feb 2023 03:00:50 GMT</pubDate><content:encoded><![CDATA[<p>Our chosen stack is <a target="_blank" href="https://nuxt.com">Nuxt 3</a>, which is Vue 3 + Typescript. This is a solid and future-proof tech stack. The project has been around for a while, there's a moderately sized community and several big-name backers. We're also going to use <a target="_blank" href="https://bulma.io">Bulma</a>, but we're not going to use a component library. To find out the reasons behind this decision, read on!</p>
<h2 id="heading-packages">Packages</h2>
<p>Here are some other libraries off NPM we're going to use with Nuxt:</p>
<ul>
<li><p>date-fns - date formatting and manipulation made easy!</p>
</li>
<li><p>@auth0/auth0-spa-js - we'll need this to facilitate logging in with Auth0</p>
</li>
<li><p>@hapi/iron - this will be used to seal/unseal a cookie on the server</p>
</li>
<li><p>jose - used to parse JWKS on the server</p>
</li>
<li><p>@pinia/nuxt - global state management on the front-end (goes with pinia below)</p>
</li>
<li><p>numeral - number formatting made easy!</p>
</li>
<li><p>sass - we'll need this to transpile sass into CSS</p>
</li>
<li><p>bulma - this is our CSS framework</p>
</li>
</ul>
<p>And some Nuxt modules:</p>
<ul>
<li><p>icon</p>
</li>
<li><p>color-mode</p>
</li>
<li><p>device</p>
</li>
<li><p>pinia</p>
</li>
</ul>
<p>You may be wondering why we didn't opt to go with Tailwind, WindiCSS or UnoCSS. This is where big-picture planning comes into play. We're building a SaaS application to support our main product, which is IoT hardware. This means that our customer is not expected to spend a lot of time in the UI, if any at all beyond the initial setup. Thus, in the interest of time and developer productivity, Bulma with its ready-styled components, customizability and extensions is the smart way to go. Would I prefer to use Windi? Yes I would. Is it the smart thing to do in this case? It is not.</p>
<h3 id="heading-authentication">Authentication</h3>
<p>We chose to go with Auth0, which means we're going to need to use one of their login SDKs. However, rather than communicating with their server every time to validate a token, we're going to create our own encrypted cookie which contains all the information we need to communicate with any APIs we protect with Auth0. Because Nuxt 3 comes with excellent cookie handling infrastructure, this is the easiest, most performant, and very secure way to manage session state for a SaaS application. Libraries used in this process are <a target="_blank" href="https://www.npmjs.com/package/@hapi/iron">@hapi/iron</a>, <a target="_blank" href="https://www.npmjs.com/package/jose">jose</a> and <a target="_blank" href="https://www.npmjs.com/package/@auth0/auth0-spa-js">@auth0/auth0-spa-js</a>. We're not going to use Auth0's Vue SDK, because we're not using any functionality contained therein.</p>
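<p>Conceptually, "sealing" a cookie is authenticated encryption of the session payload. Here's a dependency-free sketch using Node's built-in crypto as a stand-in for @hapi/iron (the real library adds password rotation, expiry and a hardened token format -- use it, not this, in production):</p>

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from 'node:crypto';

// In practice this key is derived from a server-side secret, not random per process.
const key = randomBytes(32);

// Encrypt the session payload; the auth tag lets unseal() detect tampering.
function seal(payload: object): string {
  const iv = randomBytes(12);
  const cipher = createCipheriv('aes-256-gcm', key, iv);
  const data = Buffer.concat([cipher.update(JSON.stringify(payload), 'utf8'), cipher.final()]);
  return [iv, cipher.getAuthTag(), data].map(b => b.toString('base64url')).join('.');
}

// Decrypt and verify; throws if the cookie was modified.
function unseal(token: string): object {
  const [iv, tag, data] = token.split('.').map(p => Buffer.from(p, 'base64url'));
  const decipher = createDecipheriv('aes-256-gcm', key, iv);
  decipher.setAuthTag(tag);
  return JSON.parse(Buffer.concat([decipher.update(data), decipher.final()]).toString('utf8'));
}
```

<p>The server seals claims from Auth0 into the cookie once at login, then unseals it on each request instead of making a round trip to Auth0.</p>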
<h3 id="heading-utilities">Utilities</h3>
<p>To make formatting easier, we'll use <em>date-fns</em> and <em>numeral</em>. These are standard libraries used in thousands of projects. I know <em>numeral</em> has been around for an exceptionally long time, so if anyone has suggestions for a more modern alternative, I'm happy to check them out.</p>
<p>We're also going to use Pinia for state management. Pinia is the spiritual successor to Vuex, Vue's own state management library modeled after Redux. Pinia is easy to use, conceptually straightforward and does all that Vuex used to do.</p>
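<p>Stripped of the framework, a store is just shared state plus actions that notify subscribers. A dependency-free sketch of the idea (Pinia's real API is <code>defineStore</code>; this only shows the concept):</p>

```typescript
type Listener = (count: number) => void;

// A tiny hand-rolled store: encapsulated state, an action, and subscriptions.
function createCounterStore() {
  let count = 0;
  const listeners: Listener[] = [];
  return {
    get count() { return count; },
    increment() {
      count++;
      listeners.forEach(l => l(count)); // notify subscribers on every change
    },
    subscribe(l: Listener) { listeners.push(l); },
  };
}
```

<p>Pinia does this with Vue's reactivity system, so components re-render automatically instead of subscribing by hand.</p>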
<h3 id="heading-development-process">Development Process</h3>
<p>Since Nuxt 3 can be used to build both the front-end and middle-tier, it makes sense to hire full-stack engineers to drive the project. It will be more cost-efficient, and work will get done faster. This is the theme of the project -- developer efficiency resulting in quick product to market.</p>
<p>In my experience, the most efficient way to build in Nuxt is by focusing on features rather than problem spaces. For example, if we're building "the like button", the engineer in charge of that is responsible for designing the button itself, hooking it up to the Pinia store, the server API, and the database. This is as opposed to one engineer working on the button front-end code and another on the middle-tier and database. For small to medium-sized projects, separating by problem spaces introduces unnecessary complexity. This is why full-stack engineers are well-paid and highly sought after!</p>
<p>When building out the initial project, I like to start with this process:</p>
<ol>
<li><p>Scaffold out the starter project via the CLI</p>
</li>
<li><p>Add all necessary packages</p>
</li>
<li><p>Update any configuration files (tsconfig, nuxtconfig, etc)</p>
</li>
<li><p>Add CSS bindings</p>
</li>
<li><p>Build out starter routes / pages, state store, components, composables</p>
</li>
<li><p>Add a layout if necessary (probably is!)</p>
</li>
<li><p>Add authentication</p>
</li>
</ol>
<p>Once this is all done, feature development can begin. Developers coming in will have a solid foundation to build on, examples of project assets like composables and pages, and most importantly, be able to deploy the code to a publicly available host because it will be secured.</p>
<p>In the next article, we'll tackle backend implementation with microservices and our database of choice - Redis Stack.</p>
]]></content:encoded></item><item><title><![CDATA[Recursion in C#]]></title><description><![CDATA[It's not often that we use recursion in real-world programming. The last time I wrote recursion was during an interview 6+ years ago. Recursive functions are not performant and thus are avoided. A few days ago, however, I had a task to navigate down ...]]></description><link>https://paulers.com/recursion-in-c</link><guid isPermaLink="true">https://paulers.com/recursion-in-c</guid><category><![CDATA[C#]]></category><category><![CDATA[Recursion]]></category><dc:creator><![CDATA[Paul K]]></dc:creator><pubDate>Sat, 28 Jan 2023 06:05:20 GMT</pubDate><content:encoded><![CDATA[<p>It's not often that we use recursion in real-world programming. The last time I wrote recursion was during an interview 6+ years ago. Recursive functions can be slower than their iterative equivalents and add stack-overflow risk, so they tend to be avoided. A few days ago, however, I had a task to navigate down a tree given an ancestor node, so ... recursion it is!</p>
<h3 id="heading-the-scenario">The Scenario</h3>
<p>Given a GUID Id of an ancestor Account object (think like a Salesforce account or an Organization), find all the Accounts under that ancestor to a maximum depth of 5. Something like this:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1674884794511/99d1abc5-80dc-48af-ba8a-5851644fea57.png" alt class="image--center mx-auto" /></p>
<p>Given Account A's Id, get all the Accounts under it. First, here's the function which executes the recursion:</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> Task&lt;IEnumerable&lt;Account&gt;&gt; GetAccountsWithChildren(Guid id)
    {
        <span class="hljs-comment">// Get all accounts first</span>
        <span class="hljs-keyword">var</span> allAccounts = <span class="hljs-keyword">await</span> GetAccounts();
        <span class="hljs-comment">// Bail early if no results</span>
        <span class="hljs-keyword">if</span> (allAccounts == <span class="hljs-literal">null</span> || allAccounts.Count == <span class="hljs-number">0</span>) <span class="hljs-keyword">return</span> <span class="hljs-keyword">new</span> List&lt;Account&gt;();
        <span class="hljs-comment">// Get the root account</span>
        <span class="hljs-keyword">var</span> rootAccount = allAccounts.FirstOrDefault(x =&gt; x.Id == id);
        <span class="hljs-comment">// Bail if root account not found</span>
        <span class="hljs-keyword">if</span> (rootAccount == <span class="hljs-literal">null</span>) <span class="hljs-keyword">return</span> <span class="hljs-keyword">new</span> List&lt;Account&gt;();
        <span class="hljs-comment">// Define the max depth we want to run our recursion</span>
        <span class="hljs-keyword">var</span> maxDepth = <span class="hljs-number">5</span>;
        <span class="hljs-comment">// Set up the list we're going to populate with results</span>
        List&lt;Account&gt; childAccounts = <span class="hljs-keyword">new</span> List&lt;Account&gt;();
        <span class="hljs-comment">// Add the root account at the start</span>
        childAccounts.Add(rootAccount);
        <span class="hljs-comment">// Run the recursion!</span>
        ExtractAccountsRecursively(maxDepth, id, allAccounts, childAccounts, <span class="hljs-number">0</span>);
        <span class="hljs-comment">// Finally, return the results</span>
        <span class="hljs-keyword">return</span> childAccounts;
    }
</code></pre>
<p>Nothing exceptional in the above method. The fun stuff is in this next bit:</p>
<pre><code class="lang-csharp"><span class="hljs-function"><span class="hljs-keyword">private</span> <span class="hljs-keyword">void</span> <span class="hljs-title">ExtractAccountsRecursively</span>(<span class="hljs-params">
    <span class="hljs-keyword">int</span> maxDepth, 
    Guid? parentId, 
    IEnumerable&lt;Account&gt; allAccounts, 
    List&lt;Account&gt; childAccounts, 
    <span class="hljs-keyword">int</span> currDepth = <span class="hljs-number">0</span></span>)</span>
    {
        <span class="hljs-comment">// Bail if depth reached</span>
        <span class="hljs-keyword">if</span> (currDepth &gt;= maxDepth) <span class="hljs-keyword">return</span>;
        <span class="hljs-comment">// Bail if parentId is null</span>
        <span class="hljs-keyword">if</span> (!parentId.HasValue) <span class="hljs-keyword">return</span>;
        <span class="hljs-comment">// Find the direct children of the current parent (an empty result ends the recursion)</span>
        <span class="hljs-keyword">var</span> currentChildren = allAccounts.Where(a =&gt; a.Parent != <span class="hljs-literal">null</span> &amp;&amp; a.Parent.Id == parentId);
        <span class="hljs-comment">// Increase the current depth to pass down to the recursive function</span>
        currDepth++;
        <span class="hljs-comment">// Iterate over the children found with the parentId above</span>
        <span class="hljs-keyword">foreach</span> (<span class="hljs-keyword">var</span> account <span class="hljs-keyword">in</span> currentChildren)
        {
            <span class="hljs-comment">// Prevent duplicates</span>
            <span class="hljs-keyword">if</span> (childAccounts.Contains(account)) <span class="hljs-keyword">continue</span>;
            <span class="hljs-comment">// Add the account to the child accounts list</span>
            childAccounts.Add(account);
            <span class="hljs-comment">// Run this function again for the account we're iterating through</span>
            ExtractAccountsRecursively(maxDepth, account.Id, allAccounts, childAccounts, currDepth);
        }
    }
</code></pre>
<p>I commented the code above so it should be easy to follow. The incoming list, <code>allAccounts</code>, is a flat list of Account objects, each with a <code>Parent</code> reference pointing at another Account object.</p>
<p>I suspect there's a better way to write this using <code>yield</code> but that syntax confuses me. If anyone has a good example for the above scenario using <code>yield</code> I'd be interested to check it out.</p>
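<p>For the curious, here's one shape a <code>yield</code>-based version might take. It's a sketch, not drop-in code: it assumes a simplified <code>Account</code> with just <code>Id</code> and a <code>Parent</code> reference, and it swaps the call-stack recursion for an explicit queue so results stream lazily, breadth-first:</p>

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Account
{
    public Guid Id { get; set; }
    public Account Parent { get; set; }
}

public static class AccountTree
{
    // Streams the root plus its descendants, breadth-first, down to maxDepth.
    // The explicit queue replaces the recursive call stack, which lets us
    // yield each account as soon as it is discovered.
    public static IEnumerable<Account> WithChildren(
        IReadOnlyList<Account> allAccounts, Guid rootId, int maxDepth = 5)
    {
        var root = allAccounts.FirstOrDefault(a => a.Id == rootId);
        if (root == null) yield break;

        var seen = new HashSet<Guid> { rootId };
        var queue = new Queue<(Guid Id, int Depth)>();
        queue.Enqueue((rootId, 0));
        yield return root;

        while (queue.Count > 0)
        {
            var (parentId, depth) = queue.Dequeue();
            if (depth >= maxDepth) continue; // stop descending past the cap
            foreach (var child in allAccounts.Where(
                a => a.Parent != null && a.Parent.Id == parentId))
            {
                if (!seen.Add(child.Id)) continue; // skip duplicates / cycles
                yield return child;
                queue.Enqueue((child.Id, depth + 1));
            }
        }
    }
}
```

<p>The caller can then take as much or as little of the tree as needed (<code>.Take(10)</code>, <code>.ToList()</code>, etc.) without the traversal doing more work than asked.</p>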
<p>Cheers!</p>
]]></content:encoded></item><item><title><![CDATA[Designing a SaaS in 2023 - Tooling]]></title><description><![CDATA[In the last article, we discussed tech stacks. Here is my preferred tech stack for the project. As a quick reminder, I'm building an IoT service with a web dashboard.

Frontend: Nuxt 3, Bulma CSS, no component library

Middle Tier: Nuxt 3 (Nitro/h3)
...]]></description><link>https://paulers.com/designing-a-saas-in-2023-tooling</link><guid isPermaLink="true">https://paulers.com/designing-a-saas-in-2023-tooling</guid><category><![CDATA[SaaS]]></category><category><![CDATA[tools]]></category><category><![CDATA[tooling]]></category><category><![CDATA[design and architecture]]></category><dc:creator><![CDATA[Paul K]]></dc:creator><pubDate>Mon, 23 Jan 2023 05:14:03 GMT</pubDate><content:encoded><![CDATA[<p>In the last article, we discussed tech stacks. Here is my preferred tech stack for the project. As a quick reminder, I'm building an IoT service with a web dashboard.</p>
<ul>
<li><p>Frontend: Nuxt 3, Bulma CSS, no component library</p>
</li>
<li><p>Middle Tier: Nuxt 3 (Nitro/h3)</p>
</li>
<li><p>Database: Redis Cloud</p>
</li>
<li><p>Microservices: .NET 6.0 services running in Kubernetes</p>
</li>
<li><p>Authentication: Auth0</p>
</li>
<li><p>Logging: Custom (Winston in Nuxt 3, ILogger in .NET)</p>
</li>
</ul>
<p>With these example technologies, let's look at the tooling I'm going to need to install to get up and running.</p>
<p>My (and I would imagine most people's) choice for coding the front end is Visual Studio Code. Since Nuxt is both the front end and the middle tier, VS Code will cover both!</p>
<p>For the database, I chose Redis Cloud. I don't want to host my own Redis server and the Cloud option is free to start and cheap regardless. I will use RedisInsight to work with Redis since it's honestly the best tool out there.</p>
<p>I'll build out a set of microservices to support the application. Hosting microservices is best done in Kubernetes (even though there are some cool Level 2 options out there), so I'll install the necessary tooling:</p>
<ul>
<li><p><a target="_blank" href="https://kubernetes.io/docs/tasks/tools/">Kubectl</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/ahmetb/kubectx">Kubectx</a></p>
</li>
<li><p><a target="_blank" href="https://k9scli.io/">K9s CLI</a></p>
</li>
</ul>
<p>If you know anything about Kubernetes, you may be wondering where the Helm tooling is. In this case, my project is not big enough to necessitate using Helm. One of the big lessons I took away from the past 3 years of working with Kubernetes is that it's extremely easy to over-engineer infrastructure -- especially Kubernetes. In my experience, running up to 20-25 microservices does not warrant anything more than basic levels of orchestration via YAML.</p>
<p>For writing .NET I'll use Visual Studio. Yes, theoretically I could just use VS Code, but the full Visual Studio experience is better. The Community edition is free.</p>
<p>Working with REST and GraphQL services is best done with a tool like Postman, but my personal preference is <a target="_blank" href="https://insomnia.rest/">Insomnia.REST</a> ... their free version is good enough for what I need. If you can recommend something else, I'm open to suggestions in the comments!</p>
<p>Some other not completely related tools I like to use are <a target="_blank" href="https://apps.microsoft.com/store/detail/windows-terminal/9N0DX20HK701?hl=en-us&amp;gl=us">Windows Terminal</a>, <a target="_blank" href="https://notepad-plus-plus.org/downloads/">Notepad++</a>, and <a target="_blank" href="https://whimsical.com/">Whimsical</a>. I have several CLIs open in the Terminal, keep quick notes or open random files in Notepad++, and create designs and share with my team in Whimsical.</p>
<blockquote>
<p><strong>Side note:</strong> On personal projects I like to use <a target="_blank" href="https://www.notion.so/">Notion</a> and <a target="_blank" href="https://penpot.app/">Penpot</a>. Though I don't use them daily at work, they're very cool and I recommend checking them out if you're a one-stop-shop full-stack engineer.</p>
</blockquote>
<p>These are all the tools I can recommend at this time. I'm always looking for cool new tools, so if you have any suggestions, please leave them in the comments!</p>
]]></content:encoded></item><item><title><![CDATA[Designing a SaaS in 2023 - Tech Stack]]></title><description><![CDATA[In the previous article in this series, we covered some initial considerations when starting a SaaS business. Here, we'll cover choosing the tech stack to build your application with. I know this choice can be emotional. Developers have their prefere...]]></description><link>https://paulers.com/designing-a-saas-in-2023-tech-stack</link><guid isPermaLink="true">https://paulers.com/designing-a-saas-in-2023-tech-stack</guid><category><![CDATA[SaaS]]></category><category><![CDATA[cloud architecture]]></category><category><![CDATA[tech stacks]]></category><dc:creator><![CDATA[Paul K]]></dc:creator><pubDate>Fri, 30 Dec 2022 05:16:27 GMT</pubDate><content:encoded><![CDATA[<p>In the previous article in this series, we covered some initial considerations when starting a SaaS business. Here, we'll cover choosing the tech stack to build your application with. I know this choice can be emotional. Developers have their preferences and will man the barricades in their defense.</p>
<p>In the real world, Gin is not a framework you're gonna be able to hire easily for. Nobody builds enterprise-grade apps with Laravel. Good luck getting audited for an IPO running on Node. And yet, Gin, Laravel and Node are excellent platforms to build on. I have written public-facing applications using two of those! Would I choose them for my SaaS though?</p>
<p>Maybe. Depends. Some questions first:</p>
<ol>
<li><p>Will this tech be supported 10 years from now?</p>
</li>
<li><p>Will I be able to hire people who can work with this tech?</p>
<ol>
<li><p>Is the tech appealing and easy to get started with?</p>
</li>
<li><p>Does it have a large community?</p>
</li>
</ol>
</li>
<li><p>Are there any potential security or performance issues with the tech?</p>
<ol>
<li>These could be non-existent at low user volumes but start showing up as the user base grows.</li>
</ol>
</li>
<li><p>Will I enjoy working with this tech?</p>
</li>
</ol>
<p>Answer the above questions for the tech stack of your choice. If the answers come back favorable on all four, you may have a winner!</p>
<p>Let's explore the various parts of a SaaS and answer some questions.</p>
<h3 id="heading-middle-tier">Middle Tier</h3>
<p>I'm familiar with .NET, so I'll use that as an example.</p>
<ol>
<li><p>.NET has been around since the early 2000s, and Microsoft is not about to let go. I'm confident .NET will be around for many years to come.</p>
</li>
<li><p>There are many .NET developers. Hiring should be no problem. Since .NET Core and more recently .NET 6.0, the stack is quite easy to get started with and the documentation is phenomenal. Since the framework has been around for 20 years, the community is huge too.</p>
</li>
<li><p>There are always gonna be potential security and performance issues with any framework, and .NET is no exception (look at EF Core early on, what a mess). However, Microsoft is particularly good at patching those up and their release cadence is clear and solid.</p>
</li>
<li><p>I enjoy working with .NET 6.0. C# specifically is an awesome language. C# 7 and more recently C# 10 introduced a lot of modern features, made coding less verbose, and generally improved the developer experience.</p>
</li>
</ol>
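<p>As one small illustration of that reduced verbosity, here is what a C# 9/10 <code>record</code> buys you. The <code>Device</code> type below is a made-up example, not from any real project:</p>

```csharp
using System;

// Pre-C# 9, a value-like Device type meant hand-writing a constructor,
// properties, Equals, GetHashCode, and ToString. A record declares all
// of that in one line:
public record Device(Guid Id, string Name, bool Online);

public static class RecordDemo
{
    public static bool Run()
    {
        var id = Guid.NewGuid();
        var a = new Device(id, "thermostat", true);
        var b = a with { Online = false }; // non-destructive mutation
        // Records compare by value, not by reference:
        return a != b && b == new Device(id, "thermostat", false);
    }
}
```
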
<h3 id="heading-backend">Frontend</h3>
<p>Though I've done a bit of React and Angular, my framework of choice is Vue.</p>
<ol>
<li><p>Vue has been around a while, it has a large community and lots of support. The OSS community backs it, so I don't have to depend on a corporation to keep it current (see Angular...). Vue checks out.</p>
</li>
<li><p>Vue may be difficult to find developers for, but it's extremely easy to pick up, so anyone with generic JavaScript knowledge should be able to fit right in. As mentioned above, the community is huge so getting help should be easy.</p>
</li>
<li><p>JavaScript has had a tough time with performance and security. Today, it's in the best place it has ever been, but it still has a ways to go. Vue 3 is a significant performance improvement over Vue 2, so chances are Vue 4 will be too. I trust it.</p>
</li>
<li><p>I love working with Vue (Svelte close second!), so no problems there!</p>
</li>
</ol>
<h3 id="heading-data">Data</h3>
<p>Data storage is a different story. Though some of the questions still do apply, database choice is a matter of need rather than want. Some key points:</p>
<ul>
<li><p>If your data is relational, you'll want to use a SQL database. SQL is also the most common way to store business data and is easy to hire for. There are a lot of choices here, though I would highly recommend a cloud-hosted managed solution. Cockroach DB, Azure SQL, or any cloud-hosted SQL database honestly.</p>
<ul>
<li>Self-hosting a database is a lot of work. Automated backups, geo-replication, etc... It's a lot of overhead. Better just to pay!</li>
</ul>
</li>
<li><p>If you're building out a microservice infrastructure, you're going to silo each business entity behind an API. This means you can use document storage like MongoDB, Azure Cosmos, Google Firestore, or even Redis.</p>
<ul>
<li>Joining between entities will happen in the API Gateway layer.</li>
</ul>
</li>
<li><p>Binary file storage is straightforward: Amazon S3, Azure Storage, or Google Cloud Storage. Whichever cloud provider you host with, you'll generally want to use its storage offering.</p>
</li>
</ul>
<p>There are some other data storage solutions, such as graph databases, which have niche applications. It's worth spending the time to proof-of-concept different solutions and try different data stores in the process.</p>
<h3 id="heading-last-but-not-least">Last, but not least</h3>
<p>Some questions which I didn't ask above but should be considered:</p>
<ul>
<li><p>If my business is successful and I'm getting bought out (or going public), will I pass a security audit?</p>
</li>
<li><p>Does the front-end/middle-tier framework have SDK support for the data backend I want to use?</p>
</li>
<li><p>Do my financial backers have a say in the tech stack? Some VCs may have enough technical chops to butt into your technology choices.</p>
</li>
</ul>
<p>Also, depending on the size of your application, you may want to mix and match services from different hosting providers. For example, I have an application running on Render, static assets hosted in Google Storage, and data sitting in Redis Cloud. When creating these, I made sure to put them all in the same geographic area (US West in my case) to reduce latency when my middle tier talks to my database.</p>
<p>Choosing a tech stack is a monumental decision which will stay with you for many years, if not forever. Moving infrastructure, migrating data or switching frameworks are time-intensive and thus expensive endeavors. Sometimes switching is unavoidable due to product End-of-Life or hosted service deprecation. A cloud developer's job is to anticipate all the plays and lead their team to victory by making the right decisions at every opportunity! Don't be afraid to pivot should a choice you made turn sour either. Lastly, don't over-complicate your architecture. Your microservices may not need an event hub or a pubsub to communicate. You may not even need microservices. Hell, your app may not even need a middle tier -- perhaps Firebase, Supabase or <a target="_blank" href="https://pocketbase.io/">Pocketbase</a> is all you need...</p>
]]></content:encoded></item><item><title><![CDATA[Designing a SaaS in 2023 - Intro]]></title><description><![CDATA[This is the first article in the Designing a SaaS in 2023 series. The series will cover everything from choosing the technology stack and the hosting platform, through designing the back-end services and data storage, to front-end considerations. I'l...]]></description><link>https://paulers.com/designing-a-saas-in-2023-intro</link><guid isPermaLink="true">https://paulers.com/designing-a-saas-in-2023-intro</guid><category><![CDATA[SaaS]]></category><category><![CDATA[cloud architecture]]></category><category><![CDATA[advice]]></category><dc:creator><![CDATA[Paul K]]></dc:creator><pubDate>Thu, 01 Dec 2022 09:12:15 GMT</pubDate><content:encoded><![CDATA[<p>This is the first article in the <em>Designing a SaaS in 2023</em> series. The series will cover everything from choosing the technology stack and the hosting platform, through designing the back-end services and data storage, to front-end considerations. I'll share lessons I learned over the past decade, the tools I use, and any tips and tricks for working with our chosen technologies. Finally, I will conclude with a look towards the future.</p>
<p>Throughout the series, I will assume my business is hardware-centric with a SaaS application supporting customers. A good example of this is any IoT business such as DIY alarm systems (SimpliSafe, Abode), any smart home company such as Samsung, Wyze or Philips Hue, or automotive smart integrations such as the Hyundai Bluelink, BMW's My BMW or Ford's FordPass. I chose this scenario because it's a bit more complicated than a regular run-of-the-mill web app and my goal is to cover more advanced scenarios. That said, a lot of the thought process will apply to regular web apps too.</p>
<p>Before getting into the meat of said thought process, let's look at some decisions we're gonna have to make before writing that first line of code.</p>
<h3 id="heading-initial-considerations">Initial Considerations</h3>
<p>Most early-stage SaaS green field projects will need to make key decisions that will stay with them for the lifetime of the product. <em>Build vs buy</em> is going to be one such decision. Depending on your business's core competency, you will want to outsource difficult or time-consuming pieces of your SaaS. For example:</p>
<ul>
<li><strong>Authentication.</strong> Do you build your own or let an Identity Provider handle it?</li>
<li><strong>Server-side caching.</strong> Do you host your own Redis or use a managed service?</li>
<li><strong>Logging.</strong> Do you store and parse your own logs, or send them to a hosted log processor?</li>
<li><strong>Front-end components.</strong> Do you use an existing library, or build your own?</li>
</ul>
<p>Deciding what to buy will of course depend on multiple factors such as your budget, team expertise and time horizon. For fledgling startups, buying early on makes a lot of sense. Many SaaS products in the above categories have generous free or low-cost tiers. Here are just some:</p>
<ul>
<li><strong>Authentication</strong>: Google Identity, AWS Cognito, Auth0</li>
<li><strong>Caching</strong>: Redis Cloud</li>
<li><strong>Logging</strong>: Datadog, AWS CloudWatch, Azure Application Insights</li>
<li><strong>Hosting</strong>: Google Firebase, Render.com, DigitalOcean, Heroku</li>
</ul>
<p>Let's look at some costs.</p>
<h3 id="heading-cost-of-doing-business">Cost of Doing Business</h3>
<p>As your application grows, some of these "buy" services will start to become very costly and potentially hinder your growth (especially in the hosting space). Here is one such scenario: </p>
<p>You start out on render.com with five microservices paying $25/mo for a 2GB/1vCPU single instance per service. You have one environment, because you've just started to build. A few months later you're ready to go to production so you create another mirrored environment of five services, but this time, since it's production, you want to give the services a bit more oomph, so you opt for the $50/mo instances. A couple sprints later you're ready to push recent changes to production, but first you want to ensure they work well with production data... so, you need a staging environment. This environment should, optimally, mirror production, so another 5 * $50/mo.</p>
<p>Next, each microservice is going to store data in a silo, so each one needs a database. You choose to use Postgres: (5 × $20/mo) × 3, because each environment needs its own data.</p>
<p>Last, we can add some static costs:</p>
<ul>
<li>Auth0: $23</li>
<li>Redis Cloud: $35</li>
<li>Datadog: $5 (you're not logging much...)</li>
</ul>
<p>To sum up:</p>
<p>Hosting:</p>
<ul>
<li>$25 x 5</li>
<li>$50 x 5</li>
<li>$50 x 5</li>
</ul>
<p>Data:</p>
<ul>
<li>$20 x 5 x 3</li>
</ul>
<p>Other:</p>
<ul>
<li>$63</li>
</ul>
<p>Total:
$988/mo</p>
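<p>If you want to double-check the arithmetic, the line items above reduce to a few lines of C#:</p>

```csharp
using System;

public static class CostEstimate
{
    // Recomputes the monthly estimate from the line items above.
    public static int MonthlyTotal()
    {
        int hosting = 25 * 5    // dev environment, 5 services
                    + 50 * 5    // production, 5 services
                    + 50 * 5;   // staging mirror of production
        int data = 20 * 5 * 3;  // 5 Postgres databases x 3 environments
        int other = 23 + 35 + 5; // Auth0 + Redis Cloud + Datadog
        return hosting + data + other; // 988
    }
}
```
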
<p>These are very conservative numbers, too. In most cases you will run at least two instances of your production code so that if one goes down for some reason your entire application doesn't fold with it. As more traffic comes to your product, you will want to increase memory and CPU of your hosting machines and potentially add more instances (or let it auto-scale if such feature is available).</p>
<p><em>Note: We evaluated Render as a hosting platform to reduce Azure spend but decided to pass. Instead, we built out a Kubernetes cluster. Had to learn a lot of DevOps in the process, but now we are saving thousands per month.</em></p>
<h3 id="heading-recommendations">Recommendations</h3>
<p>It's difficult to give raw recommendations because everyone's situation is going to be different. I can, however, tell you what I would do today if I was starting from nothing given the hardware SaaS company scenario I mentioned earlier.</p>
<p><strong>Authentication</strong>
Build it. Identity Providers start cheap, but they can get out of control quickly, especially if you're successful and scale up. They do provide easy-to-use SDKs and have nice UIs for user management (except Azure B2C, that thing is gross), but if you're just starting out, you do not need 98% of what they offer you. Instead, use whatever AuthN libraries come with the web framework of your choice. In our case, we used Identity Server 4. In other projects, I used Passport.js. These are battle-tested solutions that give you everything you need out of the box for free.</p>
<p><strong>Hosting</strong>
Buy it, but be strategic about it. In a microservice scenario, it will most definitely be cheaper to run your own Kubernetes cluster (especially if you pre-pay for years of VMs). If you are just building a monolith, hosting it on a beefy PaaS (Platform-as-a-Service) like Heroku or Render is great. The ease of use of these services is a tremendous boon to your productivity and quick delivery to market. I wouldn't try to nickel-and-dime around hosting by going with a bare VM from DigitalOcean or Linode, or worse -- self-hosting on your Raspberry Pi.</p>
<p><strong>Caching</strong>
Buy it. Redis Cloud. No, they didn't sponsor me. It's just the smartest way to go if you need caching (or if you're cool and want to use their permanent storage solution instead of a database).</p>
<p><strong>Database</strong>
Buy it. Managed databases are excellent value. They save you a lot of stress: you don't have to worry about backups, and when you scale, your databases scale with you, including to other continents! Schema migration went awry? No problem, do a restore. All you need is a connection string and you're off to the races.</p>
<p><strong>Logging</strong>
Buy it first but build it later. When your product becomes complex and has hundreds or thousands of try/catch blocks, and each service has several instances, it's easy to flood logging services with records resulting in excessive costs. The thing about logging is that it's conceptually quite simple. Many logging libraries support bringing your own "sink", which you can then use to deposit logs to a datastore of your choice, aggregate it, write your own retention policies and so on. This is of course way later, but in my experience, logging costs at larger scale can get ridiculous, so it's worth exploring an in-house solution once you have the engineers and capacity to build it.</p>
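<p>To make the "bring your own sink" idea concrete, here is a toy C# sketch -- not any real logging library's API, just the shape of the pattern. The logger formats records and hands them to a pluggable sink; a real sink might batch and flush to Redis, Postgres, or blob storage, and apply its own retention policy:</p>

```csharp
using System;
using System.Collections.Concurrent;

// A toy "sink" abstraction: the logger formats records and hands them
// to whatever sink you plug in -- console, file, or your own datastore.
public interface ILogSink
{
    void Write(DateTime timestamp, string level, string message);
}

// Example sink that buffers records in memory. A production sink would
// batch these and flush them to a datastore on a timer or size threshold.
public class InMemorySink : ILogSink
{
    public ConcurrentQueue<string> Records { get; } = new();
    public void Write(DateTime timestamp, string level, string message)
        => Records.Enqueue($"{timestamp:O} [{level}] {message}");
}

public class SimpleLogger
{
    private readonly ILogSink _sink;
    public SimpleLogger(ILogSink sink) => _sink = sink;
    public void Info(string message) => _sink.Write(DateTime.UtcNow, "INFO", message);
    public void Error(string message) => _sink.Write(DateTime.UtcNow, "ERROR", message);
}
```

<p>Swapping the destination later is then just a matter of writing a new <code>ILogSink</code> implementation, which is exactly the escape hatch you want when hosted logging bills start to climb.</p>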
<p><strong>Front end components</strong>
Get it for free or build it. You may be tempted by pretty, premium component libraries, but the truth is that customers aren't gonna keel over from overwhelming wonder when they see a premium pie chart versus a c3.js one. Tables, dropdowns, multi-selects, they can all be either found for free in the OSS community or straight up written yourself. We used Buefy (a VueJS lib), and we'll stick with Bulma CSS going forward, but personally I've also experimented with Tailwind and Uno when writing my own custom components. There are many awesome, free component libraries out there, so... don't buy.</p>
<h3 id="heading-in-conclusion">In Conclusion</h3>
<p>There are many items on the checklist when spinning up a new SaaS. The goal of the next few articles in this series will be to do a deeper dive into specific topics mentioned at the beginning of this article, beginning with choosing your tech stack!</p>
]]></content:encoded></item><item><title><![CDATA[Moving to Hashnode]]></title><description><![CDATA[For the past two years, this blog was a Vue3 pre-rendered application using highlight.js, headful, markdown-loader and service workers. I was obsessed with scoring as high as possible on Lighthouse... for my blog. We all go through stages... right?
T...]]></description><link>https://paulers.com/moving-to-hashnode</link><guid isPermaLink="true">https://paulers.com/moving-to-hashnode</guid><category><![CDATA[Blogging]]></category><category><![CDATA[Hashnode]]></category><dc:creator><![CDATA[Paul K]]></dc:creator><pubDate>Wed, 23 Nov 2022 07:41:43 GMT</pubDate><content:encoded><![CDATA[<p>For the past two years, this blog was a Vue3 pre-rendered application using <a target="_blank" href="https://www.npmjs.com/package/highlight.js">highlight.js</a>, <a target="_blank" href="https://www.npmjs.com/package/vue-headful">headful</a>, <a target="_blank" href="https://www.npmjs.com/package/vue-markdown-loader">markdown-loader</a> and service workers. I was obsessed with scoring as high as possible on Lighthouse... for my blog. We all go through stages... right?</p>
<p>The problem with file system article indexing is that you must check in an updated version of your blog every time you author a new article. The blog was hosted on render.com and their build pipeline is very neat so overall not a big deal, but there's still a fair amount of friction there.</p>
<p>Before the above, I was writing on Medium, and before that I was hosting my own ASP.NET Razor Pages blog on Azure. Medium was good, but their monetization strategy is not to my liking. I also wanted to host using my own domain rather than writing "for" Medium.</p>
<p>After a bit of research, I decided on Hashnode. Their comparison to other platforms, linked in the hashnode.com footer (when you're not logged in), is pretty spot on. It came down to dev.to and Hashnode, and dev.to does not support custom domains... let alone automated Let's Encrypt SSL certificates. Pretty sweet!</p>
<p>Thus, for now, I am here on Hashnode! Hopefully, they work out the editor bugs soon. I know text editing in web browsers is as bad as working with dates, so for now I can live with it.</p>
<p>Please look forward to a slew of new articles soon!</p>
]]></content:encoded></item><item><title><![CDATA[Cleaning Up Old Docker Images]]></title><description><![CDATA[Cleaning Up Old Docker Images
Quick primer on how to clean up old images in Docker. One of the first things you're gonna find when you search for this question is docker images prune. This works, but only on dangling images. Dangling images are essen...]]></description><link>https://paulers.com/docker-old-images-cleanup</link><guid isPermaLink="true">https://paulers.com/docker-old-images-cleanup</guid><dc:creator><![CDATA[Paul K]]></dc:creator><pubDate>Tue, 05 Jul 2022 19:37:15 GMT</pubDate><content:encoded><![CDATA[<h1 id="heading-cleaning-up-old-docker-images">Cleaning Up Old Docker Images</h1>
<p>Quick primer on how to clean up old images in Docker. One of the first things you're gonna find when you search for this question is <code>docker image prune</code>. This works, but only on <em>dangling</em> images. Dangling images are essentially orphaned images created in the process of making another image. They're usually generated when a Dockerfile creates a few layers during final image creation.</p>
<p>Thing is, when you're building images and tagging them with, for example, the build number, those images are not dangling and they will be omitted during the <code>image prune</code> command. Example:</p>
<pre><code class="lang-bash">REPOSITORY                        TAG            IMAGE ID        CREATED          SIZE
fancy.service                     latest         813d1928f6b8    3 days ago       156MB
fancy.service                     v10            813d1928f6b8    3 days ago       156MB
contoso.azurecr.io/fancy.service  latest         813d1928f6b8    3 days ago       156MB
contoso.azurecr.io/fancy.service  v10            813d1928f6b8    3 days ago       156MB
</code></pre>
<p>We have 4 entries that are essentially the same image. Perhaps today you built <em>v11</em> and want to get rid of v10, or just get rid of anything older than today's builds. Here's how:</p>
<pre><code class="lang-bash">docker images | grep <span class="hljs-string">'days ago\|weeks ago\|months ago\|years ago'</span> | awk <span class="hljs-string">'{print $3}'</span> | xargs docker rmi --force
</code></pre>
<p>There are 4 parts to this command.</p>
<ol>
<li><code>docker images</code> pulls the list of all images in the system and pipes it to </li>
<li><code>grep 'days ago\|weeks ago\|months ago\|years ago'</code> which selects every row containing those words and then pipes it to</li>
<li><code>awk '{print $3}'</code> which returns the item in the 3rd column from each row - in this case the IMAGE ID - and then pipes it to</li>
<li><code>xargs docker rmi --force</code> which is the crown jewel of this entire command. Xargs takes each line from the previous output and passes it as a parameter to the command <code>docker rmi --force</code>.</li>
</ol>
<p>This should clean up all images older than today. You can, for example, remove <code>days ago\|</code> from the pattern if you want to keep images that are only days old.</p>
<p>Keep in mind this command will not remove images which are <strong>in use</strong>. That is, anything that's mounted as a container -- even a stopped container.</p>
]]></content:encoded></item><item><title><![CDATA[Creating, Using and Editing Kubernetes Secrets]]></title><description><![CDATA[Creating, Using and Editing Kubernetes Secrets
Secrets in Kubernetes are essentially key-value pairs stored in K8s' API server's database. They are by default not encrypted so anyone with access to the API can retrieve, change or delete a secret. Thu...]]></description><link>https://paulers.com/kubernetes-secrets-management</link><guid isPermaLink="true">https://paulers.com/kubernetes-secrets-management</guid><dc:creator><![CDATA[Paul K]]></dc:creator><pubDate>Sat, 14 May 2022 18:44:32 GMT</pubDate><content:encoded><![CDATA[<h1 id="heading-creating-using-and-editing-kubernetes-secrets">Creating, Using and Editing Kubernetes Secrets</h1>
<p>Secrets in Kubernetes are essentially key-value pairs stored in etcd, the database behind the K8s API server. They are not encrypted by default, so anyone with access to the API can retrieve, change or delete a secret. Thus, when storing sensitive information in K8s' secrets store, you must enable encryption-at-rest.</p>
<p>Good uses for secrets:</p>
<ul>
<li>Storing SSL certificates</li>
<li>Storing container registry authentication information</li>
<li>Storing sensitive access keys to 3rd party distributed configuration tools</li>
</ul>
<p>Kubernetes documentation is fairly good, so I am not going to rehash it here, but I did want to focus on a common scenario:</p>
<blockquote>
<p>You have a microservice which needs to read a configuration from a 3rd-party Distributed Configuration Service (DCS). To access this DCS, the microservice needs a key.</p>
</blockquote>
<h3 id="heading-creating">Creating</h3>
<p>Let's see how this can be done using the generic secret type. The following <code>kubectl</code> command will create the secret.</p>
<pre><code class="lang-powershell">kubectl create secret generic dcscreds --from-literal=clientid=abc --from-literal=secret=123
</code></pre>
<p>The first three words, <code>kubectl create secret</code>, are the command itself. The fourth word, <code>generic</code>, states we want to create a generic type of secret; other types include TLS and container registry secrets. The <code>dcscreds</code> in the above statement is the name of the secret which you'll reference in your pods later to pull the secrets. Lastly, there are two values we're storing in the secret. Secrets can be created from different sources:</p>
<ul>
<li>env file (such as Docker's .env)</li>
<li>actual files with the values</li>
<li>literal strings</li>
</ul>
<p>In the above example we're using literal strings. I feel like that actually makes the most sense since the other two options mean you're reading from files which could potentially be checked into source control... and we shouldn't be checking credentials into source control to begin with. The <code>--from-literal=</code> parameter picks the third option from the list above, and is then followed by the key=value pair <code>clientid=abc</code>.</p>
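<p>Whichever source you choose, the API server stores each value base64-encoded (not encrypted), so you can predict what it will hold for <code>clientid=abc</code>:</p>

```shell
# Base64-encode the literal value; -n suppresses echo's trailing newline,
# which would otherwise change the encoding
echo -n 'abc' | base64
# Prints YWJj -- the value you'll see later when editing the secret
```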
<p>Let's check the secret with</p>
<pre><code class="lang-powershell">kubectl get secrets
</code></pre>
<p>One of the secrets should be</p>
<pre><code>NAME                    TYPE                  DATA    AGE
dcscreds                Opaque                <span class="hljs-number">2</span>       <span class="hljs-number">1</span>m
</code></pre><h3 id="heading-using">Using</h3>
<p>How do we reference this secret in our deployment YML files now? Let's say we want to inject a secret as an environment variable into our microservice so that it can access the DCS. Here's an example deployment YML:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">apps/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Deployment</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">cache-deployment</span>
  <span class="hljs-attr">namespace:</span> <span class="hljs-string">integration</span>
  <span class="hljs-attr">labels:</span>
    <span class="hljs-attr">app:</span> <span class="hljs-string">cache</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">template:</span>
    <span class="hljs-attr">metadata:</span>
      <span class="hljs-attr">name:</span> <span class="hljs-string">cache</span>
      <span class="hljs-attr">labels:</span>
        <span class="hljs-attr">app:</span> <span class="hljs-string">cache</span>
    <span class="hljs-attr">spec:</span>
      <span class="hljs-attr">containers:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">cache</span>
          <span class="hljs-attr">image:</span> <span class="hljs-string">redis</span>
          <span class="hljs-attr">env:</span>
            <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">CLIENTID</span>
              <span class="hljs-attr">valueFrom:</span>
                <span class="hljs-attr">secretKeyRef:</span>
                  <span class="hljs-attr">name:</span> <span class="hljs-string">dcscreds</span>
                  <span class="hljs-attr">key:</span> <span class="hljs-string">clientid</span>
                  <span class="hljs-attr">optional:</span> <span class="hljs-literal">false</span>
            <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">PASSWORD</span>
              <span class="hljs-attr">valueFrom:</span>
                <span class="hljs-attr">secretKeyRef:</span>
                  <span class="hljs-attr">name:</span> <span class="hljs-string">dcscreds</span>
                  <span class="hljs-attr">key:</span> <span class="hljs-string">secret</span>
                  <span class="hljs-attr">optional:</span> <span class="hljs-literal">false</span>
  <span class="hljs-attr">selector:</span>
    <span class="hljs-attr">matchLabels:</span>
      <span class="hljs-attr">app:</span> <span class="hljs-string">cache</span>
</code></pre>
<p>In the above deployment you can see the <code>env</code> has two values. The <code>env.name</code> of the value is what you'd refer to inside your application, while the <code>env.valueFrom.secretKeyRef.name</code> is the name of the secret we created above, followed by the <code>env.valueFrom.secretKeyRef.key</code> which refers to the key of the key=value pair. Setting <code>optional: false</code> means the pod will fail to start if the secret or key does not exist.</p>
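<p>Worth noting: Kubernetes decodes the values before injecting them, so to your application they're ordinary environment variables. This stand-in (plain shell, no Kubernetes required) illustrates what the container process sees:</p>

```shell
# Simulate the injected environment: the app reads plain text, not base64
CLIENTID=abc PASSWORD=123 sh -c 'echo "client=$CLIENTID password=$PASSWORD"'
# Prints: client=abc password=123
```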
<p>Great, but, what if the secret changes?</p>
<h3 id="heading-editing">Editing</h3>
<p>To edit a secret, we can use the following command:</p>
<pre><code class="lang-powershell">kubectl edit secrets dcscreds
</code></pre>
<p>This will open the secrets YML in the operating system's default text editor. It will look something like this:</p>
<pre><code class="lang-yaml"><span class="hljs-comment"># Please edit the object below. Lines beginning with a '#' will be ignored,</span>
<span class="hljs-comment"># and an empty file will abort the edit. If an error occurs while saving this file will be</span>
<span class="hljs-comment"># reopened with the relevant failures.</span>
<span class="hljs-comment">#</span>
<span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
<span class="hljs-attr">data:</span>
  <span class="hljs-attr">clientid:</span> <span class="hljs-string">YWJj</span>
  <span class="hljs-attr">secret:</span> <span class="hljs-string">MTIz</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Secret</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">creationTimestamp:</span> <span class="hljs-string">"2022-05-14T20:49:15Z"</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">dcscreds</span>
  <span class="hljs-attr">namespace:</span> <span class="hljs-string">default</span>
  <span class="hljs-attr">resourceVersion:</span> <span class="hljs-string">"34419738"</span>
  <span class="hljs-attr">uid:</span> <span class="hljs-string">5d489368-27d6-4e57-b16b-b936109b027f</span>
<span class="hljs-attr">type:</span> <span class="hljs-string">Opaque</span>
</code></pre>
<p>Values in <code>data</code> are base64 encoded; run them through a base64 decoder to see what they are. New values must likewise be base64 encoded before you paste them in. As an example, let's say the secret changes from <code>123</code> to <code>456</code>. Here's a helpful command to encode that:</p>
<pre><code class="lang-bash"><span class="hljs-built_in">echo</span> -n <span class="hljs-string">'456'</span> | base64
</code></pre>
<p>The output would look like</p>
<pre><code>NDU2
</code></pre><p>Once you have that base64 encoded value, simply replace it in the secrets YAML file:</p>
<pre><code class="lang-yaml"><span class="hljs-string">...</span>
<span class="hljs-attr">data:</span>
  <span class="hljs-attr">clientid:</span> <span class="hljs-string">YWJj</span>
  <span class="hljs-attr">secret:</span> <span class="hljs-string">NDU2</span>
<span class="hljs-string">...</span>
</code></pre>
<p>Save the file, close it, and kubectl will update the secret in Kubernetes. Done and done!</p>
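<p>If you just want to verify a stored value without editing anything, decoding works in reverse:</p>

```shell
# Decode the stored value back to plain text
# (GNU coreutils syntax; some systems spell the flag -d or -D)
echo -n 'NDU2' | base64 --decode
# Prints 456
```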
<p>Deleting a secret is done the same way as deleting other entities:</p>
<pre><code class="lang-powershell">kubectl delete secret dcscreds
</code></pre>
<hr />
<p>Resources</p>
<ul>
<li><a target="_blank" href="https://kubernetes.io/docs/concepts/configuration/secret/">Secrets</a></li>
<li><a target="_blank" href="https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/">Encryption at Rest</a></li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Dependency Injection in Azure Functions]]></title><description><![CDATA[Dependency Injection in Azure Functions
A little while ago Microsoft finally added support for dependency injection in their Azure Function product. Now, there are some subtle but important differences, but for the most part the experience is similar...]]></description><link>https://paulers.com/aspnet-di-azure-functions</link><guid isPermaLink="true">https://paulers.com/aspnet-di-azure-functions</guid><dc:creator><![CDATA[Paul K]]></dc:creator><pubDate>Wed, 08 Sep 2021 21:58:42 GMT</pubDate><content:encoded><![CDATA[<h1 id="heading-dependency-injection-in-azure-functions">Dependency Injection in Azure Functions</h1>
<p>A little while ago Microsoft finally added support for dependency injection in their Azure Function product. Now, there are some subtle but important differences, but for the most part the experience is similar to what you're used to in ASPNET Core.</p>
<p>Microsoft's official documentation, found <a target="_blank" href="https://docs.microsoft.com/en-us/azure/azure-functions/functions-dotnet-dependency-injection">here</a>, is as usual lacking in real-world examples. Thus, the purpose of this article is to expose some real world use cases to the wider public. We're going to build an Azure Function which listens to a Service Bus and writes to an Azure Search index.</p>
<p>Before we get started, a brief introduction.</p>
<h3 id="heading-azure-function-primer">Azure Function Primer</h3>
<p>Azure Functions are supposed to be single-purpose pieces of code which are triggered when an action happens. These actions are, in fact, called triggers. Once your code is triggered, it can do anything you want it to, but most often you'll want to do some kind of processing on the input from the trigger and spit out the result into a 'binding'. Bindings are essentially output buckets you can deposit the result of your function into. The key is to understand that <strong>triggers</strong> are mandatory but <strong>bindings</strong> are optional.</p>
<p>Microsoft's own <a target="_blank" href="https://docs.microsoft.com/en-us/azure/azure-functions/functions-triggers-bindings">documentation on triggers and bindings</a> is pretty good, so if you're new to Azure Functions, I recommend giving that a read.</p>
<p>In our case, we'll be using a Service Bus trigger, but since there is no Azure Search index binding, we'll have to write to the index manually. </p>
<blockquote>
<p>Not having a binding you need is a pretty common scenario, because Microsoft doesn't really support anything outside of their own ecosystem and even within the ecosystem getting them to build bindings you need is a tall order. How many years have we been waiting for <a target="_blank" href="https://github.com/Azure/azure-webjobs-sdk-extensions/issues/14">Azure File Storage bindings</a>?</p>
</blockquote>
<p>Right, so now that we know what we're building, let's dive into the code.</p>
<h3 id="heading-creating-the-function">Creating the Function</h3>
<p>Go ahead and spin up a new Azure Function in Visual Studio. If you want, you can choose the Service Bus trigger while scaffolding -- that will make the process a tad bit faster. Choose whatever default storage account you use to store your functions (or the Storage Emulator), and make sure you've selected at least version 2 of Azure Functions (as of this writing, 3 is the latest, so I'll be using that).</p>
<p>You already have a <code>Function1.cs</code> file which contains the default function code. There are also <code>host.json</code>, <code>local.settings.json</code> and possibly a <code>.gitignore</code>. Let's add a couple more files:</p>
<ul>
<li>Startup.cs</li>
<li>ConfigBuilder.cs</li>
</ul>
<p>Startup is going to contain our dependency injection code and ConfigBuilder will pull config files from the filesystem.</p>
<p>We're also going to need to add a Nuget package <code>Microsoft.Azure.Functions.Extensions</code>.</p>
<p>Inside <code>Startup.cs</code> class, add the following code:</p>
<pre><code class="lang-csharp">[<span class="hljs-meta">assembly: FunctionsStartup(typeof(FancyFunction.Startup))</span>]
<span class="hljs-keyword">namespace</span> <span class="hljs-title">FancyFunction</span>
{
    <span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title">Startup</span> : <span class="hljs-title">FunctionsStartup</span>
    {
        <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-title">Startup</span>(<span class="hljs-params"></span>)</span> { }

        <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">override</span> <span class="hljs-keyword">void</span> <span class="hljs-title">Configure</span>(<span class="hljs-params">IFunctionsHostBuilder builder</span>)</span>
        {
            <span class="hljs-comment">// 1. Build Config</span>

            <span class="hljs-comment">// 2. Add Config as a Singleton</span>

            <span class="hljs-comment">// 3. Add logging</span>

            <span class="hljs-comment">// 4. Add the search service</span>
        }
    }
}
</code></pre>
<p>First, we have to add an assembly reference attribute at the top of the namespace and specify the Startup class as the entrypoint. Then, we need to inherit from the <code>FunctionsStartup</code> base class and add an empty constructor. Lastly, implement the required <code>Configure</code> override.</p>
<p>We have our steps defined in the comments there, so let's start with building the config. We can add <code>appsettings.json</code> via Microsoft's IConfiguration interface. Inside <code>ConfigBuilder.cs</code> add the following code:</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">namespace</span> <span class="hljs-title">FancyFunction</span>
{
    <span class="hljs-keyword">public</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">class</span> <span class="hljs-title">ConfigBuilder</span>
    {
        <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">static</span> IConfiguration <span class="hljs-title">BuildConfiguration</span>(<span class="hljs-params"><span class="hljs-keyword">string</span> rootDir = <span class="hljs-literal">null</span></span>)</span>
        {
            <span class="hljs-comment">// We're allowing specifying a custom root directory</span>
            <span class="hljs-keyword">if</span> (<span class="hljs-keyword">string</span>.IsNullOrEmpty(rootDir))
            {
                <span class="hljs-comment">// Theoretically this should exist in Azure Function apps</span>
                <span class="hljs-keyword">var</span> localRoot = Environment.GetEnvironmentVariable(<span class="hljs-string">"AzureWebJobsScriptRoot"</span>);
                <span class="hljs-comment">// But if it doesnt, then this will</span>
                <span class="hljs-keyword">var</span> azureRoot = <span class="hljs-string">$"<span class="hljs-subst">{Environment.GetEnvironmentVariable(<span class="hljs-string">"HOME"</span>)}</span>/site/wwwroot"</span>;

                rootDir = localRoot ?? azureRoot;
            }

            <span class="hljs-comment">// Grab the environment setting to use below</span>
            <span class="hljs-keyword">var</span> environment = Environment.GetEnvironmentVariable(<span class="hljs-string">"ASPNETCORE_ENVIRONMENT"</span>) ?? <span class="hljs-string">"Development"</span>;

            <span class="hljs-comment">// Create and retun the config</span>
            <span class="hljs-keyword">return</span> <span class="hljs-keyword">new</span> ConfigurationBuilder()
                .SetBasePath(rootDir)
                .AddJsonFile(<span class="hljs-string">"appsettings.json"</span>, optional: <span class="hljs-literal">true</span>)
                .AddJsonFile(<span class="hljs-string">$"appsettings.<span class="hljs-subst">{environment}</span>.json"</span>, optional: <span class="hljs-literal">true</span>)
                <span class="hljs-comment">// Add any more sources you need here</span>
                .Build();
        }
    }
}
</code></pre>
<p>Resolve any missing <code>using</code>s. The code is explained in the comments.</p>
<p>Inside Startup.cs, let's do steps 1 and 2:</p>
<pre><code class="lang-csharp"><span class="hljs-comment">// 1. Build Config</span>
<span class="hljs-keyword">var</span> config = ConfigBuilder.BuildConfiguration();

<span class="hljs-comment">// 2. Add Config as a Singleton</span>
builder.Services.AddSingleton(config);
</code></pre>
<p>Next, logging. Simple:</p>
<pre><code class="lang-csharp"><span class="hljs-comment">// 3. Add logging</span>
builder.Services.AddLogging();
</code></pre>
<p>This will add the default loggers, one of which is Console. This step is not strictly necessary if you only intend to use the Console, since functions get an <code>ILogger</code> injected out of the box, but if you plan on using a 3rd-party or your own logger, you can register it here by passing in a configuration callback:</p>
<pre><code class="lang-csharp">builder.Services.AddLogging(cfg =&gt; {
    <span class="hljs-comment">// your code here</span>
})
</code></pre>
<p>We will inject this and other services via Dependency Injection into the function soon.</p>
<p>Lastly, let's add the Azure Search Service. We need to create the search service first, then add it as a Singleton. Go ahead and create a new class: <code>AzureSearchService.cs</code>. Inside, create an interface and the class which inherits from it:</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">public</span> <span class="hljs-keyword">interface</span> <span class="hljs-title">IAzureSearchService</span> {
    <span class="hljs-function"><span class="hljs-title">Task</span>&lt;<span class="hljs-title">bool</span>&gt; <span class="hljs-title">MergeDocument</span>&lt;<span class="hljs-title">T</span>&gt;(<span class="hljs-params">T entity, <span class="hljs-keyword">string</span> index</span>)</span>;
}

<span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title">AzureSearchService</span> : <span class="hljs-title">IAzureSearchService</span> {
    <span class="hljs-keyword">public</span> SearchServiceClient Client;
    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-title">AzureSearchService</span>(<span class="hljs-params">SearchServiceClient adminClient</span>)</span> {
        Client = adminClient;
    }
    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-title">Task</span>&lt;<span class="hljs-title">bool</span>&gt; <span class="hljs-title">MergeDocument</span>&lt;<span class="hljs-title">T</span>&gt;(<span class="hljs-params">T entity, <span class="hljs-keyword">string</span> index</span>)</span> {
        <span class="hljs-comment">// ... code to merge a message from SB into Search Index</span>
    }
}
</code></pre>
<p>We're not going to build out the search client in this post, since the focus here is on Azure Functions. You can think of this service as a standard DI service you'd write in a typical .NET Core application. The interface is there to make it testable.</p>
<p>You may have to install Microsoft's latest Azure Search Nuget package. As of this writing, it's <code>Microsoft.Azure.Search</code> version 10.1.0, but knowing Microsoft, by the time you read this, it'll be deprecated...</p>
<p>Now that you have the Azure Search Service class, you can add it to Startup.cs:</p>
<pre><code class="lang-csharp"><span class="hljs-comment">// 4. Add the search service</span>
builder.Services.AddSingleton&lt;IAzureSearchService&gt;((provider) =&gt;
{
    <span class="hljs-comment">// Create the client first using config values from appsettings.json (or alternative source)</span>
    <span class="hljs-keyword">var</span> adminClient = <span class="hljs-keyword">new</span> SearchServiceClient(config[<span class="hljs-string">"SearchServiceName"</span>], <span class="hljs-keyword">new</span> SearchCredentials(config[<span class="hljs-string">"SearchServiceAdminKey"</span>]));
    <span class="hljs-comment">// Return the implementation of the class with this adminClient.</span>
    <span class="hljs-keyword">return</span> <span class="hljs-keyword">new</span> AzureSearchService(adminClient);
});
</code></pre>
<p>We're creating a singleton here, because the Search Client is reusable and does not need to be instantiated on every request.</p>
<h3 id="heading-using-di-in-the-function">Using DI in the Function</h3>
<p>Finally, we can get to writing our function! Inside <code>Function1.cs</code> (or, if you renamed it, whatever the new file name is) make the following changes:</p>
<ol>
<li>Remove <code>static</code> from the class definition.<pre><code class="lang-csharp"><span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title">Function1</span>
</code></pre>
</li>
<li><p>Add a constructor</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title">Function1</span>
{
 <span class="hljs-keyword">private</span> IAzureSearchService _searchService;
 <span class="hljs-keyword">private</span> IConfiguration _config;
 <span class="hljs-keyword">private</span> ILogger&lt;Function1&gt; _logger;

 <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-title">Function1</span>(<span class="hljs-params">IAzureSearchService azureSearchService, IConfiguration configuration, ILogger&lt;Function1&gt; logger</span>)</span>
 {
     _searchService = azureSearchService;
     _config = configuration;
     _logger = logger;
 }

 <span class="hljs-comment">// ... Your function code here</span>
}
</code></pre>
</li>
<li><p>Write out the function just as you would a typical controller endpoint. You'll have access to anything you injected through the constructor and assigned to a field. Here's an example:</p>
<pre><code class="lang-csharp">[<span class="hljs-meta">FunctionName(<span class="hljs-meta-string">"Function1"</span>)</span>]
<span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> Task <span class="hljs-title">Run</span>(<span class="hljs-params">[ServiceBusTrigger(<span class="hljs-string">"fancy-topic"</span>, <span class="hljs-string">"fancy-subscription"</span>, Connection = <span class="hljs-string">"ConnectionStringDefinedInEnvironment"</span></span>)]Message message)</span>
{
 <span class="hljs-comment">// Get the body of the message</span>
 <span class="hljs-keyword">var</span> body = Encoding.UTF8.GetString(message.Body);

 <span class="hljs-comment">// Get user defined properties (UserProperties) is just a dictionary</span>
 message.UserProperties.TryGetValue(<span class="hljs-string">"customProperty"</span>, <span class="hljs-keyword">out</span> <span class="hljs-keyword">object</span> customProperty);

 <span class="hljs-comment">// Deserialize the body</span>
 <span class="hljs-keyword">var</span> msgObject = JsonSerializer.Deserialize&lt;FancyMessage&gt;(body);

 <span class="hljs-comment">// Use the search service</span>
 <span class="hljs-keyword">await</span> _searchService.MergeDocument&lt;FancySearchIndexModel&gt;(<span class="hljs-keyword">new</span> FancySearchIndexModel
 { 
     <span class="hljs-comment">/* Reassign properties from message to search model here */</span>
 }, <span class="hljs-string">"FancyIndex"</span>);
}
</code></pre>
<p>This is it!</p>
</li>
</ol>
<p>The entire process basically goes as follows:</p>
<ol>
<li>Create Startup.cs</li>
<li>Add some attributes to specify that we're using a Functions-specific Startup class</li>
<li>Add services to the DI container</li>
<li>Inject services into the constructor of the function</li>
</ol>
<p>Thanks for reading!</p>
]]></content:encoded></item><item><title><![CDATA[How to access metrics in Kubernetes]]></title><description><![CDATA[First, let me say that there are multiple solutions to achieve the same outcome. One of the simplest ones is to install Kubernetes Dashboard and call it good. That said, though, while the dashboard is an effective way to quickly get insight into your...]]></description><link>https://paulers.com/kubernetes-how-to-access-metrics</link><guid isPermaLink="true">https://paulers.com/kubernetes-how-to-access-metrics</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[.NET]]></category><dc:creator><![CDATA[Paul K]]></dc:creator><pubDate>Thu, 21 May 2020 20:08:14 GMT</pubDate><content:encoded><![CDATA[<p>First, let me say that there are multiple solutions to achieve the same outcome. One of the simplest ones is to install <a target="_blank" href="https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/">Kubernetes Dashboard</a> and call it good. That said, though, while the dashboard is an effective way to quickly get insight into your cluster, perhaps you want to grab the metrics and feed them into our own data pipeline for more serious cluster performance monitoring.</p>
<p>The case I want to present today is somewhere between the Dashboard and the Data Pipeline. In these next few articles, we'll use a .NET Core library called <a target="_blank" href="https://github.com/tintoy/dotnet-kube-client/">KubeClient</a> to retrieve metrics from the cluster's API and print them to console. In the example code, we'll write a simple .NET Core console application in preparation for turning it into a cron job in a future article. Let's get started.</p>
<p>You can create a new project via <code>dotnet new console</code> and add the KubeClient package I linked above. I created a new directory called KubeMetricScraper to house this project.</p>
<pre><code class="lang-powershell">mkdir KubeMetricScraper
cd KubeMetricScraper
dotnet new console
dotnet add package KubeClient
</code></pre>
<p>You should see in your KubeMetricScraper.csproj file the following:</p>
<pre><code class="lang-xml"><span class="hljs-tag">&lt;<span class="hljs-name">Project</span> <span class="hljs-attr">Sdk</span>=<span class="hljs-string">"Microsoft.NET.Sdk"</span>&gt;</span>

  <span class="hljs-tag">&lt;<span class="hljs-name">PropertyGroup</span>&gt;</span>
    <span class="hljs-tag">&lt;<span class="hljs-name">OutputType</span>&gt;</span>Exe<span class="hljs-tag">&lt;/<span class="hljs-name">OutputType</span>&gt;</span>
    <span class="hljs-tag">&lt;<span class="hljs-name">TargetFramework</span>&gt;</span>netcoreapp3.1<span class="hljs-tag">&lt;/<span class="hljs-name">TargetFramework</span>&gt;</span>
  <span class="hljs-tag">&lt;/<span class="hljs-name">PropertyGroup</span>&gt;</span>

  <span class="hljs-tag">&lt;<span class="hljs-name">ItemGroup</span>&gt;</span>
    <span class="hljs-tag">&lt;<span class="hljs-name">PackageReference</span> <span class="hljs-attr">Include</span>=<span class="hljs-string">"KubeClient"</span> <span class="hljs-attr">Version</span>=<span class="hljs-string">"2.3.11"</span> /&gt;</span>
  <span class="hljs-tag">&lt;/<span class="hljs-name">ItemGroup</span>&gt;</span>

<span class="hljs-tag">&lt;/<span class="hljs-name">Project</span>&gt;</span>
</code></pre>
<p>Obviously if you're doing this months from now when .NET Core is version 17.8 or whatever, you'll see a different target framework. You'll have to make sure that KubeClient can run in the version of netcore you're targeting.</p>
<p>Let's add a couple more packages:</p>
<pre><code class="lang-powershell">dotnet add package microsoft.extensions.logging
dotnet add package microsoft.extensions.logging.console
dotnet add package newtonsoft.json
</code></pre>
<p>Open up the project using whatever editor you like and let's take a look at what KubeClient can do. First, let's add the logger:</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">class</span> <span class="hljs-title">Program</span> 
{
  <span class="hljs-function"><span class="hljs-keyword">static</span> <span class="hljs-keyword">async</span> Task <span class="hljs-title">Main</span>(<span class="hljs-params"><span class="hljs-keyword">string</span>[] args</span>)</span> 
  {
    ILoggerFactory loggers = <span class="hljs-keyword">new</span> LoggerFactory();
    loggers.AddConsole();
  }
}
</code></pre>
<p>You'll notice I swapped the default <code>void</code> for <code>async Task</code>. This is a feature of C# 7.1 so if you're running something older than that, you'll need to stick to <code>void</code> and use <code>GetAwaiter()</code> in the code whenever asynchronous code shows up. Hopefully you're on 7.1 though, because that's ugly.</p>
<p>Below the loggers, let's instantiate the KubeApiClient:</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">var</span> client = KubeApiClient.Create(<span class="hljs-string">"http://localhost:8001"</span>, loggers);
</code></pre>
<p>The above code requires that you're running <code>kubectl proxy</code> and have proxy access to the cluster. To set that up, you need kubectl installed and configured with credentials for the cluster. For more information, the <a target="_blank" href="https://kubernetes.io/docs/tasks/access-kubernetes-api/http-proxy-access-api/">official docs</a> are actually not bad. Assuming you have all that set up, let's continue on with C#.</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">var</span> nodes = <span class="hljs-keyword">await</span> client.NodesV1().List();
<span class="hljs-keyword">var</span> serializedNodes = JsonConvert.SerializeObject(nodes, Formatting.Indented);
Console.WriteLine(serializedNodes);
</code></pre>
<p>This should print out a list of nodes in your cluster. Great! Try this:</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">var</span> pods = <span class="hljs-keyword">await</span> client.PodsV1().List(<span class="hljs-literal">null</span>, <span class="hljs-string">"default"</span>);
</code></pre>
<p>When querying pods, you can provide the label and the namespace. If you don't provide the namespace, the <code>default</code> namespace is used. If you've been following my past articles, you can replace <code>"default"</code> with <code>"integration"</code> to get a list of pods in there.</p>
<p>You can also retrieve namespaces, services, jobs and more using the same syntax. Unfortunately, with the basic KubeClient, you cannot retrieve metrics. But wait... how then? The title of this article promised me...</p>
<p>One of KubeClient's chief perks is its extensibility. The library exposes the underlying ResourceClient and lets you build your own API queries on top of it. Anything the base KubeClient doesn't cover, you can build yourself. That's exactly what we're going to do in the next article.</p>
<p>Your code by the end of this article should look like this:</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">class</span> <span class="hljs-title">Program</span> 
{
  <span class="hljs-function"><span class="hljs-keyword">static</span> <span class="hljs-keyword">async</span> Task <span class="hljs-title">Main</span>(<span class="hljs-params"><span class="hljs-keyword">string</span>[] args</span>)</span> 
  {
    ILoggerFactory loggers = <span class="hljs-keyword">new</span> LoggerFactory();
    loggers.AddConsole();

    <span class="hljs-comment">// 'client' is the KubeClient instance we created earlier in this article</span>
    <span class="hljs-keyword">var</span> nodes = <span class="hljs-keyword">await</span> client.NodesV1().List();
    <span class="hljs-keyword">var</span> serializedNodes = JsonConvert.SerializeObject(nodes, Formatting.Indented);
    Console.WriteLine(serializedNodes);

    <span class="hljs-keyword">var</span> pods = <span class="hljs-keyword">await</span> client.PodsV1().List(<span class="hljs-literal">null</span>, <span class="hljs-string">"default"</span>);
    <span class="hljs-keyword">var</span> serializedPods = JsonConvert.SerializeObject(pods, Formatting.Indented);
    Console.WriteLine(serializedPods);

    Console.ReadLine();
  }
}
</code></pre>
<p>I encourage you to see what else KubeClient has to offer. In the next article we'll build a KubeClient extension and get us some metrics.</p>
]]></content:encoded></item><item><title><![CDATA[How to call other services in Kubernetes]]></title><description><![CDATA[In a scenario where you have multiple services running in one cluster, you will want them to communicate with each other. There are multiple ways to approach interservice communication; pubsub channel (like Redis), queues/topics (like RabbitMQ or Azu...]]></description><link>https://paulers.com/kubernetes-how-to-call-other-services</link><guid isPermaLink="true">https://paulers.com/kubernetes-how-to-call-other-services</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[aks]]></category><dc:creator><![CDATA[Paul K]]></dc:creator><pubDate>Thu, 14 May 2020 22:12:12 GMT</pubDate><content:encoded><![CDATA[<p>In a scenario where you have multiple services running in one cluster, you will want them to communicate with each other. There are multiple ways to approach interservice communication; pubsub channel (like Redis), queues/topics (like RabbitMQ or Azure Service Bus) or plain ole HTTP calls. Let's briefly discuss this last option but let me introduce you to Kubenet really quick.</p>
<p>As I mentioned in my previous articles on Kubernetes, intracluster communication is facilitated by something called Kubenet. Kubenet is a basic network plugin that does its job well but doesn't come with any bells and whistles like cross-node networking or policy management. If you're hosting your K8s cluster on Azure (or any other serious cloud provider), they will set up all the routing rules for your nodes and in some cases give you control over network policy configuration. In fact, Azure has an Advanced networking mode which gives you more control over the networking aspect of your cluster. You can read about Kubenet and Azure CNI <a target="_blank" href="https://docs.microsoft.com/en-us/azure/aks/configure-kubenet">here</a>.</p>
<p>In essence, kubenet's job is to assign IP addresses to pods, while the cluster's DNS service (kube-dns/CoreDNS) resolves service hostnames to IP addresses. When you add a new deployment with a pod and service, it's automatically assigned an IP address and immediately discoverable on your cluster. Let's say you've added a service like this:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Service</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">fancyapi-service</span>
  <span class="hljs-attr">namespace:</span> <span class="hljs-string">integration</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">selector:</span>
    <span class="hljs-attr">app:</span> <span class="hljs-string">fancyapi</span>
  <span class="hljs-attr">ports:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">http</span>
      <span class="hljs-attr">protocol:</span> <span class="hljs-string">TCP</span>
      <span class="hljs-attr">port:</span> <span class="hljs-number">80</span>
      <span class="hljs-attr">targetPort:</span> <span class="hljs-number">5000</span>
  <span class="hljs-attr">type:</span> <span class="hljs-string">ClusterIP</span>
</code></pre>
<p>Kubernetes will assign this service a cluster IP address (which you can view via <code>kubectl get svc</code>). To access this service from another service, you don't need to worry about using that IP address -- you can just use the name of this service; <strong>fancyapi-service</strong>.</p>
<p>An example in C# would look like this:</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">using</span> (<span class="hljs-keyword">var</span> client = <span class="hljs-keyword">new</span> HttpClient()) {
  <span class="hljs-keyword">var</span> result = <span class="hljs-keyword">await</span> client.GetAsync(<span class="hljs-string">$"http://fancyapi-service/v1/fancyproducts"</span>);
  <span class="hljs-keyword">if</span> (result.IsSuccessStatusCode) {
    <span class="hljs-keyword">return</span> JsonConvert.DeserializeObject&lt;List&lt;Product&gt;&gt;(<span class="hljs-keyword">await</span> result.Content.ReadAsStringAsync());
  }
  <span class="hljs-keyword">return</span> <span class="hljs-literal">null</span>;
}
</code></pre>
<p>We instantiate a new HTTP client (inside an async Task method) and call the <strong>fancyapi-service</strong> to get a list of products from the v1 controller. Cluster DNS will resolve <strong>fancyapi-service</strong> to the correct IP address and the traffic gets routed appropriately. Easy as pie!</p>
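<p>One detail worth knowing (this is standard Kubernetes DNS behavior, not specific to this app): the short name only resolves within the same namespace. From a pod in a different namespace, qualify the service name with its namespace, or use the full cluster DNS name:</p>

```shell
# Same namespace: the short name works
curl http://fancyapi-service/v1/fancyproducts
# From any namespace: append the namespace, or spell out the full cluster DNS name
curl http://fancyapi-service.integration/v1/fancyproducts
curl http://fancyapi-service.integration.svc.cluster.local/v1/fancyproducts
```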
]]></content:encoded></item><item><title><![CDATA[Journey into Kubernetes - Ingress]]></title><description><![CDATA[The last step in our cluster creation process -- creating an ingress controller. Let's talk about what that is.
Ingress means "going in". In the K8s scenario, this means traffic going into the cluster. Egress, the antonym of ingress, means going out...]]></description><link>https://paulers.com/journey-into-kubernetes-ingress</link><guid isPermaLink="true">https://paulers.com/journey-into-kubernetes-ingress</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[aks]]></category><dc:creator><![CDATA[Paul K]]></dc:creator><pubDate>Sat, 09 May 2020 22:02:55 GMT</pubDate><content:encoded><![CDATA[<p>The last step in our cluster creation process -- creating an ingress controller. Let's talk about what that is.</p>
<p>Ingress means "going in". In the K8s scenario, this means traffic going into the cluster. Egress, the antonym of ingress, means going out -- traffic leaving the cluster, such as a service sending data from inside the cluster to the outside world. An ingress controller (IC) is essentially a gateway into the cluster.</p>
<p>There are many different ICs to choose from, but we're going to focus on the K8s-supported <code>ingress-nginx</code> project (not to be confused with <code>nginx-ingress</code>, which is maintained by the folks behind nginx). Both nginx ICs are good choices, and the nginx-maintained one even has a paid 'plus' version with some extra features. However, for our purposes, and quite honestly the purposes of many production-bound projects, <code>ingress-nginx</code> is plenty.</p>
<p>You can visit the project website <a target="_blank" href="https://kubernetes.github.io/ingress-nginx/">here</a>.</p>
<p>On the site, go to the Deployment section -&gt; Installation guide and click on Azure. Installation takes just one command... but before you run it, we need to modify the installation file, because it doesn't include our SSL certificate. Go ahead and download the YAML file. You can either grab the link from the official website, or just right-click <a target="_blank" href="https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud/deploy.yaml">here</a> and Save As <code>deploy.yaml</code>.</p>
<p>The above YAML file is a master installer which will create a lot of artifacts in your cluster. It creates a namespace (we already did this step in the previous article, so nothing new will happen here), some ConfigMaps, a ServiceAccount, a ClusterRole and ClusterRoleBinding, some Roles and Services and of course a Deployment of a special version of nginx. There's a lot of stuff that's created but worry not! Our focus here is pretty narrow.</p>
<p>We must modify the nginx configuration to read our SSL certificate. The default nginx configuration which we get from the above installation does not include this important flag, so we must do it ourselves. Inside the downloaded YAML file, look for lines:</p>
<pre><code class="lang-yaml"><span class="hljs-comment"># Source: ingress-nginx/templates/controller-deployment.yaml</span>
<span class="hljs-attr">apiVersion:</span> <span class="hljs-string">apps/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Deployment</span>
</code></pre>
<p>This is the deployment for the nginx ingress controller. If you scroll down a bit, you'll see where the container spec is defined and these lines:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">args:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">/nginx-ingress-controller</span>
  <span class="hljs-string">...</span>
</code></pre>
<p>Go ahead and add this right after <code>/nginx-ingress-controller</code>:</p>
<pre><code class="lang-yaml"><span class="hljs-bullet">-</span> <span class="hljs-string">--default-ssl-certificate=ingress-nginx/aks-ingress-tls</span>
</code></pre>
<p>You should have this complete args:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">args:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">/nginx-ingress-controller</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">--default-ssl-certificate=ingress-nginx/aks-ingress-tls</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">--publish-service=ingress-nginx/ingress-nginx-controller</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">--election-id=ingress-controller-leader</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">--ingress-class=nginx</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">--configmap=ingress-nginx/ingress-nginx-controller</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">--validating-webhook=:8443</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">--validating-webhook-certificate=/usr/local/certificates/cert</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">--validating-webhook-key=/usr/local/certificates/key</span>
</code></pre>
<p>Save the modified installation YAML and apply it</p>
<pre><code class="lang-powershell">kubectl apply -f deploy.yaml
</code></pre>
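<p>Before moving on, it's worth confirming the controller actually came up (pod and service names may vary slightly between ingress-nginx versions):</p>

```shell
# The controller pod should be Running, and the controller service should exist
kubectl get pods -n ingress-nginx
kubectl get svc -n ingress-nginx
```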
<p>Halfway there! The ingress controller is created. One last step we need to take is to tell the ingress controller where to route the incoming traffic.</p>
<p>Inside your integration folder, create a new file; <code>ingress.yml</code> and put this inside:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">networking.k8s.io/v1beta1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Ingress</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">ingress-fancyapi</span>
  <span class="hljs-attr">namespace:</span> <span class="hljs-string">integration</span>
  <span class="hljs-attr">annotations:</span>
    <span class="hljs-attr">kubernetes.io/ingress.class:</span> <span class="hljs-string">"nginx"</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">tls:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">hosts:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">yourcooldomain.io</span>
    <span class="hljs-attr">secretName:</span> <span class="hljs-string">aks-ingress-tls</span>
  <span class="hljs-attr">rules:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">host:</span> <span class="hljs-string">integration.yourcooldomain.io</span>
    <span class="hljs-attr">http:</span>
      <span class="hljs-attr">paths:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">backend:</span>
          <span class="hljs-attr">serviceName:</span> <span class="hljs-string">fancyapi-service</span>
          <span class="hljs-attr">servicePort:</span> <span class="hljs-number">80</span>
</code></pre>
<p>There are a few new things here. First, we're adding an annotation. Annotations are metadata. In this case, we're attaching metadata required by the nginx ingress controller.</p>
<p>Next, in the spec map, we have tls and a list of hosts. I defined a made-up host and attached the secret we created in the previous article to it. For this to work, you'll need to update your DNS records: point an A record for yourcooldomain.io at the IP address of the ingress controller, and add a wildcard record for *.yourcooldomain.io as well. Also, it's important that the host <code>yourcooldomain.io</code> is the same host you defined when creating your SSL certificate!</p>
<p>Both the root record and the wildcard record need to point to the IP address of the ingress service. You can find this IP address via the following command:</p>
<pre><code class="lang-powershell">kubectl get svc -n=ingress-nginx
</code></pre>
<p>External IP is what you're looking for. Assuming your IP is 1.2.3.4, here's an example of what your DNS records should look like:</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>DNS Type</td><td>Hostname</td><td>IP Address</td></tr>
</thead>
<tbody>
<tr>
<td>A Record</td><td>yourcooldomain.io</td><td>1.2.3.4</td></tr>
<tr>
<td>A Record</td><td>*.yourcooldomain.io</td><td>1.2.3.4</td></tr>
</tbody>
</table>
</div><p>Your domain address provider is likely to have instructions on how to set this up.</p>
<p>In the rules map, we have a list again, but only one item. Here, we're saying "hey, when someone hits <code>integration.yourcooldomain.io</code>, I want them to go to the backend service <code>fancyapi-service</code> on port <code>80</code>". When you create your production ingress file inside the production namespace, you will probably just have <code>yourcooldomain.io</code> without a subdomain, or maybe <code>www.yourcooldomain.io</code> or something.</p>
<p>Almost there! Go ahead and apply this config to the K8s cluster:</p>
<pre><code class="lang-powershell">kubectl apply -f ingress.yml
</code></pre>
<p>Once your DNS provider has propagated the new entries, you can visit <code>https://integration.yourcooldomain.io</code>. Your browser will warn you that the SSL certificate cannot be trusted. Thankfully, you know you can trust it, because you created it! If you skip past the warning, you should arrive at your service!</p>
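<p>If you'd rather test from the command line, curl can skip certificate verification for our self-signed certificate:</p>

```shell
# -k (--insecure) skips certificate validation; fine here because we issued the cert ourselves
curl -k https://integration.yourcooldomain.io
```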
<p>Congratulations! You now have a namespaced cluster with a running service, an ingress controller and integrated SSL certificate. This is the end of the series, but there are at least 3 more Kubernetes articles to come. They will build on the knowledge we've gained in this series.</p>
<p>Happy Kubernetesing!</p>
<hr />
<p>Resources</p>
<ul>
<li><a target="_blank" href="https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/">Annotations</a></li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Journey into Kubernetes - SSL]]></title><description><![CDATA[In this short article I want to prepare us for what will be the last step in this series - ingress controller and TLS termination. This preparation will entail generating an SSL certificate and key on our local machine and storing them in a 'secret' ...]]></description><link>https://paulers.com/journey-into-kubernetes-ssl</link><guid isPermaLink="true">https://paulers.com/journey-into-kubernetes-ssl</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[aks]]></category><dc:creator><![CDATA[Paul K]]></dc:creator><pubDate>Wed, 06 May 2020 20:57:23 GMT</pubDate><content:encoded><![CDATA[<p>In this short article I want to prepare us for what will be the last step in this series - ingress controller and TLS termination. This preparation will entail generating an SSL certificate and key on our local machine and storing them in a 'secret' in our K8s cluster.</p>
<p>In a production scenario, you'll want to get a real certificate from a trusted entity like Verisign or use Let's Encrypt. That's not related to Kubernetes though, so it's out of scope of this series.</p>
<p>Let's get started!</p>
<p>You're gonna need bash shell installed on your PC, or some way to run <code>openssl</code>. If you're on Linux or Mac, you're golden, but if you're on Windows like me, you may already have Git for Windows installed which comes with bash and thus openssl. Open bash and type in:</p>
<pre><code class="lang-bash">openssl req -x509 -nodes -days 365 -newkey rsa:2048 -out aks-ingress-tls.crt -keyout aks-ingress-tls.key
</code></pre>
<p>We're creating a new self-signed x509 certificate with a 2048-bit RSA key, valid for 365 days. We're also asking for <strong>crt</strong> and <strong>key</strong> files to be created. When you run this command, you'll be asked a few questions. The only answer that really matters is the Common Name: enter the host you plan to serve (e.g. yourcooldomain.io). Everything else you can leave blank.</p>
<p>The <strong>crt</strong> and <strong>key</strong> files are created in the directory where you ran the command. I recommend you store them somewhere safe, perhaps in the same folder as your cluster definition YAML files. </p>
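<p>If you'd rather skip the interactive questions entirely, openssl also accepts the subject on the command line. A sketch, assuming yourcooldomain.io is the host you'll serve (match it to your ingress host later):</p>

```shell
# -subj answers the prompts non-interactively; CN (Common Name) is the field that matters
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -subj "/CN=yourcooldomain.io" \
  -out aks-ingress-tls.crt -keyout aks-ingress-tls.key
```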
<p>In preparation for the next article, we're going to create a new namespace. This namespace is also created by the ingress controller installation YAML, but we need it now... not in the next article. Go ahead and create a YAML file named <code>nginx-ingress.yml</code> and add this to it:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Namespace</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">ingress-nginx</span>
  <span class="hljs-attr">labels:</span>
    <span class="hljs-attr">app.kubernetes.io/name:</span> <span class="hljs-string">ingress-nginx</span>
    <span class="hljs-attr">app.kubernetes.io/instance:</span> <span class="hljs-string">ingress-nginx</span>
</code></pre>
<p>Apply it:</p>
<pre><code class="lang-powershell">kubectl apply -f nginx-ingress.yml
</code></pre>
<p>This will create a new namespace where we'll store the ingress controller. We'll discuss why we want a new namespace in the next article, but for now, just trust me!</p>
<p>In the same directory as the <strong>crt</strong> and <strong>key</strong> files, you can now run the following command:</p>
<pre><code class="lang-powershell">kubectl create secret tls aks-ingress-tls --namespace ingress-nginx --key aks-ingress-tls.key --cert aks-ingress-tls.crt
</code></pre>
<p>We're creating a <strong>secret</strong> in our cluster which we can then refer to later when creating the ingress controller and setting up TLS termination. This secret contains the certificate and the key we created earlier.</p>
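<p>You can verify the secret landed in the right namespace before moving on:</p>

```shell
# Should show a kubernetes.io/tls secret with 2 data entries (tls.crt and tls.key)
kubectl get secret aks-ingress-tls -n ingress-nginx
```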
<p>Great! We're now ready for the moment you've all been waiting for -- making our API service hosted inside our K8s cluster accessible to the public, complete with SSL support! That's coming up in the next article.</p>
<hr />
<p>Resources:</p>
<ul>
<li><a target="_blank" href="https://kubernetes.io/docs/concepts/configuration/secret/">Secrets</a></li>
<li><a target="_blank" href="https://spin.atomicobject.com/2014/05/12/openssl-commands/">OpenSSL Commands</a></li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Journey into Kubernetes - Services]]></title><description><![CDATA[A service makes pods accessible. If you want the pods in your deployment to communicate with other pods or be accessible from outside the cluster, you must define a service. There are 4 types of services, but the two we're interested in are ClusterIP...]]></description><link>https://paulers.com/journey-into-kubernetes-services</link><guid isPermaLink="true">https://paulers.com/journey-into-kubernetes-services</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[aks]]></category><dc:creator><![CDATA[Paul K]]></dc:creator><pubDate>Sun, 03 May 2020 21:17:10 GMT</pubDate><content:encoded><![CDATA[<p>A service makes pods accessible. If you want the pods in your deployment to communicate with other pods or be accessible from outside the cluster, you must define a service. There are 4 types of services, but the two we're interested in are ClusterIP and LoadBalancer. ClusterIP does not expose a public IP address, so the service can only communicate within the cluster. LoadBalancer on the other hand does expose a public IP and can be hit from the outside world.</p>
<p>If you have pods which do not require communication with any other pods, then you don't need a service. This could be a cron job, or an app which listens to a queue and sends an e-mail. Services have several properties, but to get up and running you need just a few pieces in place. Let's create a <code>fancyapi-service.yml</code> file and toss this inside:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Service</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">fancyapi-service</span>
  <span class="hljs-attr">namespace:</span> <span class="hljs-string">integration</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">selector:</span>
    <span class="hljs-attr">app:</span> <span class="hljs-string">fancyapi</span>
  <span class="hljs-attr">ports:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">http</span>
      <span class="hljs-attr">protocol:</span> <span class="hljs-string">TCP</span>
      <span class="hljs-attr">port:</span> <span class="hljs-number">80</span>
      <span class="hljs-attr">targetPort:</span> <span class="hljs-number">5000</span>
  <span class="hljs-attr">type:</span> <span class="hljs-string">ClusterIP</span>
</code></pre>
<p>The interesting parts of this definition are in the spec map. First, we have the selector. You may recall from the last article about deployments that selectors let you grab objects which have tags assigned. In this definition, we're grabbing things tagged with <code>app: fancyapi</code>. Just so happens that our deployment has that tag!</p>
<p>The ports map is where you define how you want your pods to be accessible. In the above example, we're defining just one port named <code>http</code> using the <code>TCP</code> protocol. The next two values are important to get right:</p>
<ul>
<li><code>port</code> is what other services will call this service on</li>
<li><code>targetPort</code> is the port that the container is exposing</li>
</ul>
<p>In this example we're assuming the exposed port in the container is 5000 (the default ASP.NET Core application port), but obviously yours may be different. This is the same port you may have defined in your Dockerfile's <code>EXPOSE</code> instruction, or published with the <code>-p</code> flag when running your container image.</p>
<p>The last thing we need to talk about is the <code>type: ClusterIP</code>. ClusterIP means the service will have an internal IP address assigned by K8s. Your service will not have an outside IP address. If you wanted to expose your service outside the cluster, you could do so by setting the type to NodePort. When you do this, K8s opens a static port on every node's IP address and routes it to your service. The port is usually in the 30000s range by default.</p>
<p>You could also expose it via <code>type: LoadBalancer</code>. We're going to use LoadBalancer a bit later in this article series when we set up an Ingress Controller. For now, let's stick to ClusterIP. Go ahead and apply this YML:</p>
<pre><code class="lang-powershell">kubectl apply -f fancyapi-service.yml
</code></pre>
<p>Then check your services:</p>
<pre><code class="lang-powershell">kubectl get services --namespace integration
</code></pre>
<p>You'll see a table with one item in it. It will have a CLUSTER-IP that starts with 10 and not have an EXTERNAL-IP. That's okay, we'll get there.</p>
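<p>If you want to confirm the selector actually matched your pods, check the service's endpoints (this assumes the fancyapi deployment from the previous article is running):</p>

```shell
# Lists the pod IPs the service routes to; "<none>" means the selector matched nothing
kubectl get endpoints fancyapi-service -n integration
```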
<p>Most tutorials I found end here at this step. They'll tell you to use NodePort or LoadBalancer and call it good. Setting up an Ingress Controller is honestly not that difficult though, so I don't know why nobody goes further. We'll get started with the Ingress Controller in the next article by creating a local SSL certificate.</p>
]]></content:encoded></item><item><title><![CDATA[Journey into Kubernetes - Deployments]]></title><description><![CDATA[Now that our namespaces and resource quotas are set, it's time to finally start putting some microservices into the cluster.
The way a service gets up into the cluster is via two main paths: Pod and Deployment. In YAML, you can define a pod, tell it ...]]></description><link>https://paulers.com/journey-into-kubernetes-deployments</link><guid isPermaLink="true">https://paulers.com/journey-into-kubernetes-deployments</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[aks]]></category><dc:creator><![CDATA[Paul K]]></dc:creator><pubDate>Thu, 30 Apr 2020 21:15:58 GMT</pubDate><content:encoded><![CDATA[<p>Now that our namespaces and resource quotas are set, it's time to finally start putting some microservices into the cluster.</p>
<p>The way a service gets up into the cluster is via two main paths: Pod and Deployment. In YAML, you can define a pod, tell it what image to run and go! This will spin up a single pod in the cluster. As I mentioned in a previous article, pods can contain more than one container, but generally they don't. Pods have a finite lifetime, and when they restart, everything inside the pod is lost. Persisting data inside a pod itself is thus a non-starter.</p>
<p>There's a lot more to learn about pods, but for the sake of not getting bogged down in details, let's just understand pods as container instances. Great, let's move onto Deployments.</p>
<h3 id="heading-what-is-a-deployment">What is a Deployment?</h3>
<p>An easy way to understand a deployment is as a definition of a microservice. A deployment is a wrapper around a pod or pods with additional functionality. You can define how many replicas of a pod you always want up. You can control a set of pods via the deployment instead of each individual pod. For example, if you have a deployment which defines 6 replicas of a pod, if you wanted to stop the pods, you'd have to individually shut down each one. Or... you just stop the deployment, and it stops all those replicas for you. You can scale the replicas up or down, and more! Check the link in resources below for a LOT more information about deployments.</p>
<p>Let's create a deployment! Call it 'fancyapi-deployment.yaml':</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">apps/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Deployment</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">fancyapi-deployment</span>
  <span class="hljs-attr">namespace:</span> <span class="hljs-string">integration</span>
  <span class="hljs-attr">labels:</span>
    <span class="hljs-attr">app:</span> <span class="hljs-string">fancyapi</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">replicas:</span> <span class="hljs-number">2</span>
  <span class="hljs-attr">template:</span>
    <span class="hljs-attr">metadata:</span>
      <span class="hljs-attr">name:</span> <span class="hljs-string">fancyapi</span>
      <span class="hljs-attr">labels:</span>
        <span class="hljs-attr">app:</span> <span class="hljs-string">fancyapi</span>
    <span class="hljs-attr">spec:</span>
      <span class="hljs-attr">containers:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">fancyapi</span>
          <span class="hljs-attr">image:</span> <span class="hljs-string">...</span>
          <span class="hljs-attr">imagePullPolicy:</span> <span class="hljs-string">Always</span>
          <span class="hljs-attr">env:</span>
            <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">ASPNETCORE_ENVIRONMENT</span>
              <span class="hljs-attr">value:</span> <span class="hljs-string">Integration</span>
          <span class="hljs-attr">resources:</span>
            <span class="hljs-attr">requests:</span>
              <span class="hljs-attr">cpu:</span> <span class="hljs-string">250m</span>
              <span class="hljs-attr">memory:</span> <span class="hljs-string">128Mi</span>
            <span class="hljs-attr">limits:</span>
              <span class="hljs-attr">cpu:</span> <span class="hljs-string">1000m</span>
              <span class="hljs-attr">memory:</span> <span class="hljs-string">512Mi</span>
      <span class="hljs-attr">restartPolicy:</span> <span class="hljs-string">Always</span>
  <span class="hljs-attr">selector:</span>
    <span class="hljs-attr">matchLabels:</span>
      <span class="hljs-attr">app:</span> <span class="hljs-string">fancyapi</span>
</code></pre>
<p>There's a lot going on here. Let's take a look piece by piece.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">apps/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Deployment</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">fancyapi-deployment</span>
  <span class="hljs-attr">labels:</span>
    <span class="hljs-attr">app:</span> <span class="hljs-string">fancyapi</span>
</code></pre>
<p>This should be pretty standard by now. Creating a <code>Deployment</code> named <code>fancyapi-deployment</code>. Theoretically the <code>-deployment</code> is redundant, but when I <code>kubectl get deployments</code> and see the list, I like to be reassured I'm looking at deployments and not something else. Personal preference really. We add a label to the deployment named <code>app</code> with the value <code>fancyapi</code>.</p>
<p>Next, replicas.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">spec:</span>
  <span class="hljs-attr">replicas:</span> <span class="hljs-number">2</span>
</code></pre>
<p>This tells K8s that we want 2 replicas of this pod running at all times. When we add this KVP, K8s creates a ReplicaSet in the background. We'll come back to it later.</p>
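<p>A nice side effect of replicas being declarative is that scaling becomes a one-liner, no YAML edit needed (a sketch using this article's names):</p>

```shell
# Change the desired replica count on the fly
kubectl scale deployment fancyapi-deployment -n integration --replicas=4
# The ReplicaSet created behind the scenes reflects the new count
kubectl get replicaset -n integration
```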
<p>Under the <code>template</code> map is where we define what every pod replica should look like. You're already familiar with the metadata, so let's skip right down to the spec.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">spec:</span>
  <span class="hljs-attr">containers:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">fancyapi</span>
      <span class="hljs-attr">image:</span> <span class="hljs-string">...</span>
      <span class="hljs-attr">imagePullPolicy:</span> <span class="hljs-string">Always</span>
      <span class="hljs-attr">env:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">ASPNETCORE_ENVIRONMENT</span>
          <span class="hljs-attr">value:</span> <span class="hljs-string">Integration</span>
      <span class="hljs-attr">resources:</span>
        <span class="hljs-attr">requests:</span>
          <span class="hljs-attr">cpu:</span> <span class="hljs-string">250m</span>
          <span class="hljs-attr">memory:</span> <span class="hljs-string">128Mi</span>
        <span class="hljs-attr">limits:</span>
          <span class="hljs-attr">cpu:</span> <span class="hljs-string">1000m</span>
          <span class="hljs-attr">memory:</span> <span class="hljs-string">512mi</span>
    <span class="hljs-attr">restartPolicy:</span> <span class="hljs-string">Always</span>
</code></pre>
<p>The <code>containers</code> property is plural because, as I mentioned in a previous article, you can host multiple containers in one pod. In our case it's gonna be a list with a single item in it.</p>
<p>Much like everything else in the K8s world, a container needs a name. The image needs to be a URI pointing at the ACR. Let's say, for example, that your registry is called CoolContainerRegistry and that your image is called <code>fancyapi</code> and has a tag <code>v1</code>. You'll point it at ACR like this:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">image:</span> <span class="hljs-string">coolcontainerregistry.azurecr.io/fancyapi:v1</span>
</code></pre>
<p>You'll need to find the URI of your registry in the Azure Portal, or you can find it using the Azure CLI:</p>
<pre><code class="lang-powershell">az acr list | sls "loginServer"
</code></pre>
<p>The <code>imagePullPolicy</code> of <code>Always</code> tells Kubernetes to pull the image from the registry every time a container starts. This ensures that when you restart your pods after a new release, the new image gets pulled.</p>
<blockquote>
<p>There are multiple different strategies for deploying containers to pods. The strategy in this deployment is just one, where you use the same tag (v1) for every release. However, you may want to tag every release with a new version (like v1.16.3.879). This will make your image look like <code>fancyapi:v1.16.3.879</code> and to ensure your pod's running the latest version, you'd have to use the <code>kubectl set image</code> command. Or more likely you have releases tagged with <code>:int</code> and <code>:prod</code> like in the scenario we've been building. In any case, you have options.</p>
</blockquote>
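<p>To illustrate the versioned-tag strategy from the note above, here's roughly what rolling the deployment to a new tag looks like with <code>kubectl set image</code> (using the deployment and container names from our example):</p>
<pre><code class="lang-powershell">kubectl set image deployment/fancyapi-deployment fancyapi=coolcontainerregistry.azurecr.io/fancyapi:v1.16.3.879
</code></pre>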
<p>Next, we have the <code>env</code> list. This is a list of maps, each containing the name of an environment variable and its value. You can add any environment variables you want your container to have right here. In the case of ASP.NET Core, you may want to inject the environment name. We're hardcoding it here, but there's a neat trick where you can use the pod's metadata properties as environment variable values.</p>
<p>For example, if your pod is hosted in the namespace <strong>production</strong>, you could inject the variable like this:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">env:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">ASPNETCORE_ENVIRONMENT</span>
    <span class="hljs-attr">valueFrom:</span>
      <span class="hljs-attr">fieldRef:</span>
        <span class="hljs-attr">fieldPath:</span> <span class="hljs-string">metadata.namespace</span>
</code></pre>
<p>This would set <code>ASPNETCORE_ENVIRONMENT</code> to the namespace's name. Note that namespace names are lowercase DNS labels, so the value will be <code>production</code>, not <code>Production</code>. ASP.NET Core's environment checks are case-insensitive, but file lookups inside a Linux container are not, so you'd name the settings file <code>appsettings.production.json</code> to match.</p>
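<p>The same downward-API trick works for other pod metadata too. As an illustrative sketch, you could expose the pod's own name to your app (handy for tagging logs), with a variable name of your choosing:</p>
<pre><code class="lang-yaml">env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
</code></pre>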
<p>Right then. Next, we have the resources map with <code>requests</code> and <code>limits</code> child maps. If you recall the Resources discussion from an earlier article, these are pod-specific values. </p>
<p>Requests are what the pod is guaranteed to get (provided it's not over the namespace's request quota) and limits are the maximum the pod is allowed to consume. These values are extremely important to set! It will take some time to determine how much CPU and memory your service takes up, but once you have the rough numbers and set them, the cluster will manage everything for you. If your traffic somehow explodes, your entire cluster won't get hosed -- only the pod under load gets constrained: CPU over the limit gets throttled, while memory over the limit gets the container OOM-killed (thus possibly resulting in 502s to calling users, but better than a total outage).</p>
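<p>To get those rough numbers, you can watch actual consumption while your service is under load (this assumes the metrics-server add-on is running in your cluster, which AKS deploys by default):</p>
<pre><code class="lang-powershell">kubectl top pods
</code></pre>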
<blockquote>
<p>As an aside, when you have your microservices all nice and cozy in the cluster and you're ready to load test, check out <a target="_blank" href="https://artillery.io">artillery.io</a>. I use this nodejs lib to hammer my APIs and collect results. Very handy tool.</p>
</blockquote>
<p>Okay, lastly, we have the <code>restartPolicy: Always</code>. This tells the pod to restart its containers whenever anything makes them exit -- if the pod runs out of resources, if there's a critical boot failure inside the container, or if one of the containers inside the pod dies. It also covers positive events like a container gracefully exiting; in that case the pod restarts it and stays in the 'Running' state. There are other policies, <code>OnFailure</code> and <code>Never</code>, though note that a Deployment's pod template only accepts <code>Always</code>. The other two are handy for Jobs and CronJobs -- specifically <code>Never</code>, since you don't want a job to keep rerunning itself after it's finished... unless you do.</p>
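<p>For completeness, here's a minimal sketch of a Job using <code>restartPolicy: Never</code> -- the name, image and command here are made up for illustration:</p>
<pre><code class="lang-yaml">apiVersion: batch/v1
kind: Job
metadata:
  name: one-off-cleanup
spec:
  template:
    spec:
      containers:
        - name: cleanup
          image: busybox
          command: ["sh", "-c", "echo cleaning up && exit 0"]
      restartPolicy: Never
</code></pre>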
<p>Great, almost done!</p>
<p>Last section is the selector:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">selector:</span>
  <span class="hljs-attr">matchLabels:</span>
    <span class="hljs-attr">app:</span> <span class="hljs-string">fancyapi</span>
</code></pre>
<p>Here we have defined a label selector. You can see that this selector is on the same level as the <code>template</code> map. The selector applies to the deployment and is not part of the pod template. <code>fancyapi</code> is a label we added to the pod template. If you look at the other two keys on this level, <code>replicas</code> and <code>template</code>, you can deduce that the deployment basically says <em>I want two replicas of the pod template with the label <code>app: fancyapi</code></em>. It's a bit confusing, isn't it? Leave it to Google to overcomplicate a concept and then <a target="_blank" href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/">not document it well</a>.</p>
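<p>Labels aren't just for the deployment's selector -- you can filter by them from the CLI as well. For example, to list only the pods carrying our label:</p>
<pre><code class="lang-powershell">kubectl get pods -l app=fancyapi
</code></pre>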
<p>Anyway, you now have your deployment YAML. Go ahead and apply it:</p>
<pre><code class="lang-powershell">kubectl apply -f fancyapi-deployment.yaml
</code></pre>
<p>Let's see it in our cluster:</p>
<pre><code class="lang-powershell">kubectl get deployments
</code></pre>
<p>We can also see that K8s spun up two pods for us, because we requested 2 replicas:</p>
<pre><code class="lang-powershell">kubectl get pods
</code></pre>
<p>Hurrah! But wait... you have an API running now, but how do you access it from the outside? That's coming up in the next article!</p>
<hr />
<p>Resources:</p>
<ul>
<li><a target="_blank" href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/">Deployments</a></li>
<li><a target="_blank" href="https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/">Environment Variables</a></li>
<li><a target="_blank" href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy">Pod Restart Policy</a></li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Journey into Kubernetes - Resources]]></title><description><![CDATA[We don't want a rogue microservice to eat up all the CPU and memory in the cluster. Pods can be restricted to use a specific % of CPU and a flat amount of memory. They can be restricted either individually or at namespace level.

Pods are individual ...]]></description><link>https://paulers.com/journey-into-kubernetes-resources</link><guid isPermaLink="true">https://paulers.com/journey-into-kubernetes-resources</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[aks]]></category><dc:creator><![CDATA[Paul K]]></dc:creator><pubDate>Mon, 27 Apr 2020 23:01:01 GMT</pubDate><content:encoded><![CDATA[<p>We don't want a rogue microservice to eat up all the CPU and memory in the cluster. Pods can be restricted to use a specific % of CPU and a flat amount of memory. They can be restricted either individually or at namespace level.</p>
<blockquote>
<p>Pods are individual instances of our microservice. They are wrappers around your Docker containers -- but they can contain more than just one container. You're unlikely to have more than one container running inside a pod, unless you need something like a timed cron-job or a queue listener which the main service running inside the pod depends on. That additional container is called a sidecar. To summarize, for simplicity's sake, a pod is an instance of a container with some K8s specific metadata.</p>
</blockquote>
<h3 id="heading-resource-quotas">Resource Quotas</h3>
<p>We'll use resource quotas for individual pods in a future article. For now, let's focus on namespace-level resource quotas and limits. In our scenario we have two namespaces - Production and Integration. We're running these two environments on the same cluster, so we want to make sure that our Integration environment doesn't blow up Production when we run some tests. We can thus limit Integration to have a finite amount of resources available to it, while letting Production consume the rest.</p>
<p>To do this, we'll be writing a new YAML file. Inside the Cluster folder you created in the previous article add a new file <code>resources.yml</code>. Here's what it looks like to start:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">ResourceQuota</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">resourcequotas</span>
  <span class="hljs-attr">namespace:</span> <span class="hljs-string">integration</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">hard:</span>
    <span class="hljs-attr">requests.cpu:</span> <span class="hljs-string">1500m</span>
    <span class="hljs-attr">requests.memory:</span> <span class="hljs-string">2Gi</span>
    <span class="hljs-attr">limits.cpu:</span> <span class="hljs-string">3000m</span>
    <span class="hljs-attr">limits.memory:</span> <span class="hljs-string">4Gi</span>
</code></pre>
<p>Then, apply it:</p>
<pre><code class="lang-powershell">kubectl apply -f resources.yml
</code></pre>
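<p>You can verify the quota took effect, and later see how much of it is in use, with:</p>
<pre><code class="lang-powershell">kubectl describe resourcequota resourcequotas -n integration
</code></pre>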
<p>Great, so, what did we do there? Let's talk about it.</p>
<p>The first few lines are the same as every other YAML file you're gonna write. The <code>apiVersion</code>, <code>kind</code> and <code>metadata</code> are all the same keys. The kind is of type ResourceQuota, because that's what we're adding! There's a new addition to the metadata map called <code>namespace</code>. This tells K8s to apply these settings to a specific namespace.</p>
<p>Next, we have the <code>spec</code> - short for specification. This map is where you define what you want the ResourceQuota to look like. The <code>hard</code> keyword tells Kubernetes that these are hard limits... that is, they cannot be skirted. If you define a pod whose <code>requests.cpu</code> is higher than its <code>limits.cpu</code>, it will fail to start.</p>
<p>Inside the hard map we have 4 KVPs. The values defined here are valid across all running pods and are a sum of all request and limit resources. Let's look at a scenario to illustrate:</p>
<p>PodA requests 100m. PodB requests 300m. PodC requests 1000m. Total, the requests are 1400m. That's within the hard limit of 1500m, so we're okay. If the sum of all requests from our pods exceeds that of the request quotas, some of the pods will fail to start. Same goes for limits.</p>
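<p>For reference, those per-pod values are declared on each container, like this sketch (the numbers are arbitrary, but note they fit comfortably within our namespace quota):</p>
<pre><code class="lang-yaml">resources:
  requests:
    cpu: 100m
    memory: 256Mi
  limits:
    cpu: 300m
    memory: 512Mi
</code></pre>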
<p>You might be wondering... what is 1500m? The 'm' stands for millicpu, or 1/1000th of a CPU. When you provision your cluster, you pick the size of your nodes. For example, you might have 2 vCPUs per node and 8GB of memory. 2 vCPUs is 2000m, 4 vCPUs is 4000m. You can also use decimals for CPU: 0.1, 0.3 and 1 are 100m, 300m and 1000m respectively. Usually you'll want to stick to the m notation, but just know that decimals are possible.</p>
<p>For memory, you can use suffixes like G, M and k for powers of 1000, or Gi, Mi and Ki for powers of 1024. Nothing new here.</p>
<p>Okay great, we have our quotas for the integration environment! Let's take a look at one more thing - Limit Ranges.</p>
<h3 id="heading-limit-ranges">Limit Ranges</h3>
<p>Resource quotas set limits on namespaces. However, they do not set limits on individual pods. This means that even though a rogue, untested microservice in integration won't bring down our cluster, it may still bring down our integration environment! That's because within a namespace, a pod that declares no limits of its own may consume as much memory as it wants. That's where limit ranges come in.</p>
<p>Let's define one, then talk about it. Inside the same <code>resources.yml</code> file, after the last written line, add the following:</p>
<pre><code class="lang-yaml"><span class="hljs-meta">---</span>

<span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">LimitRange</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">limitranges</span>
  <span class="hljs-attr">namespace:</span> <span class="hljs-string">integration</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">limits:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">default:</span>
        <span class="hljs-attr">cpu:</span> <span class="hljs-string">300m</span>
        <span class="hljs-attr">memory:</span> <span class="hljs-string">256Mi</span>
      <span class="hljs-attr">defaultRequest:</span>
        <span class="hljs-attr">cpu:</span> <span class="hljs-string">10m</span>
        <span class="hljs-attr">memory:</span> <span class="hljs-string">128Mi</span>
      <span class="hljs-attr">max:</span>
        <span class="hljs-attr">cpu:</span> <span class="hljs-string">600m</span>
        <span class="hljs-attr">memory:</span> <span class="hljs-string">1024Mi</span>
      <span class="hljs-attr">min:</span>
        <span class="hljs-attr">cpu:</span> <span class="hljs-string">10m</span>
        <span class="hljs-attr">memory:</span> <span class="hljs-string">32Mi</span>
      <span class="hljs-attr">type:</span> <span class="hljs-string">Container</span>
</code></pre>
<p>Let's say you have a pod in which you don't specify request or limit values (remember, you can specify these on pod level -- we'll get to it in a later article). With the LimitRange set for the namespace, said pod would automatically get the default limit/request values under default/defaultRequest.</p>
<p>The max and min maps bound the values an individual pod can define inside its YAML file. If you have a pod where you DO define requests and limits, they cannot exceed the bounds defined for the namespace. These limits are meant to ensure the stability of the cluster.</p>
<p>Lastly, we assign these limits to a container via the <code>type</code> KVP. As I mentioned above, a pod is a wrapper around a container or multiple containers. If you have two containers inside a pod, each one would get the defaults and max/min values defined above.</p>
<p>Go ahead and apply the limit ranges:</p>
<pre><code class="lang-powershell">kubectl apply -f resources.yml
</code></pre>
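<p>As with the quota, you can inspect the limit range to confirm the defaults and bounds were applied:</p>
<pre><code class="lang-powershell">kubectl describe limitrange limitranges -n integration
</code></pre>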
<h3 id="heading-summary">Summary</h3>
<p>Defining resource quotas and limit ranges at namespace level ensures that the cluster won't get hosed. In addition to CPU and memory resource limits, there are also object limits. You can limit how many pods can run inside a namespace, for example. We didn't touch on those here, but it's possible. Check the resources at the bottom of this article for more information.</p>
<p>You can view more information about the integration namespace and see the resource quotas and limits with this command:</p>
<pre><code class="lang-powershell">kubectl describe namespace integration
</code></pre>
<p>In the next article we'll introduce Deployments.</p>
<hr />
<p>Resources</p>
<ul>
<li><a target="_blank" href="https://kubernetes.io/docs/concepts/policy/resource-quotas/">Resource Quotas</a></li>
<li><a target="_blank" href="https://kubernetes.io/docs/concepts/policy/limit-range/">Limit Range</a></li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Journey into Kubernetes - Namespaces]]></title><description><![CDATA[Great, you have your cluster, you have your kubectl CLI installed... let's get creatin'!

We're going to be using the YAML declarative language to create all our objects in our K8s cluster. It's also possible to do all this via the kubectl command, bu...]]></description><link>https://paulers.com/journey-into-kubernetes-namespaces</link><guid isPermaLink="true">https://paulers.com/journey-into-kubernetes-namespaces</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[aks]]></category><dc:creator><![CDATA[Paul K]]></dc:creator><pubDate>Fri, 24 Apr 2020 20:02:20 GMT</pubDate><content:encoded><![CDATA[<p>Great, you have your cluster, you have your kubectl CLI installed... let's get creatin'!</p>
<blockquote>
<p>We're going to be using the YAML declarative language to create all our objects in our K8s cluster. It's also possible to do all this via the <code>kubectl</code> command, but quite frankly, that's silly. Just FYI though, you can create namespaces, deployments, pods, and other things via <code>kubectl</code>, you just shouldn't.</p>
</blockquote>
<p>When we write a YAML file, we'll then <em>apply</em> it with this command:</p>
<pre><code class="lang-powershell">kubectl apply -f .\namespaces.yml
</code></pre>
<p>You'll be using the <code>apply</code> command a lot. Applying a YAML file tells the K8s cluster what you want some piece of it to look like. If you're confused, worry not, it will all become clear as we go along!</p>
<p>Go ahead and create a new folder where you'll be storing your YAML files. Inside that folder, create two new folders: Cluster, Services. We want to keep the cluster setup separate from the rest of your YAML files.</p>
<p>Inside the Cluster folder, create a file <code>namespaces.yml</code>. We're going to create two namespaces: Integration and Production. In a real-world scenario, you'd also want at least a Staging namespace in addition to the two aforementioned ones, but here we'll be fine with just two.</p>
<p>Inside the new <code>namespaces.yml</code> file, add the following code:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Namespace</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">production</span>
  <span class="hljs-attr">labels:</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">prod</span>

<span class="hljs-meta">---</span>

<span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Namespace</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">integration</span>
  <span class="hljs-attr">labels:</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">int</span>
</code></pre>
<p>One thing to note, when writing YAML, you do <strong>NOT</strong> want to use tabs. Always use spaces. The number of spaces is up to you, but no tabs! It will fail to parse when applying if you use tabs.</p>
<p>Let's check to make sure kubectl works and you see the nodes you created in the previous step.</p>
<pre><code class="lang-powershell">kubectl get nodes
</code></pre>
<p>You should see two nodes. If you don't, ensure you're targeting the right context (cluster). You can use <code>az aks get-credentials</code> to fetch a cluster's credentials and make it your current context.</p>
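<p>If you need to switch clusters, the command below merges the cluster's credentials into your kubeconfig and makes it the current context (swap in your own resource group and cluster names):</p>
<pre><code class="lang-powershell">az aks get-credentials --resource-group MyResourceGroup --name MyCluster
</code></pre>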
<p>If you see two nodes, you're good to go. Let's apply this YAML file to the cluster, then discuss its contents.</p>
<pre><code class="lang-powershell">kubectl apply -f .\namespaces.yml
</code></pre>
<p>You'll see something like this:</p>
<pre><code class="lang-powershell">namespace/production created
namespace/integration created
</code></pre>
<p>Okay, let's look at the YAML real quick. The first key-value pair (KVP) is <code>apiVersion: v1</code>. Kubernetes is an evolving project and there have been lots of different versions of definition syntax. Which version to use when is pretty confusing, even to a seasoned K8s navigator. In this case, we're using <code>v1</code>.</p>
<p>Next, the KVP <code>kind: Namespace</code> tells K8s that this definition is for a Namespace object. Pretty self-explanatory.</p>
<p>Then we have a metadata map. The first KVP is <code>name: production</code>. As you guessed, this sets the namespace's name.</p>
<p>Last, we have the labels map with a label KVP <code>name: prod</code>. Labels are used to organize and group things... as in any other service. For example, in Azure, you can add tags to any entities you create. Same concept. If you create additional objects, you can add the same label <code>name: prod</code>. Labels are <em>not unique</em>.</p>
<p>Okay! Now that you know what's what, let's see your namespaces.</p>
<pre><code class="lang-powershell">kubectl get namespaces
</code></pre>
<p>You'll get a list of namespaces, along with Status and Age properties. There's a <code>default</code> namespace, that's where all objects go if you don't specify a namespace for them. There are some <code>kube-*</code> namespaces which are system namespaces. Lastly, your two namespaces should be there as well.</p>
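<p>You can also filter that list using the labels we added:</p>
<pre><code class="lang-powershell">kubectl get namespaces -l name=prod
</code></pre>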
<blockquote>
<p>By default, all your kubectl commands go against the <code>default</code> namespace. To make things more convenient when applying all YAML configuration files later, we're going to set the integration namespace as our default instead. You can use the <code>kubectl config set-context --current --namespace=integration</code> command to do that. Alternatively, if you don't want to set your current context, you can add <code>-n integration</code> after every command you type. I think setting the context is more convenient, but it's your call.</p>
</blockquote>
<p>Let's leave it here. In the next article, we'll talk about Resource Quotas and Limit Ranges. We don't want our microservices to go rogue and bring down our entire cluster...</p>
]]></content:encoded></item></channel></rss>