<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Frontend Ramblings feed</title><description>Ramblings and scraps about frontend topics</description><link>https://webpro.nl</link><lastBuildDate>Sun, 12 Oct 2025 09:48:12 GMT</lastBuildDate><atom:link href="https://webpro.nl/blog/feed.xml" rel="self" type="application/rss+xml"/><language>en</language><copyright>© 2023 Lars Kappert</copyright><category>frontend</category><category>javascript</category><category>node.js</category><category>dotfiles</category><category>terminal</category><item><title>Shell function to show line in file with context</title><link>https://webpro.nl/scraps/shell-function-show-file-line-with-context</link><guid isPermaLink="true">https://webpro.nl/scraps/shell-function-show-file-line-with-context</guid><description>&lt;h1&gt;Shell function to show line in file with context&lt;/h1&gt;
&lt;p&gt;Today I wrote a shell function to display a line in a file, optionally with
some surrounding lines for context. This comes in handy when looking at output
like stack traces that include file names and line numbers, and you want to see
what&apos;s in that file at that line.&lt;/p&gt;
&lt;p&gt;Built-in IDE terminals often feature clickable links; this &lt;code&gt;line&lt;/code&gt; function is
for when you don&apos;t have that luxury.&lt;/p&gt;
&lt;h2&gt;Usage&lt;/h2&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ line
Usage: line &amp;lt;file&amp;gt; &amp;lt;line_number&amp;gt; [lines_around=0]
       line &amp;lt;file:line_number[:column]&amp;gt;
       cat &amp;lt;file&amp;gt; | line &amp;lt;line_number&amp;gt; [lines_around=0]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The following examples are all equivalent and show line number 10 of &lt;code&gt;file.txt&lt;/code&gt;
with 2 extra lines before and after:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ line file.txt 10 2
$ line file.txt:10:5 2
$ cat file.txt | line 10 2
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The default context is &lt;code&gt;0&lt;/code&gt; lines.&lt;/p&gt;
&lt;h2&gt;Script&lt;/h2&gt;
&lt;p&gt;Add this function somewhere in your &lt;code&gt;$HOME/.bash_profile&lt;/code&gt; (or equivalent):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;line() {
  local FILE LINE_NUMBER LINES_AROUND=0
  local NAME=&amp;quot;${FUNCNAME[0]}&amp;quot;

  if [[ ! -t 0 ]]; then
    LINE_NUMBER=$1
    LINES_AROUND=${2:-$LINES_AROUND}
  elif [[ $1 =~ ^([^:]+):([0-9]+)(:[0-9]+)?$ ]]; then
    FILE=&amp;quot;${BASH_REMATCH[1]}&amp;quot;
    LINE_NUMBER=&amp;quot;${BASH_REMATCH[2]}&amp;quot;
    LINES_AROUND=${2:-$LINES_AROUND}
  else
    FILE=$1
    LINE_NUMBER=$2
    LINES_AROUND=${3:-$LINES_AROUND}
  fi

  if [[ -t 0 &amp;amp;&amp;amp; -z &amp;quot;$FILE&amp;quot; || -z &amp;quot;$LINE_NUMBER&amp;quot; ]]; then
    echo &amp;quot;Usage: ${NAME} &amp;lt;file&amp;gt; &amp;lt;line_number&amp;gt; [lines_around=0]
       ${NAME} &amp;lt;file:line_number[:column]&amp;gt;
       cat &amp;lt;file&amp;gt; | ${NAME} &amp;lt;line_number&amp;gt; [lines_around=0]&amp;quot;
    return 1
  fi

  if [[ -t 0 &amp;amp;&amp;amp; -n &amp;quot;$FILE&amp;quot; &amp;amp;&amp;amp; ! -f &amp;quot;$FILE&amp;quot; ]]; then
    echo &amp;quot;${NAME}: $FILE: No such file or directory&amp;quot;
    return 1
  fi

  sed -n &amp;quot;`expr $LINE_NUMBER - $LINES_AROUND`,`expr $LINE_NUMBER + $LINES_AROUND`p&amp;quot; ${FILE:+&amp;quot;$FILE&amp;quot;}
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;TIL&lt;/h2&gt;
&lt;p&gt;What I found interesting:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Use &lt;code&gt;-t 0&lt;/code&gt; to check whether &lt;code&gt;stdin&lt;/code&gt; (file descriptor &lt;code&gt;0&lt;/code&gt;) is connected to a
terminal; if the test fails, input is non-interactive and thus piped (or
redirected, tbh I haven&apos;t dug into that)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;BASH_REMATCH&lt;/code&gt; is the equivalent of, for example, this in JavaScript:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;const BASH_REMATCH = str.match(regex);
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;FUNCNAME[0]&lt;/code&gt; holds the name of the function itself&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;sed&lt;/code&gt; always seems to have a new trick up its sleeve. The combination of
&lt;code&gt;sed&lt;/code&gt; + &lt;code&gt;expr&lt;/code&gt; in the script above results in, for example, &lt;code&gt;sed -n &amp;quot;8,12p&amp;quot;&lt;/code&gt;, which
prints lines 8-12 of the input.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
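&lt;p&gt;As an aside, the same range can be built with bash arithmetic expansion instead
of &lt;code&gt;expr&lt;/code&gt;, which also makes it easy to clamp the start line at &lt;code&gt;1&lt;/code&gt; (&lt;code&gt;sed&lt;/code&gt; rejects
line address &lt;code&gt;0&lt;/code&gt;). A standalone sketch with a made-up name, not part of the
script above:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;show_range() {
  # show_range &amp;lt;file&amp;gt; &amp;lt;line_number&amp;gt; [lines_around=0]
  local start=$(($2 - ${3:-0})) end=$(($2 + ${3:-0}))
  ((start &amp;lt; 1)) &amp;amp;&amp;amp; start=1  # clamp for context near the top of the file
  sed -n &amp;quot;${start},${end}p&amp;quot; &amp;quot;$1&amp;quot;
}
&lt;/code&gt;&lt;/pre&gt;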
&lt;h2&gt;In closing&lt;/h2&gt;
&lt;p&gt;Obviously I&apos;ve added &lt;code&gt;line&lt;/code&gt; to &lt;a href=&quot;https://github.com/webpro/dotfiles&quot;&gt;my dotfiles&lt;/a&gt;!&lt;/p&gt;
</description><pubDate>Wed, 02 Jul 2025 00:00:00 GMT</pubDate><category>shell</category><category>bash</category><category>line</category><category>file</category><category>pipe</category><category>sed</category><category>BASH_REMATCH</category></item><item><title>Beyond Git aliases: git clone + cd</title><link>https://webpro.nl/scraps/beyond-git-aliases-clone-cd</link><guid isPermaLink="true">https://webpro.nl/scraps/beyond-git-aliases-clone-cd</guid><description>&lt;h1&gt;Beyond Git aliases: git clone + cd&lt;/h1&gt;
&lt;p&gt;When a Git alias doesn&apos;t provide enough scripting power, here&apos;s a clean way
to go beyond it.&lt;/p&gt;
&lt;p&gt;This is a little technique I cobbled together while trying to clone a GitHub
repo and &lt;code&gt;cd&lt;/code&gt; into the created directory in a single go; I couldn&apos;t find an easy
way to do this using a Git alias alone.&lt;/p&gt;
&lt;p&gt;The first step is optional. For a clean approach, create an alias to hook into
later. In your global Git config:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-ini&quot;&gt;[alias]
	c = clone
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To &lt;code&gt;cd&lt;/code&gt; into the most recently modified directory we&apos;ll create a little
function. It&apos;s useful on its own, so let&apos;s store it separately:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;cdl() {
  cd &amp;quot;$(ls -dt */ | head -n 1)&amp;quot;
}
&lt;/code&gt;&lt;/pre&gt;
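&lt;p&gt;The pipeline inside &lt;code&gt;cdl&lt;/code&gt; sorts directories by modification time, newest first.
A quick sanity check (made-up directory names):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ mkdir older; sleep 1; mkdir newer
$ ls -dt */ | head -n 1
newer/
&lt;/code&gt;&lt;/pre&gt;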
&lt;p&gt;Now to bring it all together, add this function somewhere in the &lt;code&gt;.bash_profile&lt;/code&gt;
(or what have you):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;function git() {
  if [[ &amp;quot;$1&amp;quot; == &amp;quot;c&amp;quot; ]]; then
    local url=&amp;quot;$2&amp;quot;
    [[ &amp;quot;$url&amp;quot; =~ ^[^:]+/.+$ ]] &amp;amp;&amp;amp; url=&amp;quot;git@github.com:${url}.git&amp;quot;
    command git clone &amp;quot;$url&amp;quot; &amp;quot;${@:3}&amp;quot; &amp;amp;&amp;amp; cdl
  else
    command git &amp;quot;$@&amp;quot;
  fi
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This only affects &lt;code&gt;git c&lt;/code&gt; and leaves everything else alone, including
&lt;code&gt;git clone&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;From now on, &lt;code&gt;git c webpro/venz&lt;/code&gt; will clone the repository and &lt;code&gt;cd&lt;/code&gt; into it
right away. Any valid Git URL will still work as expected.&lt;/p&gt;
</description><pubDate>Thu, 19 Jun 2025 00:00:00 GMT</pubDate><category>git</category><category>clone</category><category>cd</category></item><item><title>Privileged, but not a clown</title><link>https://webpro.nl/articles/privileged-but-not-a-clown</link><guid isPermaLink="true">https://webpro.nl/articles/privileged-but-not-a-clown</guid><description>&lt;h1&gt;Privileged, but not a clown&lt;/h1&gt;
&lt;p&gt;This is a story in 3 parts about OSS and me: how it started, how we got here,
and how it&apos;s going.&lt;/p&gt;
&lt;h2&gt;Part 1. How it started&lt;/h2&gt;
&lt;h3&gt;View Source&lt;/h3&gt;
&lt;p&gt;Soon after my first &amp;quot;View Source&amp;quot; in Internet Explorer 5.0, and copy-pasting a
snippet from a &lt;code&gt;&amp;lt;SCRIPT&amp;gt;&lt;/code&gt; tag into some text editor, I started using free and
open source software.&lt;/p&gt;
&lt;p&gt;PHP, MySQL, phpMyAdmin, JSLint, Prototype, jQuery and Lodash come to mind. Using
and understanding this free software taught me a lot about programming, and
allowed me to start a career as a developer.&lt;/p&gt;
&lt;h3&gt;Enthusiasm&lt;/h3&gt;
&lt;p&gt;Remember Apache Ant, Closure Compiler or YUI Compressor? It&apos;s what I used to
bundle and minify CSS, JS and HTML, using XML! This little tool became
&lt;a href=&quot;https://github.com/webpro/jaguarundi&quot;&gt;Jaguarundi&lt;/a&gt; 13 years ago, and now I&apos;m sitting here realizing I&apos;ve always had
this relentless enthusiasm for developer tooling.&lt;/p&gt;
&lt;h3&gt;Movement&lt;/h3&gt;
&lt;p&gt;Around 2009, projects like jQuery and Lodash got me into both freelancing and
open source. I remember working for Backbase at the time, but also looking at
this OSS movement. I wanted to use it, be part of it, build cool things with it!
Alongside heroes like John Resig (jQuery) and John-David Dalton (Lodash), many
developers in the OSS community were doing their part. Some genius work got
pushed and it was all available right there in the open! I&apos;ve always enjoyed
solving problems with code, and more and more browser APIs and libraries became
available to do just that.&lt;/p&gt;
&lt;p&gt;As a freelance frontend developer I got to use the latest &amp;amp; greatest OSS in
my work. This was (and still is!) great, nothing to complain about.&lt;/p&gt;
&lt;h3&gt;Open&lt;/h3&gt;
&lt;p&gt;I love coding, I love sharing. I&apos;ve been coding in the open since 2010. As a
self-taught developer I&apos;ve learned a ton from reading other people&apos;s source
code. From pushing code and receiving &lt;a href=&quot;https://github.com/release-it/release-it/issues/356&quot;&gt;honest feedback&lt;/a&gt; about it. I still
like to think some of it has once been helpful or inspirational to someone in
one way or another.&lt;/p&gt;
&lt;h3&gt;Act of kindness&lt;/h3&gt;
&lt;p&gt;There&apos;s a little story I&apos;d like to share here. &lt;a href=&quot;https://x.com/ksatirli&quot;&gt;Kerim Satirli&lt;/a&gt;, an early
adopter of release-it, reached out to me about 6 years ago. He wanted to send me
a gift as an appreciation of my work, and sent me two delicious bottles of beer!
I&apos;ll never forget this wonderful act of kindness. I mean, sending swag and all
is cool, but this hit different. Sometimes we almost forget there are actually
human beings on both ends. Thanks, Kerim!&lt;/p&gt;
&lt;h3&gt;Win&lt;/h3&gt;
&lt;p&gt;And just this morning, &lt;a href=&quot;https://www.linkedin.com/posts/jonas-felix_thank-you-lars-kappert-for-letting-us-letsbootch-activity-7212367489383497730-wNl4&quot;&gt;Jonas Felix reminds me&lt;/a&gt; of his feature request for
&lt;a href=&quot;https://github.com/webpro/reveal-md&quot;&gt;reveal-md&lt;/a&gt; earlier this year. Support for Mermaid diagrams in Reveal.js
slides is useful to other users of reveal-md as well, and they&apos;ve sponsored my
work on this. A win for everyone, thanks Jonas!&lt;/p&gt;
&lt;h3&gt;Privilege&lt;/h3&gt;
&lt;p&gt;Pushing code. Honestly, &lt;a href=&quot;https://github.com/release-it/release-it/issues/425#issuecomment-448718399&quot;&gt;I didn&apos;t mean to earn anything&lt;/a&gt; with any of this. My
projects aren&apos;t extraordinary, I&apos;m a developer in a safe and rich country, and
so many other developers deserve at least the same. Being involved in OSS is
simply a great way to keep learning and stay hirable with an open profile, which
is an invaluable privilege in itself.&lt;/p&gt;
&lt;h2&gt;Part 2. How we got here&lt;/h2&gt;
&lt;h3&gt;Return the favor&lt;/h3&gt;
&lt;p&gt;Let&apos;s imagine for a moment you&apos;re a developer working in a commercial setting.
You&apos;re adding a valuable open-source package to &lt;code&gt;package.json&lt;/code&gt;. The pull request
gets merged, and you or someone else on the team returns the favor to the
creator of the dependency on behalf of the company. Sounds fair enough, right?&lt;/p&gt;
&lt;p&gt;In contrast with more traditional, commercial software, this transaction usually
doesn&apos;t happen, and no one is accountable. Businesses love permissive
licenses; this is why open-source became so popular in the first place. No
matter how you look at it, exploitation it is. It feels like an odd thing to say,
knowing there are always humans on both ends. That&apos;s why we need to
&lt;a href=&quot;https://x.com/slicknet/status/1798420849214783831&quot;&gt;normalize companies paying for OSS&lt;/a&gt;.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Does your company have a budget for FOSS?&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Yes, that&apos;s &amp;quot;budget&amp;quot; and &amp;quot;FOSS&amp;quot; in the same sentence. If the software is of
value in a commercial setting and creators or maintainers indicate support is
welcome, then by all means, please provide it if you can.&lt;/p&gt;
&lt;p&gt;Like myself, many of you probably prefer support from companies over that from
fellow developers. Because the latter is mostly a redistribution of hard-earned
income (let alone tax-over-tax deductions).&lt;/p&gt;
&lt;h3&gt;Sustainable&lt;/h3&gt;
&lt;p&gt;There&apos;s something sadly ironic about &lt;a href=&quot;https://x.com/rakyll/status/1803198809671082074&quot;&gt;developers trying to get hired&lt;/a&gt;
elsewhere, just to sustain their work on OSS. If we talk about sustainable open
source, this isn&apos;t it. Some of those brilliant minds are potentially much more
effective and happy when working on their own projects used by many others,
compared to being an IC for a single company.&lt;/p&gt;
&lt;h3&gt;Initiatives&lt;/h3&gt;
&lt;p&gt;Lots of awesome people and companies are improving the situation around
open-source funding. By financially supporting people and projects, hiring
developers to let them work on (their own) open source, and initiatives like
&lt;a href=&quot;https://opencollective.com&quot;&gt;OpenCollective&lt;/a&gt;, &lt;a href=&quot;https://tidelift.com/about/lifter&quot;&gt;Tidelift&lt;/a&gt;, &lt;a href=&quot;https://polar.sh&quot;&gt;Polar&lt;/a&gt;, &lt;a href=&quot;https://tea.xyz&quot;&gt;Tea&lt;/a&gt;, and so on. I&apos;m glad
to see the needle is moving.&lt;/p&gt;
&lt;h3&gt;Incentives&lt;/h3&gt;
&lt;p&gt;Some metrics in use by such initiatives are based on the number of downloads or
the number of dependencies. SMART perhaps, but such metrics can be abused and
don&apos;t necessarily represent the actual value of a project. Sometimes popular
projects become even counter-productive to the OSS community as a whole. We need
different incentives to better distribute funds.&lt;/p&gt;
&lt;h3&gt;Bills&lt;/h3&gt;
&lt;p&gt;The number of dependent projects, GitHub stars, npm downloads, contributors and
so on are useful metrics. They&apos;re good to get a sense of usage and popularity.
Grateful people publicly and privately praising and contributing to your work is
important and a great motivation overall. These types of input might be
fulfilling for quite some time. But eventually, the harsh reality is that none
of this pays the bills.&lt;/p&gt;
&lt;h3&gt;AS-IS&lt;/h3&gt;
&lt;p&gt;Now let&apos;s imagine for a moment you&apos;re a developer and excited your work is on
GitHub. It&apos;s open, it&apos;s visible, and it seems useful to others. The moment other
people start using your work an interesting dynamic starts to happen. Feedback,
bug reports and feature requests begin to trickle in. Open-source licenses come
with an &amp;quot;AS-IS&amp;quot; clause and no SLAs, so remember: you don&apos;t owe anything to
anyone. Yet chances are you feel a sense of connection and responsibility.&lt;/p&gt;
&lt;h3&gt;Trust&lt;/h3&gt;
&lt;p&gt;In fact, &lt;a href=&quot;https://x.com/antfu7/status/1805891402204672367&quot;&gt;trust is essential to OSS&lt;/a&gt; and most developers tend to build a lot
of trust working in the open. In many ways: by being responsive to questions and
feedback, by adding automated test suites, by not breaking things unexpectedly,
and so on. This requires time and energy. And some of that feedback might be
ungrateful and negative, consuming even more energy.&lt;/p&gt;
&lt;h3&gt;Space&lt;/h3&gt;
&lt;p&gt;Dealing with negativity is another interesting dynamic. How do you handle it?
Sometimes you feel the urge to address negative feedback immediately and get it
over with. Other times, you should get a good night&apos;s rest first and avoid
responding in a way you might regret. I try to remind myself such decisions are
made in the &lt;a href=&quot;https://www.goodreads.com/quotes/5231688-between-stimulus-and-response-there-is-a-space-in-that&quot;&gt;space between stimulus and response&lt;/a&gt;, and understanding I&apos;m in
control here makes all the difference.&lt;/p&gt;
&lt;p&gt;Dealing positively even with the most unconstructive negative feedback is a
superpower. It&apos;s just another data point highlighting a potential weakness in
the project. Why return negativity immediately, if you can find the space for a
positive response instead?&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;You are not your code&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;As a developer, I think this is one of the best principles to live by. Bad code
will get better. The more important question is: does it solve a real problem?
If it&apos;s useful to you, then that&apos;s great already. If it&apos;s your ambition to reach
more people and if it solves a problem they&apos;re having, then that&apos;s fantastic.
You put in the work and hopefully you&apos;ll reap the rewards.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Only you get to decide what the rewards actually mean to you&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h3&gt;Stories&lt;/h3&gt;
&lt;p&gt;Rewards can come in many shapes or forms. But whatever constitutes low rewards
to you is what fuels burnout. Here are some recent stories I&apos;ve read about
burnout and exploitation in the OSS community:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://sapegin.me/blog/open-source-no-more/&quot;&gt;Artem Sapegin: Why I quit open source&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://antfu.me/posts/mental-health-oss&quot;&gt;Anthony Fu: Mental Health in Open Source&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=9YQgNDLFYq8&quot;&gt;David Whitney: Open-Source Exploitation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://youtu.be/KEWV5yaj_2o?si=-tNvxLvhc-D4jPM6&quot;&gt;ThePrimeagen: My Burnout Experience&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Unfortunately this is just the tip of the iceberg. I&apos;m grateful Artem,
Anthony, David and Michael shared their stories. So many projects their creators
poured their energy into didn&apos;t, for one reason or another, get the recognition
they deserved or the support they needed.&lt;/p&gt;
&lt;h2&gt;Part 3. How it&apos;s going&lt;/h2&gt;
&lt;p&gt;To be honest, I&apos;m not even sure what I&apos;m getting at with all of this. Is it
idealism? Ignorance? Profit-impaired? I&apos;m still not sure what to call it.
Whatever it is, my vision on things slowly started to change. I feel like I can
and want to contribute and share in a more sustainable way than I&apos;m used to, and
I guess that&apos;s what I&apos;m trying to figure out these days.&lt;/p&gt;
&lt;h3&gt;Stability&lt;/h3&gt;
&lt;p&gt;My most-downloaded project is Release It! Over a decade old and &lt;a href=&quot;https://github.com/release-it/release-it#projects-using-release-it&quot;&gt;tons of cool
projects&lt;/a&gt; use it daily to publish their packages. &lt;a href=&quot;https://github.com/intuit/auto&quot;&gt;There&lt;/a&gt; &lt;a href=&quot;https://github.com/googleapis/release-please&quot;&gt;are&lt;/a&gt;
&lt;a href=&quot;https://github.com/semantic-release/semantic-release&quot;&gt;plenty&lt;/a&gt; &lt;a href=&quot;https://github.com/sindresorhus/np&quot;&gt;of&lt;/a&gt; &lt;a href=&quot;https://github.com/changesets/changesets&quot;&gt;alternatives&lt;/a&gt;, but apparently its stability and
feature set are still appreciated by many people. So I&apos;ll happily keep
maintaining it; it&apos;s not going anywhere.&lt;/p&gt;
&lt;h3&gt;Knip&lt;/h3&gt;
&lt;p&gt;The thing you might know I can hardly contain my excitement about these days is
Knip. I love working on Knip, because it solves a real problem I have in
projects myself. Finding dead code and unused dependencies in an automated
fashion is an invaluable tool in my belt. Another reason I enjoy working on it
is that it&apos;s a right-sized project for me, with still many elements I can learn
from and improve. It goes without saying that Knip has really great
contributors, and I&apos;m receiving &lt;a href=&quot;https://knip.dev/sponsors&quot;&gt;financial support from wonderful backers&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;Reward&lt;/h3&gt;
&lt;p&gt;The greatest reward I can get is fellow developers sharing they got rid of dead
code in their codebase, and how they&apos;ve added Knip to their CI workflow. I will
never get enough of those little red blocks on GitHub for deleted code
(especially when done using Knip). It&apos;s that relentless enthusiasm for developer
tooling, I guess!&lt;/p&gt;
&lt;h3&gt;Cycle&lt;/h3&gt;
&lt;p&gt;At some point Knip will have a successor, or become obsolete. That&apos;s the cycle,
and that&apos;s how it should be. Until then, my goal is to keep raising the bar and
just keep doing my part.&lt;/p&gt;
&lt;h3&gt;Trying&lt;/h3&gt;
&lt;p&gt;Principles to live by and stories about burnout in the OSS community are
important to share and have taught me a lot.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;I try to only pour in the amount of energy I can afford to lose&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Fortunate enough to not have experienced a burnout myself, I feel like this is
going well. I won&apos;t let anyone trick, threaten or nerd-snipe me into anything
more. Unless it&apos;s a critical issue, that fix can usually wait.&lt;/p&gt;
&lt;p&gt;There are days I couldn&apos;t care less about silly edge cases. Other days, bugfixes
arrive in your &lt;code&gt;node_modules&lt;/code&gt; faster than you can say &amp;quot;Node.js is actually
pretty damn good&amp;quot;.&lt;/p&gt;
&lt;h3&gt;Clown&lt;/h3&gt;
&lt;p&gt;As with so many things, there&apos;s a tipping point where the law of diminishing
returns kicks in, and sometimes I can&apos;t help but feel like a clown.&lt;/p&gt;
&lt;p&gt;Since I believe the value of my contributions to OSS is too far off from what
I&apos;m receiving in monthly sponsorships, I&apos;ll sure be looking for more funding in
one way or another. Or rethink what the heck I&apos;m doing here.&lt;/p&gt;
&lt;h2&gt;Care&lt;/h2&gt;
&lt;p&gt;Coding is great, the OSS community is great. I can&apos;t imagine not sharing and
learning, and I want to keep doing my part. Rest assured I&apos;ll be taking care of
myself along the way. Writing this article was a great step!&lt;/p&gt;
</description><pubDate>Fri, 28 Jun 2024 00:00:00 GMT</pubDate></item><item><title>Using subpath imports &amp; path aliases</title><link>https://webpro.nl/articles/using-subpath-imports-and-path-aliases</link><guid isPermaLink="true">https://webpro.nl/articles/using-subpath-imports-and-path-aliases</guid><description>&lt;p&gt;Subpath imports and TypeScript path aliases are useful and convenient features,
especially in large codebases. Both are pretty widely supported across runtimes
and bundlers for the web. However, the situation is different in more &amp;quot;vanilla&amp;quot;
setups when using the TypeScript compiler (&lt;code&gt;tsc&lt;/code&gt;) directly.&lt;/p&gt;
&lt;p&gt;This article is about using subpath imports and path aliases with &lt;code&gt;tsc&lt;/code&gt;.
Specifically, we&apos;re going to discuss two pitfalls when compiling to JavaScript
for a runtime like Node.js:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;#subpath-imports&quot;&gt;Subpath imports&lt;/a&gt; are less well-known, and not fully supported in IDEs
before TypeScript v5.4.&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;#typescript-path-aliases&quot;&gt;TypeScript path aliases&lt;/a&gt; are not supported by Node.js.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;em&gt;tl;dr:&lt;/em&gt; See the &lt;a href=&quot;#recommendations&quot;&gt;recommendations&lt;/a&gt; and &lt;a href=&quot;#closing-note&quot;&gt;closing note&lt;/a&gt; at the end.&lt;/p&gt;
&lt;h2&gt;Subpath imports&lt;/h2&gt;
&lt;p&gt;Subpath imports are configured in &lt;code&gt;package.json&lt;/code&gt;. They&apos;re a runtime- and
dependency-free option to use aliases. Here&apos;s an example import with a hash
specifier:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;import { add } from &apos;#utils/calc.js&apos;;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Internal subpath &lt;code&gt;imports&lt;/code&gt; are configured in &lt;code&gt;package.json&lt;/code&gt; like so:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
  &amp;quot;name&amp;quot;: &amp;quot;my-lib&amp;quot;,
  &amp;quot;version&amp;quot;: &amp;quot;1.0.0&amp;quot;,
  &amp;quot;imports&amp;quot;: {
    &amp;quot;#utils/*.js&amp;quot;: &amp;quot;./lib/utils/*.js&amp;quot;,
    &amp;quot;#sub/*.js&amp;quot;: &amp;quot;./lib/sub/path/*.js&amp;quot;
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Using &lt;code&gt;*.js&lt;/code&gt; in subpath imports configuration is essentially the same as
&lt;code&gt;**/*.js&lt;/code&gt; in glob patterns, so it recurses into subdirectories.&lt;/p&gt;
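&lt;p&gt;For instance, given the &lt;code&gt;imports&lt;/code&gt; map above, a nested specifier resolves into a
subdirectory accordingly (hypothetical file name):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;// resolves to ./lib/utils/math/vector.js via the &amp;quot;#utils/*.js&amp;quot; entry
import { scale } from &apos;#utils/math/vector.js&apos;;
&lt;/code&gt;&lt;/pre&gt;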
&lt;p&gt;Make sure to check out the &lt;a href=&quot;https://nodejs.org/api/packages.html#subpath-imports&quot;&gt;Node.js → subpath imports&lt;/a&gt; documentation for more
features, such as conditional exports.&lt;/p&gt;
&lt;h3&gt;Problem&lt;/h3&gt;
&lt;p&gt;Support for subpath imports in &lt;code&gt;package.json&lt;/code&gt; has been in TypeScript since v4.5,
so &lt;code&gt;tsc&lt;/code&gt; compiles them just fine. But the TypeScript Language Server did not
fully catch up until v5.4.&lt;/p&gt;
&lt;h3&gt;Solution (option 1)&lt;/h3&gt;
&lt;p&gt;Upgrade TypeScript to v5.4.0+ and use only a single subpath &lt;code&gt;&amp;quot;imports&amp;quot;&lt;/code&gt;
configuration in &lt;code&gt;package.json&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
  &amp;quot;imports&amp;quot;: {
    &amp;quot;#utils/*.js&amp;quot;: &amp;quot;./dist/utils/*.js&amp;quot;,
    &amp;quot;#sub/*.js&amp;quot;: &amp;quot;./dist/sub/path/*.js&amp;quot;
  },
  &amp;quot;scripts&amp;quot;: {
    &amp;quot;build&amp;quot;: &amp;quot;tsc&amp;quot;
  },
  &amp;quot;dependencies&amp;quot;: {
    &amp;quot;typescript&amp;quot;: &amp;quot;5.4.0&amp;quot;
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;(Install &lt;code&gt;typescript@beta&lt;/code&gt; until &lt;code&gt;latest&lt;/code&gt; is &lt;code&gt;5.4.0&lt;/code&gt; or higher.)&lt;/p&gt;
&lt;p&gt;TypeScript will resolve paths properly and prioritize the aliases in
auto-import suggestions in your IDE. Here&apos;s an example of how that looks:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;./auto-import-suggestion.png&quot; alt=&quot;Auto-import suggestion&quot;&gt;&lt;/p&gt;
&lt;p&gt;Pros:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Supported natively by Node.js (since v12.19.0/v14.6.0) and fully supported in
TypeScript since v5.4.0.&lt;/li&gt;
&lt;li&gt;Subpath imports can make use of &lt;a href=&quot;https://nodejs.org/api/packages.html#conditional-exports&quot;&gt;conditional exports&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Cons:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Syntax is restricted to what subpath imports support:
&lt;ul&gt;
&lt;li&gt;The &lt;code&gt;#hash&lt;/code&gt; specifier syntax must be used (not &lt;code&gt;@&lt;/code&gt; or &lt;code&gt;~&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;&amp;quot;#/*&amp;quot;&lt;/code&gt; alias is invalid, but something as short as &lt;code&gt;&amp;quot;#@/*&amp;quot;&lt;/code&gt; is valid.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If you have path aliases configured in &lt;code&gt;tsconfig.json&lt;/code&gt; you&apos;d need to replace
them with subpath imports across your codebase.&lt;/p&gt;
&lt;p&gt;If this is not an option for you, let&apos;s discuss some alternatives.&lt;/p&gt;
&lt;h2&gt;TypeScript path aliases&lt;/h2&gt;
&lt;p&gt;Path aliases are a similar feature to subpath imports. Here&apos;s an example
configuration for the TypeScript compiler:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
  &amp;quot;compilerOptions&amp;quot;: {
    &amp;quot;paths&amp;quot;: {
      &amp;quot;~/utils/*&amp;quot;: [&amp;quot;./src/utils/*&amp;quot;],
      &amp;quot;~/sub/*&amp;quot;: [&amp;quot;./src/sub/path/*&amp;quot;]
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Problem&lt;/h3&gt;
&lt;p&gt;The TypeScript compiler (&lt;code&gt;tsc&lt;/code&gt;) does not rewrite import specifiers, so they&apos;re
still the same when compiled to JavaScript:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;import { add } from &apos;~/utils/calc.js&apos;;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;However, this syntax is not supported at runtime in Node.js, resulting in an
error:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ node index.js
Error [ERR_MODULE_NOT_FOUND]: Cannot find package &apos;~&apos; imported from [...]/index.js
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Solution&lt;/h3&gt;
&lt;p&gt;We have some options here:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Switch to &lt;a href=&quot;#solution-option-1&quot;&gt;subpath imports with TypeScript v5.4.0+&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;#option-2-build-time-resolution&quot;&gt;Build time resolution&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;#option-3-runtime-resolution&quot;&gt;Runtime resolution&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;Option 2: Build time resolution&lt;/h2&gt;
&lt;p&gt;You can use path aliases and &lt;a href=&quot;https://www.npmjs.com/package/tsc-alias&quot;&gt;tsc-alias&lt;/a&gt; to convert them after the fact to
relative paths in the output that &lt;code&gt;tsc&lt;/code&gt; generates:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
  &amp;quot;scripts&amp;quot;: {
    &amp;quot;build&amp;quot;: &amp;quot;tsc &amp;amp;&amp;amp; tsc-alias&amp;quot;
  },
  &amp;quot;dependencies&amp;quot;: {
    &amp;quot;tsc-alias&amp;quot;: &amp;quot;1.8.8&amp;quot;,
    &amp;quot;typescript&amp;quot;: &amp;quot;5.3.3&amp;quot;
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
  &amp;quot;compilerOptions&amp;quot;: {
    &amp;quot;paths&amp;quot;: {
      &amp;quot;~/*&amp;quot;: [&amp;quot;./src/*&amp;quot;],
      &amp;quot;~such/path/*&amp;quot;: [&amp;quot;./src/much/wow/*&amp;quot;]
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;
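&lt;p&gt;With this setup, an aliased import in the TypeScript source ends up as a plain
relative path in the compiled output, roughly like so (hypothetical file in
&lt;code&gt;src/&lt;/code&gt;, with the first path alias above):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;// src/index.ts
import { add } from &apos;~/utils/calc.js&apos;;

// dist/index.js, after running tsc &amp;amp;&amp;amp; tsc-alias
import { add } from &apos;./utils/calc.js&apos;;
&lt;/code&gt;&lt;/pre&gt;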
&lt;p&gt;Pros:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Use path aliases as supported by TypeScript.&lt;/li&gt;
&lt;li&gt;No duplicate configuration.&lt;/li&gt;
&lt;li&gt;No performance hit at runtime.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Cons:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Requires a dependency (e.g. &lt;code&gt;tsc-alias&lt;/code&gt;).&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Option 3: Runtime resolution&lt;/h2&gt;
&lt;p&gt;Other solutions work at runtime. A popular option is &lt;a href=&quot;https://www.npmjs.com/package/tsconfig-paths&quot;&gt;tsconfig-paths&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;After compilation with &lt;code&gt;tsc&lt;/code&gt; you can use a dependency like &lt;code&gt;tsconfig-paths&lt;/code&gt; as a
loader to convert the import paths at runtime:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;node -r tsconfig-paths/register main.js
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Pros:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Use path aliases as supported by TypeScript.&lt;/li&gt;
&lt;li&gt;No duplicate configuration.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Cons:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Requires a dependency (e.g. &lt;code&gt;tsconfig-paths&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;Requires injection of a loader via command line or in code.&lt;/li&gt;
&lt;li&gt;Small(?) runtime performance hit.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Recommendations&lt;/h2&gt;
&lt;h3&gt;1. Relative paths&lt;/h3&gt;
&lt;p&gt;Your safest bet is to use no subpath imports or path aliases at all.&lt;/p&gt;
&lt;h3&gt;2. Subpath imports&lt;/h3&gt;
&lt;p&gt;Second best is to use only subpath imports (&lt;a href=&quot;#solution-option-1&quot;&gt;option 1&lt;/a&gt;), if supported by
other tooling in your project such as TypeScript, test runners and code linters.
The Node.js and Bun runtimes do support it.&lt;/p&gt;
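&lt;p&gt;As a reminder of what subpath imports look like, a minimal &lt;code&gt;imports&lt;/code&gt; field in
&lt;code&gt;package.json&lt;/code&gt; could be (the &lt;code&gt;#utils&lt;/code&gt; alias is just an example):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
  &amp;quot;imports&amp;quot;: {
    &amp;quot;#utils/*&amp;quot;: &amp;quot;./src/utils/*.js&amp;quot;
  }
}
&lt;/code&gt;&lt;/pre&gt;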
&lt;h3&gt;3. Path aliases + build time resolution&lt;/h3&gt;
&lt;p&gt;And if that&apos;s not an option yet, I&apos;d recommend using path aliases with build
time resolution (&lt;a href=&quot;#option-2-build-time-resolution&quot;&gt;option 2&lt;/a&gt;). This is fairly well supported across tooling
today. There&apos;s no runtime performance hit, and no risk of running the code in an
environment that lacks support.&lt;/p&gt;
&lt;p&gt;Check out the documentation of your tooling to see what&apos;s supported.&lt;/p&gt;
&lt;h2&gt;Closing Note&lt;/h2&gt;
&lt;p&gt;Subpath imports are perhaps less well known and less used today than TypeScript
path aliases, but they are likely to become even more of a standard in the
future. So subpath imports are generally recommended over path aliases going
forward, especially now that support in TypeScript v5.4 has fully caught up.&lt;/p&gt;
&lt;h2&gt;Resources&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://nodejs.org/api/packages.html#subpath-imports&quot;&gt;Node.js → subpath imports&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.typescriptlang.org/tsconfig#paths&quot;&gt;TS Config reference → paths&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.npmjs.com/package/tsc-alias&quot;&gt;tsc-alias&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.npmjs.com/package/tsconfig-paths&quot;&gt;tsconfig-paths&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
</description><pubDate>Sun, 25 Feb 2024 00:00:00 GMT</pubDate></item><item><title>The State of Benchmarking in Node.js</title><link>https://webpro.nl/articles/the-state-of-benchmarking-in-nodejs</link><guid isPermaLink="true">https://webpro.nl/articles/the-state-of-benchmarking-in-nodejs</guid><description>&lt;h1&gt;The State of Benchmarking in Node.js&lt;/h1&gt;
&lt;p&gt;Benchmarking becomes more important as we build more and more applications and
tooling for runtimes like Node.js and Bun. This article covers macro and micro
benchmarking and explores the options we can use today, including code examples
and a CodeSandbox to try out and adapt in your own applications.&lt;/p&gt;
&lt;h2&gt;Contents&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;#macro-benchmarking&quot;&gt;Macro benchmarking&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;#ecosystem&quot;&gt;Ecosystem&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;#example-performanceobserver-for-functions&quot;&gt;Example: PerformanceObserver for functions&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;#example-timerify-application-code&quot;&gt;Example: timerify application code&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;#micro-benchmarking&quot;&gt;Micro benchmarking&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;#ecosystem-1&quot;&gt;Ecosystem&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;#pitfalls&quot;&gt;Pitfalls&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;#example-string-concatenation&quot;&gt;Example: string concatenation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;#benchmarkjs&quot;&gt;Benchmark.js&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;#tinybench&quot;&gt;Tinybench&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;#a-cli-maybe&quot;&gt;A CLI, maybe?&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;#conclusion&quot;&gt;Conclusion&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Macro benchmarking&lt;/h2&gt;
&lt;p&gt;Benchmarking a part of your application code is an important scenario. While
running a real-world application, how many times are expensive functions called
and how much time is spent in each? These are essential metrics for any CPU
intensive code such as bundlers, compilers, linters, formatters, and so on.&lt;/p&gt;
&lt;p&gt;Not many of those tools use the &lt;a href=&quot;https://nodejs.org/docs/latest/api/perf_hooks.html&quot;&gt;node:perf_hooks&lt;/a&gt; module, even though much
of this native module has been available since Node.js v8.5.0 (released over 6
years ago). This includes &lt;code&gt;performance.now()&lt;/code&gt;, &lt;code&gt;performance.timerify()&lt;/code&gt; and
&lt;code&gt;PerformanceObserver&lt;/code&gt;, and the built-in module has been improved and extended
ever since. It allows you to integrate all sorts of performance timings right
into your application.&lt;/p&gt;
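&lt;p&gt;As a minimal illustration of these building blocks, timing a section of code
with &lt;code&gt;performance.now()&lt;/code&gt; takes just a few lines:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;import { performance } from &apos;node:perf_hooks&apos;;

const start = performance.now();
for (let i = 0; i &amp;lt; 1e6; i++) Math.sqrt(i);
const duration = performance.now() - start;
console.log(`Took ${duration.toFixed(2)}ms`);
&lt;/code&gt;&lt;/pre&gt;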
&lt;h3&gt;Ecosystem&lt;/h3&gt;
&lt;p&gt;There aren&apos;t that many libraries or runners on top of the Node.js built-ins. I
sure hope I&apos;m missing something, but here are some data points at the time of
writing:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;1 package found for &lt;a href=&quot;https://www.npmjs.com/search?q=timerify&quot;&gt;npmjs.com/search?q=timerify&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;4 packages found for &lt;a href=&quot;https://www.npmjs.com/search?q=PerformanceObserver&quot;&gt;npmjs.com/search?q=PerformanceObserver&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;32 hits for &lt;a href=&quot;https://www.npmjs.com/search?q=perf_hooks&quot;&gt;npmjs.com/search?q=perf_hooks&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;a few hits for &lt;a href=&quot;https://twitter.com/search?q=perf_hooks&quot;&gt;twitter.com/search?q=perf_hooks&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I think there&apos;s room for tooling in this space to reduce boilerplate and improve
accessibility. For instance, it would help a lot if we could import a utility to
wrap functions in any application, and render or return metrics about the
wrapped functions as needed.&lt;/p&gt;
&lt;h3&gt;Example: PerformanceObserver for functions&lt;/h3&gt;
&lt;p&gt;Let&apos;s look at an example which logs each recorded function invocation with a
&lt;code&gt;PerformanceObserver&lt;/code&gt; instance:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-ts&quot;&gt;const fnObserver = new PerformanceObserver(items =&amp;gt; {
  items.getEntries().forEach(entry =&amp;gt; {
    console.log(entry);
  });
  fnObserver.disconnect();
});
fnObserver.observe({ entryTypes: [&apos;function&apos;] });

function myFunctionUnderTest() {
  // Such intensive, very cpu, much wow
}

const wrapped = performance.timerify(myFunctionUnderTest);

wrapped();
wrapped();
wrapped();
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This will log three &lt;code&gt;PerformanceEntry&lt;/code&gt; objects and one of the properties is
&lt;code&gt;duration&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ node observer.mjs
PerformanceNodeEntry {
  name: &apos;myFunctionUnderTest&apos;,
  entryType: &apos;function&apos;,
  startTime: 20.211291000247,
  duration: 0.02987500000745058,
  detail: []
}
PerformanceNodeEntry {
  name: &apos;myFunctionUnderTest&apos;,
  entryType: &apos;function&apos;,
  startTime: 20.426791000179946,
  duration: 0.0017919996753335,
  detail: []
}
PerformanceNodeEntry {
  name: &apos;myFunctionUnderTest&apos;,
  entryType: &apos;function&apos;,
  startTime: 20.432208000682294,
  duration: 0.0006669992581009865,
  detail: []
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now we have a basis to record the number of calls to a function and the duration
of each call.&lt;/p&gt;
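&lt;p&gt;To turn those entries into aggregated metrics, a small reducer is enough. Here&apos;s
a sketch, where &lt;code&gt;entries&lt;/code&gt; is an array of the collected &lt;code&gt;PerformanceEntry&lt;/code&gt;
objects:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;const stats = new Map();
for (const { name, duration } of entries) {
  const s = stats.get(name) ?? { count: 0, total: 0 };
  s.count += 1;
  s.total += duration;
  stats.set(name, s);
}
// stats now maps each function name to its call count and total duration
&lt;/code&gt;&lt;/pre&gt;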
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: this article focuses on the &lt;code&gt;function&lt;/code&gt; performance entry type. Other
valid &lt;code&gt;entryTypes&lt;/code&gt; include &lt;code&gt;mark&lt;/code&gt;, &lt;code&gt;measure&lt;/code&gt;, &lt;code&gt;http&lt;/code&gt;, &lt;code&gt;net&lt;/code&gt; and &lt;code&gt;dns&lt;/code&gt;. See the
&lt;a href=&quot;https://nodejs.org/api/perf_hooks.html&quot;&gt;Node.js docs on perf_hooks&lt;/a&gt; for more details and examples.&lt;/p&gt;
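&lt;p&gt;For reference, the &lt;code&gt;mark&lt;/code&gt; and &lt;code&gt;measure&lt;/code&gt; entry types look like this (a
minimal sketch):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;import { performance } from &apos;node:perf_hooks&apos;;

performance.mark(&apos;start&apos;);
// ...the work to measure...
performance.mark(&apos;end&apos;);
performance.measure(&apos;my-work&apos;, &apos;start&apos;, &apos;end&apos;);

const [measure] = performance.getEntriesByName(&apos;my-work&apos;);
console.log(measure.duration);
&lt;/code&gt;&lt;/pre&gt;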
&lt;h3&gt;Example: timerify application code&lt;/h3&gt;
&lt;p&gt;Expanding on this idea, I was hoping there would be utilities to make this
easier and more accessible, but unfortunately I didn&apos;t find much.&lt;/p&gt;
&lt;p&gt;Since Node.js provides the building blocks, I created this &lt;a href=&quot;https://github.com/webpro/knip/blob/main/packages/knip/src/util/Performance.ts&quot;&gt;Performance.js class
in Knip&lt;/a&gt; last year. I&apos;ve been meaning to turn this into a separately published
module, but haven&apos;t gotten around to it.&lt;/p&gt;
&lt;p&gt;For this article I created a modified version to play around with. The code is
in &lt;a href=&quot;https://codesandbox.io/p/devbox/state-of-benchmarking-zct39y&quot;&gt;this CodeSandbox&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Run the demo from the terminal inside the sandbox:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;node index.js --performance
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Here&apos;s the gist of it, again with the &lt;code&gt;PerformanceObserver&lt;/code&gt; class and
&lt;code&gt;performance.timerify()&lt;/code&gt; function as the main building blocks:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;import { performance, PerformanceObserver } from &apos;node:perf_hooks&apos;;
import EasyTable from &apos;easy-table&apos;;

export const timerify = fn =&amp;gt; (isEnabled ? performance.timerify(fn) : fn);

export class Performance {
  constructor(isEnabled) {
    this.entries = [];
    if (isEnabled) {
      this.startTime = performance.now();

      this.fnObserver = new PerformanceObserver(items =&amp;gt; {
        items.getEntries().forEach(entry =&amp;gt; this.entries.push(entry));
      });
      this.fnObserver.observe({ entryTypes: [&apos;function&apos;] });
    }
  }

  getTable() {
    const table = new EasyTable();
    // ..build table from this.entries..
    return table.toString().trim();
  }

  getTotalTime() {
    return this.endTime - this.startTime;
  }

  finalize() {
    this.endTime = performance.now();
  }

  reset() {
    this.entries = [];
    this.fnObserver?.disconnect();
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And here&apos;s how to use it in any real-world application:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;import { setTimeout } from &apos;node:timers/promises&apos;;
import prettyMs from &apos;pretty-ms&apos;;

const fnA = setTimeout;
const fnB = setTimeout;

const wrappedA = timerify(fnA); // (1) Wrap functions
const wrappedB = timerify(fnB); // to get metrics when called

async function myApplication() {
  await Promise.all([wrappedA(100), wrappedA(200), wrappedA(300)]);
  await wrappedB(500);
}

// (2) Installs PerformanceObserver#observe({ entryTypes: [&apos;function&apos;] }) to observe functions
const perfObserver = new Performance(isEnabled);

await myApplication();

perfObserver.finalize();
console.log(perfObserver.getTable());
console.log(&apos;Total running time:&apos;, prettyMs(perfObserver.getTotalTime()));

perfObserver.reset();
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;After running this, here&apos;s some example output:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ node performance.mjs --performance
Name  size  min     max     median  sum
----  ----  ------  ------  ------  ------
fnA      3  101.18  300.59  200.70  602.47
fnB      1  502.24  502.24  502.24  502.24
Total running time: 804ms
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The functions are only wrapped when using the &lt;code&gt;--performance&lt;/code&gt; flag. Without the
flag, the functions are not wrapped and there is no overhead.&lt;/p&gt;
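&lt;p&gt;For completeness, the &lt;code&gt;isEnabled&lt;/code&gt; flag in these snippets can be derived from
the command line with &lt;code&gt;parseArgs&lt;/code&gt; from &lt;code&gt;node:util&lt;/code&gt; (a sketch):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;import { parseArgs } from &apos;node:util&apos;;

const { values } = parseArgs({
  options: { performance: { type: &apos;boolean&apos;, default: false } },
  strict: false,
});
const isEnabled = values.performance === true;
&lt;/code&gt;&lt;/pre&gt;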
&lt;h2&gt;Micro benchmarking&lt;/h2&gt;
&lt;p&gt;Benchmarking arbitrary code in isolation is important too. Sometimes you want to
benchmark and compare two or more ways to do the same thing. Paste some code,
let it ramble for a bit and see results. There are plenty of options available
to do this in a browser, but what about Node.js and other runtimes?&lt;/p&gt;
&lt;p&gt;We have &lt;code&gt;console.time()&lt;/code&gt; and &lt;code&gt;performance.now()&lt;/code&gt;, but there&apos;s some
boilerplate and ceremony involved to get results.&lt;/p&gt;
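&lt;p&gt;For instance, a quick comparison using only built-ins means hand-rolling the
loops and labels yourself:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;const strings = [&apos;aa&apos;, &apos;bb&apos;, &apos;cc&apos;];

console.time(&apos;join&apos;);
for (let i = 0; i &amp;lt; 1e6; i++) strings.join(&apos;&apos;);
console.timeEnd(&apos;join&apos;);

console.time(&apos;concat&apos;);
for (let i = 0; i &amp;lt; 1e6; i++) &apos;&apos;.concat(...strings);
console.timeEnd(&apos;concat&apos;);
&lt;/code&gt;&lt;/pre&gt;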
&lt;p&gt;And we shouldn&apos;t have to worry about things like process isolation, state resets
between runs, external conditions, turbulence, and aggregating numbers to yield
statistically significant results.&lt;/p&gt;
&lt;p&gt;For some more serious benchmarking, we&apos;ll need something better.&lt;/p&gt;
&lt;h3&gt;Ecosystem&lt;/h3&gt;
&lt;p&gt;Node.js was pretty close to having a built-in &lt;code&gt;node:benchmark&lt;/code&gt; module. In
November 2023, a pull request to &lt;a href=&quot;https://github.com/nodejs/node/pull/50768&quot;&gt;add an experimental &lt;code&gt;node:benchmark&lt;/code&gt;&lt;/a&gt;
module to Node.js core was opened. And closed, after an interesting debate.&lt;/p&gt;
&lt;p&gt;This leaves us with a diverse set of packages for micro benchmarking in Node.js:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/RafaelGSS/bench-node&quot;&gt;bench-node&lt;/a&gt; - the effort that led to this PR, currently in active
development, looking for feedback and ideas; aims to be the foundation of
&lt;code&gt;node:benchmark&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/bestiejs/benchmark.js&quot;&gt;Benchmark.js&lt;/a&gt; - still good and widely used&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/yamiteru/isitfast&quot;&gt;isitfast&lt;/a&gt; - not production-ready yet, but innovative and promising&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://sw.cowtech.it/cronometro&quot;&gt;cronometro&lt;/a&gt; - runs tests in isolated worker threads&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/tinylibs/tinybench&quot;&gt;Tinybench&lt;/a&gt; - also works in the browser (like Benchmark.js)&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/evanwashere/mitata&quot;&gt;mitata&lt;/a&gt; - fast and accurate (used by Bun and Deno)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If you need something production-ready today, Benchmark.js is a good choice.
It&apos;s battle-tested and versatile. However, its latest release was 6 years ago
and the repository has been archived (as of 2024-04-14).&lt;/p&gt;
&lt;p&gt;The other options are all worth checking out. Consult the &lt;a href=&quot;https://github.com/nodejs/node/pull/50768#issuecomment-1818004282&quot;&gt;overview table&lt;/a&gt;
and &lt;a href=&quot;https://github.com/H4ad/benchmarks-comparisons&quot;&gt;benchmarks-comparisons&lt;/a&gt; that &lt;a href=&quot;https://twitter.com/vinii_joga10&quot;&gt;Vinicius Lourenço&lt;/a&gt; put together for
more details.&lt;/p&gt;
&lt;p&gt;For the record, Deno has a built-in &lt;a href=&quot;https://docs.deno.com/runtime/manual/tools/benchmarker&quot;&gt;benchmark runner&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;A CLI for such tools would be great: have some code in a file and let a CLI tool
import and benchmark it. Much like the aforementioned tools, but with the API
moved from runtime to CLI.&lt;/p&gt;
&lt;h3&gt;Pitfalls&lt;/h3&gt;
&lt;p&gt;Before we continue, here&apos;s the mandatory warning not to forget about the
pitfalls of micro benchmarking:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Running code in isolation means missing real-world context and different
compiler optimizations. For various reasons, the same code may have different
performance characteristics when running in the context of a real-world
application.&lt;/li&gt;
&lt;li&gt;Micro benchmarking is often associated with premature optimization. Don&apos;t lose
sight of the big picture!&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Example: string concatenation&lt;/h3&gt;
&lt;p&gt;Let&apos;s look at an example. We want to know which function is the fastest, and by
how much. The following functions do the same thing:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-ts&quot;&gt;function join(strings) {
  return strings.join(&apos;&apos;);
}
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-ts&quot;&gt;function concat(strings) {
  return &apos;&apos;.concat(...strings);
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Benchmark.js&lt;/h3&gt;
&lt;p&gt;&lt;a href=&quot;https://github.com/bestiejs/benchmark.js&quot;&gt;Benchmark.js&lt;/a&gt; is great and battle-tested software, despite the fact that its
last publish was in early 2017, when it was tested on Node.js versions 10 and 11.&lt;/p&gt;
&lt;p&gt;Let&apos;s create a test suite to benchmark and compare three string concatenation
alternatives:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-ts&quot;&gt;import Benchmark from &apos;benchmark&apos;;

const strings = [&apos;aa&apos;, &apos;bb&apos;, &apos;cc&apos;, &apos;dd&apos;, &apos;ee&apos;, &apos;ff&apos;, &apos;gg&apos;, &apos;hh&apos;];

function plus(strings) {
  let result = &apos;&apos;;
  for (const str of strings) result += str;
  return result;
}

function join(strings) {
  return strings.join(&apos;&apos;);
}

function concat(strings) {
  return &apos;&apos;.concat(...strings);
}

const suite = new Benchmark.Suite();

suite
  .add(&apos;plus&apos;, function () {
    plus(strings);
  })
  .add(&apos;join&apos;, function () {
    join(strings);
  })
  .add(&apos;concat&apos;, function () {
    concat(strings);
  })
  .on(&apos;cycle&apos;, function (event) {
    console.log(String(event.target));
  })
  .on(&apos;complete&apos;, function () {
    console.log(&apos;Fastest is &apos; + this.filter(&apos;fastest&apos;).map(&apos;name&apos;));
  })
  .run();
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Running this on my machine gives the following output:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ node benchmark.mjs
plus x 20,171,223 ops/sec ±0.41% (94 runs sampled)
join x 10,288,969 ops/sec ±0.19% (101 runs sampled)
concat x 17,782,613 ops/sec ±0.18% (98 runs sampled)
Fastest is plus
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Clear output. All options are fast, but we have a winner.&lt;/p&gt;
&lt;h3&gt;Tinybench&lt;/h3&gt;
&lt;p&gt;Tinybench is the new kid on the block. You can use it stand-alone, and it also
comes &lt;a href=&quot;https://vitest.dev/guide/features.html#benchmarking-experimental&quot;&gt;shipped with Vitest&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The API of Tinybench is similar to Benchmark.js:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-ts&quot;&gt;import { Bench } from &apos;tinybench&apos;;

const suite = new Bench();

suite
  .add(&apos;plus&apos;, function () {
    plus(strings);
  })
  .add(&apos;join&apos;, function () {
    join(strings);
  })
  .add(&apos;concat&apos;, function () {
    concat(strings);
  });

suite.addEventListener(&apos;complete&apos;, function () {
  console.table(suite.table());
});

suite.run();
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Running this gives the following output:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ node tinybench.mjs
┌─────────┬───────────┬──────────────┬────────────────────┬──────────┬─────────┐
│ (index) │ Task Name │   ops/sec    │ Average Time (ns)  │  Margin  │ Samples │
├─────────┼───────────┼──────────────┼────────────────────┼──────────┼─────────┤
│    0    │  &apos;plus&apos;   │ &apos;13,188,219&apos; │  75.8252486995182  │ &apos;±0.61%&apos; │ 6594110 │
│    1    │  &apos;join&apos;   │ &apos;7,958,935&apos;  │ 125.64493939618565 │ &apos;±0.54%&apos; │ 3979468 │
│    2    │ &apos;concat&apos;  │ &apos;11,681,195&apos; │ 85.60767506752819  │ &apos;±0.91%&apos; │ 5840598 │
└─────────┴───────────┴──────────────┴────────────────────┴──────────┴─────────┘
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;A CLI, maybe?&lt;/h3&gt;
&lt;p&gt;Wouldn&apos;t it be convenient if we could just export our functions from a file:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-ts&quot;&gt;const strings = [&apos;aa&apos;, &apos;bb&apos;, &apos;cc&apos;, &apos;dd&apos;, &apos;ee&apos;, &apos;ff&apos;, &apos;gg&apos;, &apos;hh&apos;];

export function plus(strings) {
  let result = &apos;&apos;;
  for (const str of strings) result += str;
  return result;
}

export function join(strings) {
  return strings.join(&apos;&apos;);
}

export function concat(strings) {
  return &apos;&apos;.concat(...strings);
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And point our imaginary &lt;code&gt;bench&lt;/code&gt; CLI tool at this file:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ bench string-concat.js
plus x 20,171,223 ops/sec ±0.41% (94 runs sampled)
join x 10,288,969 ops/sec ±0.19% (101 runs sampled)
concat x 17,782,613 ops/sec ±0.18% (98 runs sampled)
Fastest is plus
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And, maybe, one day:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ node --bench string-concat.js
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;Although the building blocks are there, I think there&apos;s room for tooling to make
our lives easier, especially in the area of macro benchmarking.&lt;/p&gt;
&lt;p&gt;When it comes to micro benchmarking, it feels a bit odd to recommend a package
last updated in 2017 (Benchmark.js). Let&apos;s watch this space!&lt;/p&gt;
&lt;p&gt;This concludes my perspective on the current state of benchmarking in Node.js,
at the end of 2023. Do you agree?&lt;/p&gt;
</description><pubDate>Thu, 21 Dec 2023 00:00:00 GMT</pubDate></item><item><title>Versioned documentation with Starlight &amp; Vercel</title><link>https://webpro.nl/scraps/versioned-docs-with-starlight-and-vercel</link><guid isPermaLink="true">https://webpro.nl/scraps/versioned-docs-with-starlight-and-vercel</guid><description>&lt;h1&gt;Versioned documentation with Starlight &amp;amp; Vercel&lt;/h1&gt;
&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;Let&apos;s say we&apos;re working on a new major version of our project. This may come
with new features or even breaking changes. Users who haven&apos;t upgraded yet might
get confused reading documentation for the new version. This is why we want to
serve a separate version of the documentation along with each major version of
our project.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;UPDATE (2024-06):&lt;/strong&gt; The website in this example has been migrated from Vercel
to Netlify. They offer &lt;a href=&quot;https://docs.netlify.com/domains-https/custom-domains/automatic-deploy-subdomains/&quot;&gt;&amp;quot;branch deploys&amp;quot;&lt;/a&gt; so knip.dev has &lt;a href=&quot;https://v3.knip.dev&quot;&gt;v3.knip.dev&lt;/a&gt;
and &lt;a href=&quot;https://v4.knip.dev&quot;&gt;v4.knip.dev&lt;/a&gt; based on the same repository. Easy-peasy!&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;UPDATE (2025-03):&lt;/strong&gt; The website in this example no longer needs versioned
documentation. This might change in the future.&lt;/p&gt;
&lt;h2&gt;A Temporary Solution&lt;/h2&gt;
&lt;p&gt;Using Starlight and Vercel, this quick guide shows how I did it in my own
project: serve a separate version of the documentation at its own path. We&apos;re
assuming the documentation is deployed from the &lt;code&gt;main&lt;/code&gt; branch (still at v1) and
we&apos;re working in the &lt;code&gt;v2&lt;/code&gt; branch to prepare the next major release. We&apos;re going
to deploy this version branch and make it accessible at the &lt;code&gt;/v2&lt;/code&gt; path of a
domain we already own.&lt;/p&gt;
&lt;p&gt;This solution requires a separate Vercel project for each major version.&lt;/p&gt;
&lt;p&gt;Hopefully the Starlight team will deliver a &lt;a href=&quot;https://github.com/withastro/starlight/discussions/957&quot;&gt;built-in solution&lt;/a&gt;, but until
then this approach might work for you if you need it.&lt;/p&gt;
&lt;h2&gt;The Plan&lt;/h2&gt;
&lt;p&gt;This guide assumes we have our site (&lt;code&gt;example.org&lt;/code&gt;) running, and want to add the
next version at &lt;code&gt;example.org/v2&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;We start by setting up a new versioned project, make sure it works, and then
integrate the main project with it. It&apos;s a short story in three parts:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Set &lt;code&gt;base&lt;/code&gt; and &lt;code&gt;outDir&lt;/code&gt; options for Starlight in the versioned branch.&lt;/li&gt;
&lt;li&gt;Create a new project in Vercel and deploy this versioned branch.&lt;/li&gt;
&lt;li&gt;Configure &lt;code&gt;rewrites&lt;/code&gt; for Vercel in the &lt;code&gt;main&lt;/code&gt; branch.&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;Configure the versioned branch&lt;/h2&gt;
&lt;p&gt;Create or switch to the version branch, &lt;code&gt;v2&lt;/code&gt; in our example.&lt;/p&gt;
&lt;p&gt;In your Astro configuration, set both the &lt;code&gt;base&lt;/code&gt; and the &lt;code&gt;outDir&lt;/code&gt; to match the
version in the current branch:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-ts&quot;&gt;export default defineConfig({
  site: &apos;https://example.org&apos;,
  base: &apos;/v2&apos;,
  outDir: &apos;./dist/v2&apos;,
  trailingSlash: &apos;never&apos;,
});
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This example has trailing slashes removed, which makes sense with Vercel&apos;s
&lt;code&gt;cleanUrls&lt;/code&gt; option we&apos;ll set later on. The example site is using the default
&lt;code&gt;static&lt;/code&gt; output.&lt;/p&gt;
&lt;p&gt;Now is the time to verify a local &lt;code&gt;astro dev&lt;/code&gt; serves the site at the &lt;code&gt;/v2&lt;/code&gt; path
properly and links are working fine. Push your version branch to your Git
remote. Make sure not to merge these changes to the &lt;code&gt;main&lt;/code&gt; branch.&lt;/p&gt;
&lt;h2&gt;Create a new project in Vercel&lt;/h2&gt;
&lt;p&gt;Create a new project in the Vercel control panel and connect it with your Git
repository. It makes sense to name the project after the version, something like
&amp;quot;example-org-v2&amp;quot;.&lt;/p&gt;
&lt;p&gt;Go to &lt;em&gt;Settings → Git → Production Branch&lt;/em&gt; to set the version branch:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;./production-branch-name.png&quot; alt=&quot;production-branch-name&quot;&gt;&lt;/p&gt;
&lt;p&gt;In the screenshots we see &amp;quot;v4&amp;quot;, because I was at v3 in the &lt;code&gt;main&lt;/code&gt; branch.&lt;/p&gt;
&lt;p&gt;No need to buy or connect a new domain, versioned docs can be served from a free
Vercel subdomain such as &lt;code&gt;example-org-v2.vercel.app&lt;/code&gt;. No worries: users won&apos;t
see this, they will only see the main domain you&apos;ve already set up.&lt;/p&gt;
&lt;p&gt;Go to &lt;em&gt;Settings → Domains&lt;/em&gt;, edit and double-check it does not have redirects
configured and that the Git branch is &lt;code&gt;v2&lt;/code&gt;:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;./domains.png&quot; alt=&quot;domains&quot;&gt;&lt;/p&gt;
&lt;p&gt;Verify the site is deployed and working at the Vercel domain and version path
(our example would run at &lt;code&gt;https://example-org-v2.vercel.app/v2&lt;/code&gt;).&lt;/p&gt;
&lt;p&gt;Need to trigger a deployment? Go to &lt;em&gt;Settings → Git → Deploy Hooks&lt;/em&gt;:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;./hooks.png&quot; alt=&quot;hooks&quot;&gt;&lt;/p&gt;
&lt;p&gt;You can copy the link and use &lt;code&gt;curl&lt;/code&gt; in a shell to trigger a deployment:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;curl https://api.vercel.com/v1/integrations/deploy/prj_rKtw..
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Once your versioned site is running smoothly we&apos;re ready for the final step!&lt;/p&gt;
&lt;h2&gt;Configure rewrites for Vercel&lt;/h2&gt;
&lt;p&gt;Switch to the &lt;code&gt;main&lt;/code&gt; branch and create a &lt;code&gt;vercel.json&lt;/code&gt; file in the same folder
as &lt;code&gt;astro.config.mjs&lt;/code&gt; (if it isn&apos;t already there). Add two items to the
&lt;code&gt;rewrites&lt;/code&gt; array as follows:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
  &amp;quot;cleanUrls&amp;quot;: true,
  &amp;quot;rewrites&amp;quot;: [
    {
      &amp;quot;source&amp;quot;: &amp;quot;/v2/:path*&amp;quot;,
      &amp;quot;destination&amp;quot;: &amp;quot;https://example-org-v2.vercel.app/v2/:path*&amp;quot;
    },
    {
      &amp;quot;source&amp;quot;: &amp;quot;/v1/:path*&amp;quot;,
      &amp;quot;destination&amp;quot;: &amp;quot;/:path*&amp;quot;
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Modify the domain and the paths to match your situation. In this example, &lt;code&gt;v1&lt;/code&gt;
is still the default version. You can basically swap them around once &lt;code&gt;v2&lt;/code&gt; is
the new default in the main branch. You&apos;ll want to omit the &lt;code&gt;cleanUrls&lt;/code&gt; if you
prefer trailing slashes.&lt;/p&gt;
&lt;p&gt;Push this in the &lt;code&gt;main&lt;/code&gt; branch to the remote and wait for the deployment to
finish. Verify everything works as expected: &lt;code&gt;example.org&lt;/code&gt; , &lt;code&gt;example.org/v1&lt;/code&gt;
and &lt;code&gt;example.org/v2&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Awesome! 🎉&lt;/p&gt;
&lt;p&gt;We&apos;ve completed all steps and this process can be repeated for new versions in
the future.&lt;/p&gt;
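&lt;p&gt;For instance, once &lt;code&gt;v2&lt;/code&gt; is the new default in the &lt;code&gt;main&lt;/code&gt; branch and &lt;code&gt;v1&lt;/code&gt; lives
in its own Vercel project (the &lt;code&gt;example-org-v1&lt;/code&gt; name here is hypothetical), the
rewrites could be swapped around like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
  &amp;quot;cleanUrls&amp;quot;: true,
  &amp;quot;rewrites&amp;quot;: [
    {
      &amp;quot;source&amp;quot;: &amp;quot;/v1/:path*&amp;quot;,
      &amp;quot;destination&amp;quot;: &amp;quot;https://example-org-v1.vercel.app/v1/:path*&amp;quot;
    },
    {
      &amp;quot;source&amp;quot;: &amp;quot;/v2/:path*&amp;quot;,
      &amp;quot;destination&amp;quot;: &amp;quot;/:path*&amp;quot;
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;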
&lt;h2&gt;Navigation&lt;/h2&gt;
&lt;p&gt;What&apos;s left to do is show the user the current version of the documentation and
a way to navigate to a different version. There are multiple ways to do this, so
it is left as an exercise for the reader.&lt;/p&gt;
&lt;p&gt;Feel free to check out my basic solution using a dropdown in &lt;a href=&quot;https://github.com/webpro/knip/blob/4dec0e2dce4870557f43783e6e071dd07721ee03/packages/docs/src/components/Header.astro#L8-L18&quot;&gt;a custom
&lt;code&gt;Header.astro&lt;/code&gt; component&lt;/a&gt; with &lt;a href=&quot;https://github.com/webpro/knip/blob/main/packages/docs/config.ts&quot;&gt;minimal configuration&lt;/a&gt; to keep track of
available versions. The repo contains the &lt;code&gt;astro.config.ts&lt;/code&gt; and &lt;code&gt;vercel.json&lt;/code&gt;
configuration files in the &lt;code&gt;main&lt;/code&gt; branch too. This solution is currently running
at &lt;a href=&quot;https://knip.dev&quot;&gt;https://knip.dev&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Good luck and have a great day!&lt;/p&gt;
</description><pubDate>Wed, 20 Dec 2023 00:00:00 GMT</pubDate></item><item><title>How to use a compiled bin in a TypeScript monorepo with pnpm</title><link>https://webpro.nl/scraps/compiled-bin-in-typescript-monorepo</link><guid isPermaLink="true">https://webpro.nl/scraps/compiled-bin-in-typescript-monorepo</guid><description>&lt;h1&gt;How to use a compiled bin in a TypeScript monorepo with pnpm&lt;/h1&gt;
&lt;p&gt;Today&apos;s scrap has a very long title and is about pnpm workspaces that contain a
compiled executable in a TypeScript monorepo.&lt;/p&gt;
&lt;h2&gt;Problem&lt;/h2&gt;
&lt;p&gt;When running &lt;code&gt;pnpm install&lt;/code&gt; in a monorepo, the local &lt;code&gt;bin&lt;/code&gt; file of a workspace
may not exist yet. This happens when that file needs to be generated first (e.g.
when using TypeScript). In that case &lt;code&gt;pnpm&lt;/code&gt; is unable to link the missing file,
which also results in errors when trying to execute the &lt;code&gt;bin&lt;/code&gt; from another workspace.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;tl;dr:&lt;/em&gt; Make sure the referenced file in the &lt;code&gt;bin&lt;/code&gt; field of &lt;code&gt;package.json&lt;/code&gt;
exists, and import the generated file from there.&lt;/p&gt;
&lt;h2&gt;Solution&lt;/h2&gt;
&lt;p&gt;So how to safely use a compiled bin? Let&apos;s assume this situation:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The entry script for the CLI tool is at &lt;code&gt;src/cli.ts&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;This source file is compiled to &lt;code&gt;lib/cli.js&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Compiled by the &lt;code&gt;build&lt;/code&gt; script that runs &lt;code&gt;tsc&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Here are some relevant bits in the &lt;code&gt;package.json&lt;/code&gt; file of the workspace that
wants to expose the &lt;code&gt;bin&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
  &amp;quot;name&amp;quot;: &amp;quot;@org/my-cli-tool&amp;quot;,
  &amp;quot;bin&amp;quot;: {
    &amp;quot;my-command&amp;quot;: &amp;quot;bin/my-command.js&amp;quot;
  },
  &amp;quot;scripts&amp;quot;: {
    &amp;quot;build&amp;quot;: &amp;quot;tsc&amp;quot;,
    &amp;quot;prepack&amp;quot;: &amp;quot;pnpm run build&amp;quot;
  },
  &amp;quot;files&amp;quot;: [&amp;quot;bin&amp;quot;, &amp;quot;lib&amp;quot;]
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To publish as ESM, set &lt;code&gt;&amp;quot;type&amp;quot;: &amp;quot;module&amp;quot;&lt;/code&gt; in &lt;code&gt;package.json&lt;/code&gt;. Import the
generated file from &lt;code&gt;bin/my-command.js&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;#!/usr/bin/env node
import &apos;../lib/cli.js&apos;;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Publishing as CommonJS? Then use &lt;code&gt;require&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;#!/usr/bin/env node
require(&apos;../lib/cli.js&apos;);
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Make sure to include the shebang (that first line starting with &lt;code&gt;#!&lt;/code&gt;), or
consumers of your package will see errors like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;bin/my-command: line 1: syntax error near unexpected token `&apos;../lib/cli.js&apos;&apos;
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Publishing&lt;/h2&gt;
&lt;p&gt;If the package is meant to be published, use the &lt;code&gt;prepack&lt;/code&gt; (or
&lt;code&gt;prepublishOnly&lt;/code&gt;) script and make sure to include both the &lt;code&gt;bin&lt;/code&gt; and &lt;code&gt;lib&lt;/code&gt;
folders in the &lt;code&gt;files&lt;/code&gt; field (like in the example above).&lt;/p&gt;
&lt;h2&gt;A note about &lt;code&gt;postinstall&lt;/code&gt; scripts&lt;/h2&gt;
&lt;p&gt;Using a &lt;code&gt;postinstall&lt;/code&gt; script to create the file works since &lt;a href=&quot;https://github.com/pnpm/pnpm/releases/tag/v8.6.6&quot;&gt;pnpm v8.6.6&lt;/a&gt;,
but &lt;code&gt;postinstall&lt;/code&gt; scripts should be avoided when possible:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Can perform malicious acts (security scanners don&apos;t like them)&lt;/li&gt;
&lt;li&gt;Can be disabled by the consumer using &lt;code&gt;--ignore-scripts&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Can be disabled if the consumer uses &lt;code&gt;pnpm.onlyBuiltDependencies&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Bun does not execute arbitrary lifecycle scripts for installed dependencies.&lt;/p&gt;
&lt;p&gt;That&apos;s why this little guide doesn&apos;t promote the &lt;code&gt;postinstall&lt;/code&gt; approach, and
why this scrap got longer than I wanted!&lt;/p&gt;
&lt;h2&gt;Additional notes&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;This scrap is based on &lt;a href=&quot;https://github.com/pnpm/pnpm/issues/1801#issuecomment-798423695&quot;&gt;this GitHub comment&lt;/a&gt; in the pnpm repository.&lt;/li&gt;
&lt;li&gt;I&apos;ve seen and tried workarounds to (&lt;code&gt;mkdir&lt;/code&gt; and) &lt;code&gt;touch&lt;/code&gt; the file from
&lt;code&gt;postinstall&lt;/code&gt; scripts, but that&apos;s flaky at best and not portable.&lt;/li&gt;
&lt;li&gt;The same issue might occur when using npm, Bun and/or Yarn. Whether it does or
not, it&apos;s better to be safe than sorry.&lt;/li&gt;
&lt;li&gt;If you are using only JavaScript (or JavaScript with TypeScript in JSDoc) then
you can target the &lt;code&gt;src/cli.js&lt;/code&gt; file directly from the &lt;code&gt;bin&lt;/code&gt; field.&lt;/li&gt;
&lt;/ul&gt;
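For that JavaScript-only case, the relevant bit of &lt;code&gt;package.json&lt;/code&gt; could look like this (a sketch, assuming the same layout and names as above, and a shebang at the top of &lt;code&gt;src/cli.js&lt;/code&gt;):

```json
{
  "name": "@org/my-cli-tool",
  "bin": {
    "my-command": "src/cli.js"
  }
}
```

Since the file exists at install time, pnpm can link it right away and no intermediate `bin/` wrapper is needed.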
</description><pubDate>Thu, 14 Sep 2023 00:00:00 GMT</pubDate><category>compiled</category><category>bin</category><category>monorepo</category><category>pnpm</category><category>npm</category><category>bun</category><category>yarn</category><category>typescript</category><category>postinstall</category></item><item><title>Using OpenAI with JavaScript</title><link>https://webpro.nl/articles/using-openai-with-javascript</link><guid isPermaLink="true">https://webpro.nl/articles/using-openai-with-javascript</guid><description>&lt;h1&gt;Using OpenAI with JavaScript&lt;/h1&gt;
&lt;p&gt;When trying to find my way around in the buzzing lands of OpenAI and vector
databases, the dots were not always easy to connect. In this guide I&apos;m sharing
what I&apos;ve learned during my journey to make yours even better. You might find a
trick or a treat!&lt;/p&gt;
&lt;p&gt;Most OpenAI tooling and examples are based on Python, but this guide uses
JavaScript exclusively.&lt;/p&gt;
&lt;p&gt;We&apos;ll begin with a brief explanation of some core concepts, before diving into
more and more code. Towards the end we&apos;ll discuss some strategies for token
management and maintaining a conversation.&lt;/p&gt;
&lt;h2&gt;Overview&lt;/h2&gt;
&lt;p&gt;Here are the topics we will be discussing:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;#openai-endpoints&quot;&gt;OpenAI endpoints&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;#key-concepts&quot;&gt;Key concepts&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;#ingestion&quot;&gt;Ingestion&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;#query&quot;&gt;Query&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;#user-interface&quot;&gt;User Interface&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;#conversation&quot;&gt;Conversation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;#tokens&quot;&gt;Tokens&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;#parameters&quot;&gt;Parameters&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;#markdown--code-blocks&quot;&gt;Markdown &amp;amp; code blocks&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;#next-steps&quot;&gt;Next steps&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;#closing-remarks&quot;&gt;Closing remarks&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;OpenAI endpoints&lt;/h2&gt;
&lt;p&gt;In this guide, we will work with two OpenAI REST endpoints.&lt;/p&gt;
&lt;h3&gt;Chat Completions&lt;/h3&gt;
&lt;pre&gt;&lt;code&gt;POST https://api.openai.com/v1/chat/completions
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;a href=&quot;https://platform.openai.com/docs/api-reference/chat/create&quot;&gt;Create chat completion&lt;/a&gt; endpoint generates a human-like text completion
for a provided prompt. We&apos;ll use it to start and keep the conversation going
between the end-user and OpenAI&apos;s Large Language Models (LLMs) such as GPT-3.5
and GPT-4.&lt;/p&gt;
&lt;h3&gt;Create Embeddings&lt;/h3&gt;
&lt;pre&gt;&lt;code&gt;POST https://api.openai.com/v1/embeddings
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;With the &lt;a href=&quot;https://platform.openai.com/docs/api-reference/embeddings&quot;&gt;embeddings&lt;/a&gt; endpoint, we can create embeddings from plain text. We
will use these embeddings to store and query a vector database. Embeddings?
Vector database? No worries, we have you covered.&lt;/p&gt;
&lt;h3&gt;The &lt;code&gt;openai&lt;/code&gt; package&lt;/h3&gt;
&lt;p&gt;We&apos;re going to use these endpoints directly, and not &lt;a href=&quot;https://www.npmjs.com/package/openai&quot;&gt;OpenAI&apos;s npm package&lt;/a&gt;.
This package targets Node.js, but eventually you might want to deploy your own
endpoint on an environment without Node.js, such as a serverless or edge
platform like Cloudflare Workers, Netlify Edge or Deno. Now that &lt;code&gt;fetch&lt;/code&gt; is
ubiquitous, I think the REST APIs are just as easy to use without any
dependencies. I like being &amp;quot;closer to the metal&amp;quot; and staying flexible.&lt;/p&gt;
&lt;h2&gt;Key concepts&lt;/h2&gt;
&lt;p&gt;We&apos;ve already introduced a few concepts that may be new to you. Let&apos;s discuss
&lt;a href=&quot;#embeddings&quot;&gt;embeddings&lt;/a&gt;, &lt;a href=&quot;#vector-databases&quot;&gt;vector databases&lt;/a&gt; and &lt;a href=&quot;#prompts&quot;&gt;prompts&lt;/a&gt; briefly before diving
into any code.&lt;/p&gt;
&lt;p&gt;If you&apos;re familiar with them, feel free to skip straight to &lt;a href=&quot;#ingestion&quot;&gt;ingestion&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;Embeddings&lt;/h3&gt;
&lt;p&gt;Vector embeddings are numerical representations of textual data in a
high-dimensional space. They are generated using large language models (LLMs).
Embeddings allow for efficient storage and search of content that is
semantically related to a user&apos;s query. Semantically similar text is mapped
close together in the vector space, and we can find relevant content using a
vector embedding created from user input.&lt;/p&gt;
&lt;p&gt;For comparison, a lexical or &amp;quot;full text&amp;quot; search looks for literal matches of the
query words and phrases, without understanding the overall meaning of the query.&lt;/p&gt;
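To make &amp;quot;close together in the vector space&amp;quot; concrete, here is a minimal sketch comparing vectors with cosine similarity, the distance metric most vector databases use by default. The three-dimensional vectors are made up for illustration; real embeddings from &lt;code&gt;text-embedding-ada-002&lt;/code&gt; have 1536 dimensions.

```javascript
// Cosine similarity: 1 means pointing the same way (semantically similar),
// around 0 means unrelated.
const cosineSimilarity = (a, b) => {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
};

// Tiny made-up "embeddings" for illustration only
const cat = [0.9, 0.1, 0.0];
const kitten = [0.85, 0.15, 0.05];
const invoice = [0.0, 0.2, 0.95];

console.log(cosineSimilarity(cat, kitten)); // close to 1
console.log(cosineSimilarity(cat, invoice)); // close to 0
```

A semantic search is essentially this comparison between the query vector and every stored vector, done efficiently by the database.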
&lt;h3&gt;Vector databases&lt;/h3&gt;
&lt;p&gt;Why do we need a vector database? Can&apos;t we just query OpenAI and get a response?&lt;/p&gt;
&lt;p&gt;Yes, we can use the &lt;a href=&quot;https://chat.openai.com&quot;&gt;ChatGPT UI&lt;/a&gt; or even the OpenAI chat completions
endpoint directly. However, the response will be limited to what the OpenAI
models are trained on. The response may not be up-to-date, accurate, or specific
enough for your needs.&lt;/p&gt;
&lt;p&gt;What if you want to have OpenAI generate responses based solely on your own
domain-specific content? For users to &amp;quot;chat with your content&amp;quot;. Sounds
interesting! But how to go about this?&lt;/p&gt;
&lt;p&gt;Unlike ChatGPT, the OpenAI APIs do not store any of your content, nor any state
or session for the conversation(s). This is where vector databases come in.
Adding a vector database to the mix has interesting advantages:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Store and maintain domain-specific knowledge.&lt;/li&gt;
&lt;li&gt;Support semantic search across your content.&lt;/li&gt;
&lt;li&gt;Control your own data and keep it up-to-date and relevant.&lt;/li&gt;
&lt;li&gt;Reduce the number of calls to OpenAI.&lt;/li&gt;
&lt;li&gt;Store the user&apos;s conversational history.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Setting up a vector database might be easier than you think. I&apos;ve been trying
out managed solutions like &lt;a href=&quot;https://www.pinecone.io&quot;&gt;Pinecone&lt;/a&gt; and &lt;a href=&quot;https://supabase.com&quot;&gt;Supabase&lt;/a&gt; without any issues.
There are more options though, and I don&apos;t feel like I&apos;m in a position to
recommend one over another. I do like that I can use Pinecone without
dependencies using only &lt;code&gt;fetch&lt;/code&gt; and their REST API.&lt;/p&gt;
&lt;h3&gt;Prompts&lt;/h3&gt;
&lt;p&gt;A prompt is the textual input we send to the chat completions endpoint to have
it generate a relevant &amp;quot;completion&amp;quot;. You could say a prompt is a question, and a
completion is an answer.&lt;/p&gt;
&lt;p&gt;Prompts are plain text and we can provide extra details and information to
improve the results. The more context we provide, the better the response will
be.&lt;/p&gt;
&lt;p&gt;Requests to the chat completions endpoint are essentially stateless: no stored
content, no session, no state. The challenge is to optimize and include the
right information with each request. We&apos;ll be discussing prompts throughout this
guide, and ways to optimize them.&lt;/p&gt;
&lt;h2&gt;Ingestion&lt;/h2&gt;
&lt;p&gt;Armed with this knowledge, let&apos;s begin building a chat application with a vector
database.&lt;/p&gt;
&lt;p&gt;We&apos;ll need to get content into this database. Content is stored as vector
embeddings, and we can create those from textual content by using the
&lt;a href=&quot;#create-embeddings&quot;&gt;embeddings endpoint&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;Metadata&lt;/h3&gt;
&lt;p&gt;Before creating the database table or index, it&apos;s important to consider what we
will do with the results of semantic search queries.&lt;/p&gt;
&lt;p&gt;Vector embeddings are a compressed representation of semantics for efficient
storage and querying. It&apos;s not possible to translate them back to the original
text. This is the reason we need to store the original text along with the
embeddings in the database.&lt;/p&gt;
&lt;p&gt;The text can be stored as metadata and can include more useful things to display
in the application, such as document or section titles and URL&apos;s to link back to
the original source.&lt;/p&gt;
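As a sketch, a single stored record could look like this. The field names (&lt;code&gt;id&lt;/code&gt;, &lt;code&gt;values&lt;/code&gt;, &lt;code&gt;metadata&lt;/code&gt;) follow Pinecone&apos;s upsert format; the &lt;code&gt;content&lt;/code&gt;, &lt;code&gt;title&lt;/code&gt; and &lt;code&gt;url&lt;/code&gt; metadata fields are illustrative choices, not prescribed.

```javascript
// A single record to upsert into the vector database. The `values` array is
// the embedding itself; `metadata` carries everything we want back at query
// time, including the original text the embedding was created from.
const buildRecord = ({ id, embedding, content, title, url }) => ({
  id,
  values: embedding,
  metadata: { content, title, url },
});

const record = buildRecord({
  id: 'docs/embeddings#intro',
  embedding: [0.12, -0.04, 0.33], // real embeddings have e.g. 1536 dimensions
  content: 'Vector embeddings are numerical representations of textual data...',
  title: 'Embeddings',
  url: 'https://example.com/docs/embeddings',
});
```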
&lt;h3&gt;Tools&lt;/h3&gt;
&lt;p&gt;There are tools that can help with this. I have seen a few solutions that offer
easy content ingestion, but they don&apos;t give you much freedom, such as choosing
where the content will be stored:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://markprompt.com&quot;&gt;Markprompt&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.chaindesk.ai&quot;&gt;Chaindesk&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/imartinez/privateGPT&quot;&gt;privateGPT&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.kapa.ai&quot;&gt;kapa.ai&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.justbonfire.com&quot;&gt;Bonfire&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;7-docs&lt;/h3&gt;
&lt;p&gt;As I wanted to start out with command-line tools and learn more about the OpenAI
APIs, embeddings and vector databases, I decided to develop a tool myself.&lt;/p&gt;
&lt;p&gt;This work ended up as &lt;a href=&quot;https://github.com/7-docs/7-docs&quot;&gt;7-docs&lt;/a&gt; and comes with the &lt;code&gt;7d&lt;/code&gt; command-line tool to
ingest content from plain text, Markdown and PDF files into a vector database.
It ingests content from local files, GitHub repositories and also HTML from
public websites. Currently it supports &amp;quot;upserting&amp;quot; vectors into Pinecone indexes
and Supabase tables.&lt;/p&gt;
&lt;p&gt;To get an idea what ingestion using &lt;code&gt;7d&lt;/code&gt; looks like, here are some examples that
demonstrate how to ingest Markdown files:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;7d ingest --files &apos;*.md&apos; --db pinecone --namespace my-docs
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;7d ingest --source github --repo reactjs/react.dev \
  --files &apos;src/content/reference/react/*.md&apos; \
  --db supabase \
  --namespace react
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Query&lt;/h2&gt;
&lt;p&gt;When the embeddings and metadata are in the database, we can query it. We&apos;ll
look at some example code to implement this 4-step strategy:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Create a vector embedding from the user&apos;s textual input.&lt;/li&gt;
&lt;li&gt;Query the database with this vector for related chunks of content.&lt;/li&gt;
&lt;li&gt;Build the prompt from the search results and the user&apos;s input.&lt;/li&gt;
&lt;li&gt;Ask the model to generate a chat completion based on this prompt.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The next examples show working code, but contain no error handling or
optimizations. Just plain JavaScript without dependencies.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;(Don&apos;t want to implement this yourself, or just want to see examples? Visit
&lt;a href=&quot;https://github.com/7-docs&quot;&gt;7-docs&lt;/a&gt; for available demos and starterkits to hit the ground running.)&lt;/em&gt;&lt;/p&gt;
&lt;h3&gt;1. Create a vector embedding&lt;/h3&gt;
&lt;p&gt;The first function we&apos;ll need creates a vector embedding based on the user&apos;s
input:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;export const createEmbeddings = async ({ token, model, input }) =&amp;gt; {
  const response = await fetch(&apos;https://api.openai.com/v1/embeddings&apos;, {
    headers: {
      &apos;Content-Type&apos;: &apos;application/json&apos;,
      Authorization: `Bearer ${token}`,
    },
    method: &apos;POST&apos;,
    body: JSON.stringify({ input, model }),
  });

  const { data } = await response.json();

  // The response contains one embedding object per input; we send a single
  // string, so return the first (and only) embedding vector.
  return data[0].embedding;
};
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This function can be called like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;const vector = await createEmbeddings({
  token: &apos;[OPENAI_API_TOKEN]&apos;,
  model: &apos;text-embedding-ada-002&apos;,
  input: &apos;What is an embedding?&apos;,
});
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;2. Query the database&lt;/h3&gt;
&lt;p&gt;In the second step we are going to query the database with the &lt;code&gt;vector&lt;/code&gt;
embedding we just created. Below is an example that queries a Pinecone index for
vectors with related content using &lt;code&gt;fetch&lt;/code&gt;. The rows returned from this query
are mapped to the metadata that&apos;s stored with the vector in the same row. We
need this metadata in the next step.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;export const query = async ({ token, vector, namespace }) =&amp;gt; {
  const response = await fetch(&apos;https://[my-index].pinecone.io/query&apos;, {
    headers: {
      &apos;Content-Type&apos;: &apos;application/json&apos;,
      &apos;Api-Key&apos;: token,
    },
    method: &apos;POST&apos;,
    body: JSON.stringify({
      vector,
      namespace,
      topK: 10,
      includeMetadata: true,
    }),
  });

  const data = await response.json();
  return data.matches.map(match =&amp;gt; match.metadata);
};
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This &lt;code&gt;query&lt;/code&gt; function can be invoked with the &lt;code&gt;vector&lt;/code&gt; we received from
&lt;code&gt;createEmbeddings()&lt;/code&gt; like so:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;const metadata = await query({
  token: &apos;[PINECONE_API_KEY]&apos;,
  vector: vector, //  Here&apos;s the vector we received from `createEmbeddings()`
  namespace: &apos;my-knowledge-base&apos;,
});
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;3. Build the prompt&lt;/h3&gt;
&lt;p&gt;The third step builds the prompt. There are multiple ways to go about this and
the content of the template probably requires customization on your end, but
here is an example:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;const template = `Answer the question as truthfully and accurately as possible using the provided context.
If the answer is not contained within the text below, say &amp;quot;Sorry, I don&apos;t have that information.&amp;quot;.

Context: {CONTEXT}

Question: {QUERY}

Answer: `;

const getPrompt = (context, query) =&amp;gt; {
  return template.replace(&apos;{CONTEXT}&apos;, context).replace(&apos;{QUERY}&apos;, query);
};
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And here is how we can create the prompt with context from the &lt;code&gt;metadata&lt;/code&gt;
returned from the database query:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;// Create a concatenated string from search results metadata
const context = metadata.map(metadata =&amp;gt; metadata.content).join(&apos; &apos;);

// Build the complete prompt including the context and the question
const prompt = getPrompt(context, &apos;What is an embedding?&apos;);
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Later in this guide, we will also look at example code to &lt;a href=&quot;#conversation&quot;&gt;maintain a
conversation&lt;/a&gt; instead of merely asking one-shot questions.&lt;/p&gt;
&lt;h3&gt;4. Generate chat completion&lt;/h3&gt;
&lt;p&gt;We are ready for the last step: ask the model for a chat completion with our
prompt. Here&apos;s an example function to call this endpoint:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;export const chatCompletions = async ({ token, body }) =&amp;gt; {
  const response = await fetch(&apos;https://api.openai.com/v1/chat/completions&apos;, {
    method: &apos;POST&apos;,
    headers: {
      Authorization: `Bearer ${token}`,
      &apos;Content-Type&apos;: &apos;application/json&apos;,
    },
    body: JSON.stringify(body),
  });

  return response;
};
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And here&apos;s how to make the request with the prompt:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;const messages = [];
messages.push({
  role: &apos;user&apos;,
  content: prompt, // This is the `prompt` we received from `getPrompt()`
});

const response = await chatCompletions({
  token: &apos;[OPENAI_API_TOKEN]&apos;,
  body: {
    model: &apos;gpt-3.5-turbo&apos;,
    messages,
  },
});

const data = await response.json();
const text = data.choices[0].message.content;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;text&lt;/code&gt; contains the human-readable answer from OpenAI.&lt;/p&gt;
&lt;p&gt;Excellent, this is the essence of generating chat completions based on your own
vector database. Now, how do we combine these four steps and integrate them into
a user interface? You can create a function that abstracts this away, or use the
&lt;a href=&quot;https://www.npmjs.com/package/@7-docs/edge&quot;&gt;@7-docs/edge&lt;/a&gt; package to do this for you. Keep reading to see an example.&lt;/p&gt;
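As a sketch of such an abstraction, the four step functions from this section can be composed into a single &lt;code&gt;ask&lt;/code&gt; helper. The names match the earlier examples, but the wiring (passing the step functions and configuration in explicitly) is an illustrative choice, not the &lt;code&gt;@7-docs/edge&lt;/code&gt; API.

```javascript
// Compose the 4-step strategy into a single `ask` function. The step
// functions and configuration are passed in, so this sketch works with the
// `createEmbeddings`, `query`, `getPrompt` and `chatCompletions`
// implementations shown earlier.
const createAsk = ({ createEmbeddings, query, getPrompt, chatCompletions, config }) => {
  return async input => {
    // 1. Create a vector embedding from the user's input
    const vector = await createEmbeddings({
      token: config.openaiToken,
      model: config.embeddingModel,
      input,
    });

    // 2. Query the database for related chunks of content
    const metadata = await query({ token: config.dbToken, vector, namespace: config.namespace });

    // 3. Build the prompt from the search results and the user's input
    const context = metadata.map(m => m.content).join(' ');
    const prompt = getPrompt(context, input);

    // 4. Ask the model to generate a chat completion
    const response = await chatCompletions({
      token: config.openaiToken,
      body: { model: config.completionModel, messages: [{ role: 'user', content: prompt }] },
    });
    const data = await response.json();
    return data.choices[0].message.content;
  };
};
```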
&lt;p&gt;In the next part of this guide, we will explore a UI component featuring a basic
form for users to submit their queries. This component will also render the
streaming response generated by the &lt;a href=&quot;#function&quot;&gt;function&lt;/a&gt; in the next section.&lt;/p&gt;
&lt;h2&gt;User Interface&lt;/h2&gt;
&lt;p&gt;Let&apos;s put our 4-step strategy into action and build &lt;a href=&quot;#function&quot;&gt;function&lt;/a&gt; and
&lt;a href=&quot;#form&quot;&gt;form&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;(Don&apos;t want to implement this yourself, or just want to see examples? Visit
&lt;a href=&quot;https://github.com/7-docs&quot;&gt;7-docs&lt;/a&gt; for available demos and starterkits to hit the ground running.)&lt;/em&gt;&lt;/p&gt;
&lt;h3&gt;Function&lt;/h3&gt;
&lt;p&gt;The &lt;code&gt;/api/completion&lt;/code&gt; endpoint will listen to incoming requests and respond
using all of the query logic from the previous section.&lt;/p&gt;
&lt;p&gt;We&apos;re going to use the &lt;code&gt;@7-docs/edge&lt;/code&gt; package, which abstracts away the 4-step
strategy and some boring boilerplate. We need to pass the &lt;code&gt;OPENAI_API_KEY&lt;/code&gt; and a
&lt;code&gt;query&lt;/code&gt; function from a database adapter, Pinecone in this example. We pass it
to &lt;code&gt;getCompletionHandler&lt;/code&gt; so it can query the database when it needs to. We
would pass a different function if we wanted to use a different type of
database (like Supabase or Milvus).&lt;/p&gt;
&lt;p&gt;Let&apos;s bring this together in a serverless or edge function handler in just a few
lines of code:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;import { getCompletionHandler, pinecone } from &apos;@7-docs/edge&apos;;

const OPENAI_API_KEY = process.env.OPENAI_API_KEY;
const PINECONE_URL = process.env.PINECONE_URL;
const PINECONE_API_KEY = process.env.PINECONE_API_KEY;
const namespace = &apos;my-knowledge-base&apos;;

const query = vector =&amp;gt;
  pinecone.query({
    url: PINECONE_URL,
    token: PINECONE_API_KEY,
    vector,
    namespace,
  });

export default getCompletionHandler({ OPENAI_API_KEY, query });
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This pattern can be used anywhere from traditional servers to edge functions,
since there are no dependencies on modules only available in Node.js.&lt;/p&gt;
&lt;h3&gt;Form&lt;/h3&gt;
&lt;p&gt;Now we still need a UI component to render an input field, send the input to the
&lt;code&gt;/api/completion&lt;/code&gt; endpoint, and render the streaming response.&lt;/p&gt;
&lt;p&gt;This minimal example uses a little React and JSX for an easy read, but it could
just as well be plain JavaScript or any other framework.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-jsx&quot;&gt;import { useState } from &apos;react&apos;;

export default function Page() {
  const [query, setQuery] = useState(&apos;&apos;);
  const [outputStream, setOutputStream] = useState(&apos;&apos;);

  function startStream(query) {
    const searchParams = new URLSearchParams();
    searchParams.set(&apos;query&apos;, query); // URLSearchParams handles the encoding
    searchParams.set(&apos;embedding_model&apos;, &apos;text-embedding-ada-002&apos;);
    searchParams.set(&apos;completion_model&apos;, &apos;gpt-3.5-turbo&apos;);
    const url = &apos;/api/completion?&apos; + searchParams.toString();

    const source = new EventSource(url);
    source.addEventListener(&apos;message&apos;, event =&amp;gt; {
      if (event.data.trim() === &apos;[DONE]&apos;) {
        source.close();
      } else {
        const data = JSON.parse(event.data);
        const text = data.choices[0].delta.content;
        if (text) setOutputStream(v =&amp;gt; v + text);
      }
    });
  }

  const onSubmit = event =&amp;gt; {
    if (event) event.preventDefault();
    startStream(query);
  };

  return (
    &amp;lt;&amp;gt;
      &amp;lt;form onSubmit={onSubmit}&amp;gt;
        &amp;lt;label&amp;gt;
          How can I help you?
          &amp;lt;input
            type=&amp;quot;search&amp;quot;
            value={query}
            onChange={event =&amp;gt; setQuery(event.target.value)}
          /&amp;gt;
        &amp;lt;/label&amp;gt;
        &amp;lt;input type=&amp;quot;submit&amp;quot; value=&amp;quot;Send&amp;quot; /&amp;gt;
      &amp;lt;/form&amp;gt;

      &amp;lt;div&amp;gt;{outputStream}&amp;lt;/div&amp;gt;
    &amp;lt;/&amp;gt;
  );
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now all the components of a &amp;quot;chat with your content&amp;quot; application have come together:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Ingest content as vector embeddings into a database&lt;/li&gt;
&lt;li&gt;Create a function to query the content using the 4-step strategy&lt;/li&gt;
&lt;li&gt;Build a UI to accept user input and render a streaming response&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The following sections will build on this to make everything even more
interesting!&lt;/p&gt;
&lt;h2&gt;Conversation&lt;/h2&gt;
&lt;p&gt;To start a chat, we&apos;ve seen how to &lt;a href=&quot;#3-build-the-prompt&quot;&gt;build a basic prompt&lt;/a&gt;. This is good
enough for one-shot questions, but we need more to build a meaningful
conversation. The chat completions endpoint accepts an array of messages, so a
pattern to fill this array could look like this:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Add a &lt;code&gt;system&lt;/code&gt; message that tells the model (i.e. the &lt;code&gt;assistant&lt;/code&gt;) how to
behave and respond.&lt;/li&gt;
&lt;li&gt;Add the conversation history with &lt;code&gt;user&lt;/code&gt; and &lt;code&gt;assistant&lt;/code&gt; messages.&lt;/li&gt;
&lt;li&gt;Add the &lt;code&gt;user&lt;/code&gt; prompt, containing the context and the query.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Here is an example building on the initial &lt;a href=&quot;#3-build-the-prompt&quot;&gt;prompt example&lt;/a&gt; that extends the
&lt;code&gt;messages&lt;/code&gt; array to build the conversation:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;// Create a concatenated string from search results metadata from step 2: query the database
const context = metadata.map(metadata =&amp;gt; metadata.content).join(&apos; &apos;);

const system = `Answer the question as truthfully as possible using the provided context.
If the answer is not contained within the text below, say &amp;quot;Sorry, I don&apos;t have that information.&amp;quot;.`;

// In a real application, the conversation `history` can be sent
// with every request from the client, or by using some kind of storage.
const history = [
  [&apos;What is an embedding?&apos;, &apos;An embedding is...&apos;],
  [&apos;Can you give an example?&apos;, &apos;Here is an example...&apos;],
];

const prompt = getPrompt(context, &apos;Can I restore the original text?&apos;);

const messages = [];

messages.push({
  role: &apos;system&apos;,
  content: system,
});

history.forEach(([question, answer]) =&amp;gt; {
  messages.push({
    role: &apos;user&apos;,
    content: question,
  });

  messages.push({
    role: &apos;assistant&apos;,
    content: answer,
  });
});

messages.push({
  role: &apos;user&apos;,
  content: prompt,
});

const response = await chatCompletions({
  token: &apos;[OPENAI_API_TOKEN]&apos;,
  body: {
    model: &apos;gpt-3.5-turbo&apos;,
    messages,
  },
});
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The actual &lt;code&gt;history&lt;/code&gt; can come from the client. For instance, this could be
stored in UI component state, or browser session storage. In that case, it will
need to be sent with every request to the &lt;a href=&quot;#function&quot;&gt;function&lt;/a&gt;. Other ways of storing
and retrieving the conversation history are outside the scope of this guide.&lt;/p&gt;
&lt;p&gt;See the starter kits for examples to handle this in the user interface in tandem
with the &lt;code&gt;@7-docs/edge&lt;/code&gt; package.&lt;/p&gt;
&lt;h2&gt;Tokens&lt;/h2&gt;
&lt;p&gt;Tokens (not characters) are the unit used by OpenAI for limits and usage. There
are limits to the number of tokens that can be sent to and received from the API
endpoints with each request.&lt;/p&gt;
&lt;h3&gt;Embeddings&lt;/h3&gt;
&lt;p&gt;The maximum number of input tokens to create embeddings with the
&lt;code&gt;text-embedding-ada-002&lt;/code&gt; model is 8191.&lt;/p&gt;
&lt;p&gt;The price is &lt;code&gt;$ 0.0004&lt;/code&gt; per 1k tokens, which comes down to a maximum of
&lt;code&gt;$ 0.0032&lt;/code&gt; per request when sending 8k tokens. That&apos;s roughly 6,000 words that
can be sent at once to create vector embeddings. We can send as many requests as
we want.&lt;/p&gt;
&lt;p&gt;During content ingestion you may need this endpoint for a short period in
bursts, depending on the amount of content. Remember that we also need it to
create an embedding from the user&apos;s input to query the vector database.
Depending on the user&apos;s input this request is usually smaller, but may occur
frequently for a longer period depending on application traffic.&lt;/p&gt;
&lt;h3&gt;Chat completions&lt;/h3&gt;
&lt;p&gt;For the chat completions endpoint, the &lt;code&gt;max_tokens&lt;/code&gt; value represents the number
of tokens the model is allowed to use when generating the completion. The models
have their own limit (context length) and pricing:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style=&quot;text-align:left&quot;&gt;Model&lt;/th&gt;
&lt;th style=&quot;text-align:right&quot;&gt;Context Length&lt;/th&gt;
&lt;th style=&quot;text-align:right&quot;&gt;$/1k prompt&lt;/th&gt;
&lt;th style=&quot;text-align:right&quot;&gt;$/1k completion&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style=&quot;text-align:left&quot;&gt;gpt-3.5-turbo&lt;/td&gt;
&lt;td style=&quot;text-align:right&quot;&gt;4,096&lt;/td&gt;
&lt;td style=&quot;text-align:right&quot;&gt;$ 0.002&lt;/td&gt;
&lt;td style=&quot;text-align:right&quot;&gt;$ 0.002&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&quot;text-align:left&quot;&gt;gpt-4&lt;/td&gt;
&lt;td style=&quot;text-align:right&quot;&gt;8,192&lt;/td&gt;
&lt;td style=&quot;text-align:right&quot;&gt;$ 0.03&lt;/td&gt;
&lt;td style=&quot;text-align:right&quot;&gt;$ 0.06&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&quot;text-align:left&quot;&gt;gpt-4-32k&lt;/td&gt;
&lt;td style=&quot;text-align:right&quot;&gt;32,768&lt;/td&gt;
&lt;td style=&quot;text-align:right&quot;&gt;$ 0.06&lt;/td&gt;
&lt;td style=&quot;text-align:right&quot;&gt;$ 0.12&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;The sum of the tokens for the prompt plus the &lt;code&gt;max_tokens&lt;/code&gt; for completion cannot
exceed the model&apos;s context length. For &lt;code&gt;gpt-3.5-turbo&lt;/code&gt; this means:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;num_tokens(prompt) + max_tokens &amp;lt;= 4096
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To see what this means in practice, we&apos;ll discuss tokenization first and then
look at an example calculation.&lt;/p&gt;
&lt;h3&gt;Tokenization&lt;/h3&gt;
&lt;p&gt;The number of tokens for a given text can be calculated using a tokenizer (such
as &lt;a href=&quot;https://github.com/latitudegames/GPT-3-Encoder&quot;&gt;GPT-3-Encoder&lt;/a&gt;). Tokenization can be slow on larger chunks, and npm
packages for Node.js may not work in other environments such as the browser or
Deno.&lt;/p&gt;
&lt;p&gt;The alternative is to make an estimate: use 4 characters per token or 0.75 words
per token. That&apos;s 75 words per 100 tokens. This is a very rough estimate for the
English language and varies per language. You should probably also add a small
safety margin to stay within the limits and prevent errors.&lt;/p&gt;
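&lt;p&gt;This heuristic fits in a tiny helper. The function name and the 10% safety
margin below are arbitrary choices of mine, not an official formula:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;// Rough estimate: ~4 characters per token for English text,
// padded with a safety margin to stay below the context length.
function estimateTokens(text, margin = 0.1) {
  return Math.ceil((text.length / 4) * (1 + margin));
}

estimateTokens(&apos;Hello, how are you doing today?&apos;); // 9
&lt;/code&gt;&lt;/pre&gt;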
&lt;p&gt;OpenAI provides an online &lt;a href=&quot;https://platform.openai.com/tokenizer&quot;&gt;Tokenizer&lt;/a&gt;. For Python there&apos;s &lt;a href=&quot;https://github.com/openai/tiktoken&quot;&gt;tiktoken&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;Example&lt;/h3&gt;
&lt;p&gt;Let&apos;s say you&apos;re using the &lt;code&gt;gpt-3.5-turbo&lt;/code&gt; model. If you want to preserve 25%
for the completion, use &lt;code&gt;max_tokens: 1024&lt;/code&gt;. The rest of the model&apos;s context can
be occupied by the prompt. That&apos;s &lt;code&gt;3072&lt;/code&gt; tokens (&lt;code&gt;4096-1024&lt;/code&gt;), which comes down
to an estimated 2304 words (&lt;code&gt;3072*0.75&lt;/code&gt;) or 12,288 characters (&lt;code&gt;3072*4&lt;/code&gt;).&lt;/p&gt;
&lt;p&gt;The length of the prompt is the combined length of all &lt;code&gt;content&lt;/code&gt; in the
&lt;code&gt;messages&lt;/code&gt; (i.e. the combined messages of the &lt;code&gt;system&lt;/code&gt;, &lt;code&gt;user&lt;/code&gt; and &lt;code&gt;assistant&lt;/code&gt;
roles in &lt;a href=&quot;#conversation&quot;&gt;Conversation&lt;/a&gt;).&lt;/p&gt;
&lt;p&gt;If the prompt has the maximum length and the model would use all completion
tokens, using &lt;code&gt;4096&lt;/code&gt; tokens would cost $ 0.008 (&lt;code&gt;4*$0.002&lt;/code&gt;).&lt;/p&gt;
&lt;p&gt;Using the &lt;code&gt;gpt-4&lt;/code&gt; model, the same roundtrip would cost $ 0.15 (&lt;code&gt;3*$0.03&lt;/code&gt; for the
prompt + &lt;code&gt;1*$0.06&lt;/code&gt; for the completion).&lt;/p&gt;
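&lt;p&gt;These calculations are easy to capture in a small helper. A sketch, with the
rates taken from the pricing table above (the names are mine):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;// Price per 1k tokens, per the pricing table above.
const rates = {
  &apos;gpt-3.5-turbo&apos;: { prompt: 0.002, completion: 0.002 },
  &apos;gpt-4&apos;: { prompt: 0.03, completion: 0.06 },
};

function cost(model, promptTokens, completionTokens) {
  const rate = rates[model];
  return (promptTokens / 1000) * rate.prompt + (completionTokens / 1000) * rate.completion;
}

cost(&apos;gpt-4&apos;, 3072, 1024); // ≈ 0.1536, i.e. about $ 0.15
&lt;/code&gt;&lt;/pre&gt;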
&lt;h3&gt;Strategies&lt;/h3&gt;
&lt;p&gt;To optimize for your end-user, you&apos;ll need to find the right balance between
input (prompt) and output (completion).&lt;/p&gt;
&lt;p&gt;When adding context and conversation history to the chat completion request it
may become a challenge to keep everything within the model&apos;s limit. More context
and more conversation history (input) means less room for the completion
(output).&lt;/p&gt;
&lt;p&gt;There are a few ways I can think of to help mitigate this:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Limit the number of &lt;code&gt;messages&lt;/code&gt; to keep in the conversation history.&lt;/li&gt;
&lt;li&gt;Truncate or leave out previous answers from the &lt;code&gt;assistant&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Send some sort of summary of the conversation history. That would likely
require additional effort and requests.&lt;/li&gt;
&lt;li&gt;Use a solution like &lt;a href=&quot;https://github.com/zilliztech/GPTCache&quot;&gt;GPTCache&lt;/a&gt; to cache query results.&lt;/li&gt;
&lt;li&gt;Some form of &amp;quot;compression&amp;quot; could work in certain cases. An example using GPT-4
can be found at &lt;a href=&quot;https://github.com/itamargol/openai/blob/main/gpt4_compression.md&quot;&gt;gpt4_compression.md&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Another thing to consider is the amount of context to send with the prompt. This
context comes from the semantic search results when querying the vector
database. You may want to create smaller vector embeddings during ingestion to
eventually have more options and wiggle room when building the context for the
chat completion. On the other hand, including smaller but more varied pieces of
context may result in less &amp;quot;focused&amp;quot; completions.&lt;/p&gt;
&lt;p&gt;Overall, I think what matters most is not to lose the first and the most recent
question throughout the conversation. Keep in mind that the model does not store
any state or session between requests.&lt;/p&gt;
&lt;h3&gt;Usage&lt;/h3&gt;
&lt;p&gt;When using OpenAI endpoints, the token &lt;code&gt;usage&lt;/code&gt; for the request is included in
the response (with separate &lt;code&gt;prompt_tokens&lt;/code&gt; and &lt;code&gt;completion_tokens&lt;/code&gt;).
Unfortunately, &lt;code&gt;usage&lt;/code&gt; is not included for streaming chat completion responses
(&lt;code&gt;stream: true&lt;/code&gt;).&lt;/p&gt;
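&lt;p&gt;For non-streaming requests, reading the usage comes down to this (the
response object below is abbreviated to just the relevant field):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;// Abbreviated chat completion response; only the usage object is shown.
const response = {
  usage: { prompt_tokens: 3072, completion_tokens: 1024, total_tokens: 4096 },
};

const { prompt_tokens, completion_tokens, total_tokens } = response.usage;
console.log(prompt_tokens, completion_tokens, total_tokens); // 3072 1024 4096
&lt;/code&gt;&lt;/pre&gt;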
&lt;h2&gt;Parameters&lt;/h2&gt;
&lt;p&gt;A quick overview of some common parameters you may want to tweak for better chat
completions.&lt;/p&gt;
&lt;h3&gt;&lt;code&gt;temperature&lt;/code&gt;&lt;/h3&gt;
&lt;p&gt;The &lt;code&gt;temperature&lt;/code&gt; parameter is a number between &lt;code&gt;0&lt;/code&gt; and &lt;code&gt;2&lt;/code&gt; (default: &lt;code&gt;1&lt;/code&gt;). A
low number like &lt;code&gt;0.2&lt;/code&gt; makes the output more focused and deterministic. You want
this when the output should be generated based on the context sent within the
prompt. A higher value like &lt;code&gt;0.8&lt;/code&gt; makes the output more random.&lt;/p&gt;
&lt;h3&gt;&lt;code&gt;presence_penalty&lt;/code&gt; and &lt;code&gt;frequency_penalty&lt;/code&gt;&lt;/h3&gt;
&lt;p&gt;Both are numbers between &lt;code&gt;-2&lt;/code&gt; and &lt;code&gt;2&lt;/code&gt; (default: &lt;code&gt;0&lt;/code&gt;) that penalize tokens based
on whether they have appeared in the text so far (presence) or how often they
have appeared (frequency). The default is fine for most situations. If you want
to reduce repetition, try numbers between &lt;code&gt;0.1&lt;/code&gt; and &lt;code&gt;1&lt;/code&gt;. Negative numbers
increase the likelihood of repetition.&lt;/p&gt;
&lt;h3&gt;&lt;code&gt;name&lt;/code&gt;&lt;/h3&gt;
&lt;p&gt;As we&apos;ve seen when creating the &lt;code&gt;messages&lt;/code&gt; array, each message is assigned a
&lt;code&gt;role&lt;/code&gt; (&lt;code&gt;system&lt;/code&gt;, &lt;code&gt;user&lt;/code&gt; or &lt;code&gt;assistant&lt;/code&gt;). To make the conversation more
personal, you can also send a &lt;code&gt;name&lt;/code&gt; with each message.&lt;/p&gt;
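&lt;p&gt;Putting these parameters together, a chat completion request body could look
like this (the values are arbitrary examples, not recommendations):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;const body = {
  model: &apos;gpt-3.5-turbo&apos;,
  max_tokens: 1024,
  temperature: 0.2, // focused output, based on the provided context
  presence_penalty: 0.5, // reduce repetition a bit
  messages: [
    { role: &apos;system&apos;, content: &apos;You are a helpful assistant. Use Markdown.&apos; },
    { role: &apos;user&apos;, name: &apos;Lars&apos;, content: &apos;What is a vector embedding?&apos; },
  ],
};
&lt;/code&gt;&lt;/pre&gt;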
&lt;h2&gt;Markdown &amp;amp; code blocks&lt;/h2&gt;
&lt;p&gt;If you ingest Markdown content, you likely also want the completion to include
Markdown and code blocks when relevant. Here&apos;s a list of things to remember
during ingestion and building the client application:&lt;/p&gt;
&lt;h3&gt;Ingestion&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Don&apos;t strip out code blocks from the Markdown during ingestion.&lt;/li&gt;
&lt;li&gt;Try to prevent splitting text in the middle of code blocks.&lt;/li&gt;
&lt;/ul&gt;
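&lt;p&gt;The second point can be sketched in code: track fence markers while
splitting, and only split outside of them. This is a naive example; the chunk
size and the blank-line boundaries are arbitrary choices of mine:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;// Split Markdown at blank lines, but never inside a fenced code block.
function splitMarkdown(text, maxChars = 1000) {
  const chunks = [];
  let current = [];
  let length = 0;
  let inFence = false;
  for (const line of text.split(&apos;\n&apos;)) {
    if (line.trimStart().startsWith(&apos;```&apos;)) inFence = !inFence;
    current.push(line);
    length += line.length + 1;
    if (!inFence &amp;amp;&amp;amp; line.trim() === &apos;&apos; &amp;amp;&amp;amp; length &amp;gt;= maxChars) {
      chunks.push(current.join(&apos;\n&apos;));
      current = [];
      length = 0;
    }
  }
  if (current.length &amp;gt; 0) chunks.push(current.join(&apos;\n&apos;));
  return chunks;
}
&lt;/code&gt;&lt;/pre&gt;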
&lt;h3&gt;Client&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Include something like &amp;quot;Use Markdown&amp;quot; and &amp;quot;Try to include a code example in
language-specific fenced code blocks&amp;quot; in the prompt, ideally in the &lt;code&gt;system&lt;/code&gt;
message.&lt;/li&gt;
&lt;li&gt;Use a Markdown renderer (e.g. &lt;a href=&quot;https://github.com/remarkjs/react-markdown&quot;&gt;react-markdown&lt;/a&gt;).&lt;/li&gt;
&lt;li&gt;Use a syntax highlighter (e.g. &lt;a href=&quot;https://github.com/react-syntax-highlighter/react-syntax-highlighter&quot;&gt;react-syntax-highlighter&lt;/a&gt;).&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Next steps&lt;/h2&gt;
&lt;p&gt;After figuring out how to connect the dots, it&apos;s exciting to tinker and continue
the journey to improve the user experience. Here are a few pointers that may
inspire you:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Consider the integration of the conversation in the user interface, as well as
the place and the role of the chat box.&lt;/li&gt;
&lt;li&gt;Keep refining the prompt to better align with your content and your target
audience.&lt;/li&gt;
&lt;li&gt;Improve chat completions by further tweaking the parameters, vector embedding
sizes, and context in the prompt.&lt;/li&gt;
&lt;li&gt;Empower users with more control by providing affordances to adjust the prompt
or by incorporating multiple prompts.&lt;/li&gt;
&lt;li&gt;Combine multiple sources of content, such as searching a database with source
code or a table with more generic content.&lt;/li&gt;
&lt;li&gt;Generate multiple chat completions in a single response.&lt;/li&gt;
&lt;li&gt;Use the &lt;a href=&quot;https://platform.openai.com/docs/api-reference/moderations&quot;&gt;Moderations&lt;/a&gt; endpoint to make sure the input text does not
violate OpenAI&apos;s content policy.&lt;/li&gt;
&lt;li&gt;Last but not least, listen to your customers. What are their needs?&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Closing remarks&lt;/h2&gt;
&lt;p&gt;We&apos;ve explored many aspects of using OpenAI with JavaScript to create useful
applications. We&apos;ve covered everything from ingesting content to building a user
interface with your own serverless or edge function. Hopefully, this guide is
helpful in your own journey. Good luck!&lt;/p&gt;
&lt;p&gt;I would love to hear about your thoughts and what you are building, please
&lt;a href=&quot;https://bsky.app/profile/webpro.nl&quot;&gt;share with me on Bluesky&lt;/a&gt;!&lt;/p&gt;
&lt;p&gt;Special thanks goes out to &lt;a href=&quot;https://github.com/enobayram&quot;&gt;Enis Bayramoğlu&lt;/a&gt; for a great review.&lt;/p&gt;
</description><pubDate>Wed, 03 May 2023 00:00:00 GMT</pubDate><category>openai</category><category>javascript</category><category>vector</category><category>embedding</category><category>database</category><category>ingestion</category><category>search</category><category>query</category><category>chat</category><category>completion</category><category>prompt</category><category>tokens</category><category>conversation</category></item><item><title>Using Git bisect to divide &amp; conquer</title><link>https://webpro.nl/scraps/using-git-bisect-to-divide-and-conquer</link><guid isPermaLink="true">https://webpro.nl/scraps/using-git-bisect-to-divide-and-conquer</guid><description>&lt;h1&gt;Using Git bisect to divide &amp;amp; conquer&lt;/h1&gt;
&lt;p&gt;When you have a series of commits and want to find where a bug or a change of
behavior was introduced, &lt;code&gt;git bisect&lt;/code&gt; is your friend. With a command that can
tell &amp;quot;bad&amp;quot; from &amp;quot;good&amp;quot;, the process can even be fully automated. For instance,
have it run &lt;code&gt;npm test&lt;/code&gt; and report back the first commit where the tests fail.&lt;/p&gt;
&lt;p&gt;Here&apos;s how to start the process:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;git bisect start
git bisect bad HEAD
git bisect good v5.1.0
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Usually &lt;code&gt;HEAD&lt;/code&gt; is a bad commit, and &lt;code&gt;v5.1.0&lt;/code&gt; is a tag or commit you are sure
is good. Next, create a script file with the commands that verify a commit. The
exit code tells Git whether a commit is good (&lt;code&gt;0&lt;/code&gt;) or bad (&lt;code&gt;1&lt;/code&gt;-&lt;code&gt;127&lt;/code&gt;, where
&lt;code&gt;125&lt;/code&gt; means the commit cannot be tested and is skipped). Here&apos;s an arbitrary
example:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;#!/usr/bin/env bash
set -e
npm run build
npm test
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Make this file executable (&lt;code&gt;chmod +x bisect.sh&lt;/code&gt;) and run:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;git bisect run ./bisect.sh
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In case there is no script to automate this, you can do it manually: say
&lt;code&gt;git bisect good&lt;/code&gt; or &lt;code&gt;git bisect bad&lt;/code&gt;, Git will check out the next commit for
you, you verify whether it&apos;s good or bad, and so on. Git uses a binary search
algorithm to do this efficiently, so a range of a thousand commits takes only
about ten steps.&lt;/p&gt;
&lt;p&gt;Note that this technique is often used to find which changeset introduced a bug,
but it also works for other ideas, such as finding a performance regression or a
change in some program&apos;s output. You can even use different terms (instead of
&amp;quot;bad&amp;quot; and &amp;quot;good&amp;quot;) to support this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;git bisect start --term-old fast --term-new slow
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;When you are done, or if you made a mistake marking commits as &lt;code&gt;good&lt;/code&gt; or &lt;code&gt;bad&lt;/code&gt;,
reset the process:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;git bisect reset
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;See the &lt;a href=&quot;https://git-scm.com/docs/git-bisect&quot;&gt;git bisect documentation&lt;/a&gt; for more details.&lt;/p&gt;
</description><pubDate>Mon, 19 Dec 2022 00:00:00 GMT</pubDate><category>git</category><category>bisect</category></item><item><title>Handling errors in Azure pipelines</title><link>https://webpro.nl/scraps/handling-errors-in-azure-pipelines</link><guid isPermaLink="true">https://webpro.nl/scraps/handling-errors-in-azure-pipelines</guid><description>&lt;h1&gt;Handling errors in Azure pipelines&lt;/h1&gt;
&lt;p&gt;Put mildly, Azure pipelines don&apos;t always behave as expected. Sometimes a
pipeline does not fail when it should. This scrap shows a few solutions to make
pipelines fail for script and task errors.&lt;/p&gt;
&lt;h2&gt;Contents&lt;/h2&gt;
&lt;p&gt;We&apos;re going to look at three cases and ways to make pipelines fail:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;#fail-on-errors-written-to-stderr-in-scripts&quot;&gt;Fail on errors written to &lt;code&gt;stderr&lt;/code&gt; in scripts (&lt;code&gt;failOnStderr&lt;/code&gt;)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;#fail-on-errors-written-to-stderr-in-tasks&quot;&gt;Fail on errors written to &lt;code&gt;stderr&lt;/code&gt; in tasks (&lt;code&gt;failOnStandardError&lt;/code&gt;)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;#fail-on-script-errors&quot;&gt;Fail on script errors (&lt;code&gt;set -e&lt;/code&gt;)&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;And we&apos;ll also look at a way to &lt;a href=&quot;#continue-on-error&quot;&gt;make a pipeline continue&lt;/a&gt;, even if there are
errors.&lt;/p&gt;
&lt;h2&gt;An unpleasant surprise&lt;/h2&gt;
&lt;p&gt;Let&apos;s dive straight into our topic and take a look at an example &lt;code&gt;script&lt;/code&gt; task
that tries to tag a pipeline run:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;- script: |
    az pipelines runs tag add --run-id $(Build.BuildId) --tags my-container
    echo &amp;quot;Tagged build for my-container&amp;quot;
  displayName: Tag successful build
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In this example the &lt;code&gt;az&lt;/code&gt; command fails due to some missing extension. This
results in output like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Generating script.
========================== Starting Command Output ===========================
/bin/bash --noprofile --norc /agent/_work/_temp/3ecc72e6-92f7-4de6-96c3-35ae602c7620.sh
ERROR: The command requires the extension azure-devops. Unable to prompt for extension install confirmation as no tty available. Run &apos;az config set extension.use_dynamic_install=yes_without_prompt&apos; to allow installing extensions without prompt.
Tagged build for my-container

Finishing: Tag successful build
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The command fails and prints an &lt;code&gt;ERROR&lt;/code&gt; (to &lt;code&gt;stderr&lt;/code&gt;). But both the task and the
pipeline still succeed:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;./pipeline-success.webp&quot; alt=&quot;pipeline success&quot;&gt;&lt;/p&gt;
&lt;p&gt;Why does this not make the task fail? It&apos;s because the &lt;code&gt;az&lt;/code&gt; command does not
exit with a non-zero code.&lt;/p&gt;
&lt;p&gt;This is often not the desired behavior. Fortunately, when we want to fail the
pipeline we do have some options:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Use the &lt;code&gt;failOnStderr&lt;/code&gt; task option&lt;/li&gt;
&lt;li&gt;Or use &lt;code&gt;set -e&lt;/code&gt; inside the script&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Let&apos;s look at what happens when either of these is used.&lt;/p&gt;
&lt;h2&gt;Fail on errors written to stderr in scripts&lt;/h2&gt;
&lt;p&gt;Here we can add &lt;code&gt;failOnStderr&lt;/code&gt; as a task configuration option:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;- script: |
    az pipelines runs tag add --run-id $(Build.BuildId) --tags my-container
    echo &amp;quot;Tagged build for my-container&amp;quot;
  displayName: Tag successful build
  failOnStderr: true
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This will execute the whole script, but make the task fail, since the &lt;code&gt;az&lt;/code&gt;
command prints the error to &lt;code&gt;stderr&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Generating script.
========================== Starting Command Output ===========================
/bin/bash --noprofile --norc /agent/_work/_temp/3ecc72e6-92f7-4de6-96c3-35ae602c7620.sh
ERROR: The command requires the extension azure-devops. Unable to prompt for extension install confirmation as no tty available. Run &apos;az config set extension.use_dynamic_install=yes_without_prompt&apos; to allow installing extensions without prompt.
Tagged build for my-container
##[error]Bash wrote one or more lines to the standard error stream.
##[error]ERROR: The command requires the extension azure-devops. Unable to prompt for extension install confirmation as no tty available. Run &apos;az config set extension.use_dynamic_install=yes_without_prompt&apos; to allow installing extensions without prompt.

Finishing: Tag successful build
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The pipeline fails:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;./pipeline-failed.webp&quot; alt=&quot;pipeline failed&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Fail on errors written to stderr in tasks&lt;/h2&gt;
&lt;p&gt;When we want to do the same for a &lt;strong&gt;task&lt;/strong&gt; (as opposed to a script), this
requires a different setting. For tasks the &lt;code&gt;failOnStandardError&lt;/code&gt; option needs
to be set as part of the &lt;code&gt;inputs&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;- task: AzureCLI@2
  displayName: Deploy my-container
  inputs:
    failOnStandardError: true
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Alright, so we have:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Script → &lt;code&gt;failOnStderr&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Task → &lt;code&gt;failOnStandardError&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;And we still have another option left: &lt;code&gt;set -e&lt;/code&gt;&lt;/p&gt;
&lt;h2&gt;Fail on script errors&lt;/h2&gt;
&lt;p&gt;To make the script fail on errors, use &lt;code&gt;set -e&lt;/code&gt; at the start of the script:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;- script: |
    set -e
    az pipelines runs tag add --run-id $(Build.BuildId) --tags my-container
    echo &amp;quot;Tagged build for my-container&amp;quot;
  displayName: Tag successful build
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This will fail the script immediately:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Generating script.
========================== Starting Command Output ===========================
/bin/bash --noprofile --norc /agent/_work/_temp/a64b21a0-0a8e-4e6b-a0b4-271980ef4d05.sh
ERROR: The command requires the extension azure-devops. Unable to prompt for extension install confirmation as no tty available. Run &apos;az config set extension.use_dynamic_install=yes_without_prompt&apos; to allow installing extensions without prompt.
##[error]Bash exited with code &apos;2&apos;.
Finishing: Tag successful build
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And again, the pipeline fails as intended:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;./pipeline-failed.webp&quot; alt=&quot;pipeline failed&quot;&gt;&lt;/p&gt;
&lt;p&gt;But notice the difference in behavior: the &amp;quot;Tagged build for my-container&amp;quot;
message is not printed here.&lt;/p&gt;
&lt;p&gt;Which one is the better choice depends on the use case, although I think in
general failing immediately is the better option.&lt;/p&gt;
&lt;h2&gt;Continue on error&lt;/h2&gt;
&lt;p&gt;Last but not least, sometimes the pipeline should continue even if a task
fails. For this use case, &lt;code&gt;continueOnError&lt;/code&gt; comes to the rescue:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;- script: |
    set -e
    az pipelines runs tag add --run-id $(Build.BuildId) --tags my-container
    echo &amp;quot;Tagged build for my-container&amp;quot;
  continueOnError: true
  displayName: Tag successful build
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This will result in a green pipeline, but also a warning sign for the stage with
the failed task:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;./pipeline-warning.webp&quot; alt=&quot;pipeline warning&quot;&gt;&lt;/p&gt;
&lt;p&gt;Compare this to the initial situation where everything is naively green. At
least now we can see something is off.&lt;/p&gt;
&lt;h2&gt;Azure DevOps documentation links&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/utility/command-line&quot;&gt;Command Line task&lt;/a&gt; covers &lt;code&gt;failOnStderr&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/deploy/azure-cli&quot;&gt;Azure CLI task&lt;/a&gt; covers &lt;code&gt;failOnStandardError&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://learn.microsoft.com/en-us/azure/devops/pipelines/process/tasks&quot;&gt;Task types &amp;amp; usage&lt;/a&gt; covers &lt;code&gt;continueOnError&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
</description><pubDate>Fri, 09 Sep 2022 00:00:00 GMT</pubDate><category>azure</category><category>pipelines</category><category>errors</category><category>failing</category><category>tasks</category><category>script</category><category>commands</category><category>failOnStderr</category><category>continueOnError</category></item><item><title>The value of abstractions</title><link>https://webpro.nl/articles/the-value-of-abstractions</link><guid isPermaLink="true">https://webpro.nl/articles/the-value-of-abstractions</guid><description>&lt;h1&gt;The value of abstractions&lt;/h1&gt;
&lt;p&gt;In software systems, maintenance quickly becomes harder as more components are
added. Given a well-designed system, when a component deteriorates, it should be
possible to refactor or replace it without major impact on other components in
the system. Following this principle, components should separate concerns
clearly with well-designed interfaces.&lt;/p&gt;
&lt;p&gt;This article discusses some considerations in the process of designing a complex
system and its components.&lt;/p&gt;
&lt;h2&gt;Minimize the cost of change&lt;/h2&gt;
&lt;p&gt;Over time, implementations and underlying dependencies will change, but the
interfaces should remain stable. Changes to a component&apos;s implementation should
have minimal impact on other components in the system.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;An interface should depend on the code that calls it, not its implementation.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Let&apos;s take an example component &amp;quot;C&amp;quot; in our system. The deeper and more
frequently C is integrated into the system, the more important its interface
becomes. When considering the cost of C, it may help to ask ourselves questions
like this:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Is it hard to refactor without affecting other components?&lt;/li&gt;
&lt;li&gt;Is it hard to replace within the structure?&lt;/li&gt;
&lt;li&gt;Is it hard to replace its underlying dependencies?&lt;/li&gt;
&lt;li&gt;Is it hard to build another feature with an alternative of C?&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Answering &amp;quot;yes&amp;quot; means other components are more tightly coupled to C, which
makes the project harder to maintain and increases the cost of C.&lt;/p&gt;
&lt;p&gt;This may also indicate a leaky abstraction, if C fails to encapsulate and hide
its underlying implementation details.&lt;/p&gt;
&lt;p&gt;In short: there&apos;s value in having the right abstraction for C.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The more expensive it is to refactor or replace a component in the system, the
more value it has to design an interface to abstract the implementation away.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;One of the hardest parts of system design is to work out the modularity of the
system, and to see which components benefit the most from an abstraction.&lt;/p&gt;
&lt;p&gt;Is a component large and complex, perhaps resembling more of a framework? If you
know upfront a potential refactoring is hardly feasible then an abstraction is
likely not worth it. In that case, maybe you need to take a step back and
reconsider the modularity of the system as a whole. Is it possible to increase
its modularity, and lower the cost of changes that will inevitably be necessary
over time? What are the trade-offs when going all-in on the framework?&lt;/p&gt;
&lt;p&gt;On the other hand, excessive component fragmentation leads to an over-engineered
system with too many moving parts and interrelationships.&lt;/p&gt;
&lt;p&gt;This balancing act between under- and over-engineering can be a tough cookie.
When the overall structure is largely in place, the process of interface design
comes down to iteration until all questions above are answered with &amp;quot;no&amp;quot;.&lt;/p&gt;
&lt;p&gt;Perhaps ironically, this article itself is abstract, and the actual process of
software design may feel distant or overrated. Yet taking the time to think and
design before and during ambitious projects pays off in the long run, as it
reduces maintenance complexity and facilitates system evolution.&lt;/p&gt;
&lt;h2&gt;Enforce module boundaries&lt;/h2&gt;
&lt;p&gt;Going from less abstract to more concrete, at some point we arrive at
lower-level module boundaries. Here, it becomes essential to enforce and guard
those boundaries. Even the best interfaces and abstractions may go unnoticed by
fellow developers. How can we prevent boundaries from being crossed? Depending
on the technology stack of choice, tooling might be available to guard certain
layers of modularity. At the level of code and configuration, linters can prove
very effective.&lt;/p&gt;
&lt;h3&gt;Examples: ESLint &amp;amp; Nx&lt;/h3&gt;
&lt;p&gt;Within the JavaScript and Node.js ecosystem, a great example of such a linter is
ESLint. Relevant rules include the built-in &lt;a href=&quot;https://eslint.org/docs/latest/rules/no-restricted-imports&quot;&gt;no-restricted-imports&lt;/a&gt; and Nx&apos;s
&lt;a href=&quot;https://nx.dev/nx-api/eslint-plugin/documents/enforce-module-boundaries&quot;&gt;@nrwl/nx/enforce-module-boundaries&lt;/a&gt; rule.&lt;/p&gt;
&lt;p&gt;They help to &lt;a href=&quot;https://nx.dev/features/enforce-module-boundaries&quot;&gt;enforce module boundaries&lt;/a&gt; and prevent direct imports of
underlying modules or dependencies, and suggest to use the provided abstraction
instead.&lt;/p&gt;
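&lt;p&gt;As an illustration, a &lt;code&gt;no-restricted-imports&lt;/code&gt; configuration could disallow deep
imports into a component&apos;s internals. This is a sketch; the folder layout and
message are assumptions for illustration, not taken from the ESLint or Nx docs:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;// .eslintrc.js (sketch): block imports that bypass a component&apos;s public interface.
module.exports = {
  rules: {
    &apos;no-restricted-imports&apos;: [
      &apos;error&apos;,
      {
        patterns: [
          {
            group: [&apos;**/components/*/internal/**&apos;],
            message: &apos;Import from the component index (the public interface) instead.&apos;,
          },
        ],
      },
    ],
  },
};
&lt;/code&gt;&lt;/pre&gt;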
&lt;p&gt;When properly configured, tools like this effectively encourage developers to
think about the system and its components, and consider the usage and value of
abstractions.&lt;/p&gt;
&lt;h2&gt;Further reading&lt;/h2&gt;
&lt;p&gt;Resources about related programming principles:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/webpro/programming-principles#interface-segregation-principle&quot;&gt;Interface Segregation Principle&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://en.wikipedia.org/wiki/SOLID&quot;&gt;SOLID&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://en.wikipedia.org/wiki/Leaky_abstraction&quot;&gt;Leaky abstraction&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://en.wikipedia.org/wiki/Separation_of_concerns&quot;&gt;Separation of concerns&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://refactoring.fm/p/the-true-meaning-of-technical-debt&quot;&gt;The True Meaning of Technical Debt&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
</description><pubDate>Fri, 19 Aug 2022 00:00:00 GMT</pubDate><category>value</category><category>abstraction</category><category>interface</category><category>refactoring</category><category>system</category><category>design</category><category>implementation</category></item><item><title>Using CSS Grid to Stack Elements</title><link>https://webpro.nl/scraps/using-css-grid-to-stack-elements</link><guid isPermaLink="true">https://webpro.nl/scraps/using-css-grid-to-stack-elements</guid><description>
&lt;p&gt;To stack elements using CSS, we previously had to turn to absolute positioning and &lt;code&gt;z-index&lt;/code&gt; tricks. Yet with CSS Grid,
there&apos;s a new way to do this. To show what I mean, we&apos;re going to build a badge-like example component.&lt;/p&gt;
&lt;p&gt;First, we need a few HTML elements:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-html&quot;&gt;&amp;lt;div class=&amp;quot;grid&amp;quot;&amp;gt;
  &amp;lt;div class=&amp;quot;back&amp;quot;&amp;gt;&amp;lt;/div&amp;gt;
  &amp;lt;div class=&amp;quot;front&amp;quot;&amp;gt;What A Badge&amp;lt;/div&amp;gt;
&amp;lt;/div&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;.back&lt;/code&gt; element could be an image or something more interesting. The following style declarations show how to stack
the &lt;code&gt;.back&lt;/code&gt; and &lt;code&gt;.front&lt;/code&gt; elements on top of each other:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-css&quot;&gt;.grid {
  display: grid;
  grid-template-rows: auto min-content 16px;
}

.back {
  grid-area: 1 / 1 / 4 / 2;
}

.front {
  grid-area: 2 / 1 / 3 / 2;
  margin: 0 -20px;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The trick is to use overlapping &lt;code&gt;grid-area&lt;/code&gt; values for the elements you want to stack. Use &lt;code&gt;grid-template-rows&lt;/code&gt; (or
&lt;code&gt;grid-template-columns&lt;/code&gt;) to lay out the elements. Additionally, you can use (negative) &lt;code&gt;margin&lt;/code&gt; to position the stacked
element relative to the grid for even more flexibility.&lt;/p&gt;
&lt;p&gt;The stacking order follows the order in the DOM: the last element will be on top of the previous element(s).&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;min-content&lt;/code&gt; value for the second row of the grid ensures the row takes the height of the stacked &lt;code&gt;.front&lt;/code&gt; element,
while the &lt;code&gt;auto&lt;/code&gt; value for the first row makes it occupy the rest of the available space.&lt;/p&gt;
&lt;p&gt;This idea is certainly not new. For instance, it was presented in &lt;a href=&quot;https://css-tricks.com/how-to-stack-elements-in-css/&quot;&gt;How to Stack Elements in CSS&lt;/a&gt;. This scrap presents
the same idea in a more focused way, and with a different example use case.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://caniuse.com/css-grid&quot;&gt;Support for CSS Grid&lt;/a&gt; is currently great across browsers, so you can go ahead and use all of this today!&lt;/p&gt;
</description><pubDate>Mon, 23 May 2022 00:00:00 GMT</pubDate><category>CSS</category><category>stack</category><category>grid</category></item><item><title>The JavaScript block statement</title><link>https://webpro.nl/scraps/javascript-block-statement</link><guid isPermaLink="true">https://webpro.nl/scraps/javascript-block-statement</guid><description>&lt;h1&gt;The JavaScript block statement&lt;/h1&gt;
&lt;p&gt;In the first scrap of this blog I&apos;d like to make a case for a great little
feature that I rarely see used in the wild.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Want to organize your code a little bit better?&lt;/li&gt;
&lt;li&gt;Have a hard time coming up with another name for the same variable?&lt;/li&gt;
&lt;li&gt;Want to run the same code multiple times in the console or a REPL?&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Use blocks! Let&apos;s take a bogus unit test:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;function test() {
  const type = &apos;some&apos;;

  const thing = getThing(type, 1);
  assert.equal(thing, 1);

  // Ehm... how to call this `thing` now...?
  const thing2 = getThing(type, 2);
  assert.equal(thing2, 2);
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Specifically for unit tests you could argue that this particular case should be
separated into two unit tests, but sometimes that&apos;s just not what you want. You
can use blocks instead:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;function test() {
  const type = &apos;some&apos;;

  {
    const thing = getThing(type, 1);
    assert.equal(thing, 1);
  }

  {
    const thing = getThing(type, 2);
    assert.equal(thing, 2);
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;A pair of curly braces delimits a block and defines a new scope for
&lt;code&gt;let&lt;/code&gt;, &lt;code&gt;const&lt;/code&gt; and &lt;code&gt;class&lt;/code&gt; declarations (and for
&lt;code&gt;function&lt;/code&gt; declarations in strict mode).&lt;/p&gt;
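&lt;p&gt;A minimal sketch to illustrate the scoping; the &lt;code&gt;results&lt;/code&gt; array is only
there to observe both blocks from the outside:&lt;/p&gt;

```javascript
const results = [];

{
  const thing = 1; // scoped to this block only
  results.push(thing);
}

{
  // Same name, new block scope: no conflict with the block above
  const thing = 2;
  results.push(thing);
}

console.log(results); // [ 1, 2 ]
```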
&lt;p&gt;Neat!&lt;/p&gt;
</description><pubDate>Fri, 06 May 2022 00:00:00 GMT</pubDate><category>javascript</category><category>block</category></item><item><title>How to add search to your static site</title><link>https://webpro.nl/articles/how-to-add-search-to-your-static-site</link><guid isPermaLink="true">https://webpro.nl/articles/how-to-add-search-to-your-static-site</guid><description>&lt;h1&gt;How to add search to your static site&lt;/h1&gt;
&lt;p&gt;Static websites are popular nowadays. There are many static site generators, but
not all have search built-in. Recently I&apos;ve added a static search option to a
few websites, including the one you&apos;re reading. In this article I would like to
share how I did this, as it might take less effort than you think!&lt;/p&gt;
&lt;h2&gt;Re-search&lt;/h2&gt;
&lt;p&gt;When searching to find a good library for a static full-text search, I came
across popular solutions such as &lt;a href=&quot;https://github.com/olivernn/lunr.js&quot;&gt;Lunr.js&lt;/a&gt;, &lt;a href=&quot;https://github.com/nextapps-de/flexsearch&quot;&gt;FlexSearch&lt;/a&gt; and &lt;a href=&quot;https://github.com/krisk/fuse&quot;&gt;Fuse.js&lt;/a&gt;.
To my surprise, these libraries are no longer very actively maintained, while
they all have quite a few open issues. To me, this seems at odds with the
popularity of static websites in general.&lt;/p&gt;
&lt;p&gt;I&apos;ve used Fuse.js before to implement a simple but fast search engine (on
&lt;a href=&quot;https://www.lejan.com.br&quot;&gt;lejan.com.br&lt;/a&gt;), but this was over a year ago, and I&apos;m always looking for
better options. I&apos;ve tried them all again, and each still has at least a few
minor glitches. Another reason to keep looking is that both &lt;a href=&quot;https://github.com/webpro/markdown-rambler&quot;&gt;markdown-rambler&lt;/a&gt; and this
website itself use ES Modules and other modern JavaScript features, which
usually improve developer experience, maintenance and/or performance.&lt;/p&gt;
&lt;h2&gt;MiniSearch&lt;/h2&gt;
&lt;p&gt;Later, I was lucky enough to find &lt;a href=&quot;https://github.com/lucaong/minisearch&quot;&gt;MiniSearch&lt;/a&gt; when &lt;a href=&quot;https://www.npmjs.com/search?q=search%20index&quot;&gt;looking for &amp;quot;search
index&amp;quot;&lt;/a&gt; in the npm registry. This package happens to be easy to use, while
performance and file size feel good. To be honest, I didn&apos;t do an actual file
size and performance comparison between the various options, but this
&lt;a href=&quot;https://lucaongaro.eu/blog/2019/01/30/minisearch-client-side-fulltext-search-engine.html&quot;&gt;MiniSearch blog post&lt;/a&gt; has a good overview.&lt;/p&gt;
&lt;p&gt;The fuzzy search and prefix search are optional, and work very well for the
websites where I integrated it. This always requires a bit of fine-tuning,
depending on the size and density of documents. I did not try the
auto-suggestion feature yet. Overall a pleasant experience, much like the
alternatives. What stood out initially was simply better search results.&lt;/p&gt;
&lt;p&gt;In my experience so far, the contents of Markdown documents can be used just
fine as input for the index. The minimal syntax of Markdown does not seem to
negatively impact the index. This makes solutions like MiniSearch great for
static websites, as they are often powered by Markdown files.&lt;/p&gt;
&lt;h2&gt;Getting started&lt;/h2&gt;
&lt;p&gt;So, how to integrate MiniSearch in your static website? There are roughly four
steps here:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;a href=&quot;#building-the-index&quot;&gt;Build the index&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;#serving-the-index&quot;&gt;Serve the index&lt;/a&gt; with the rest of the static site&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;#connecting-the-search-component&quot;&gt;Connect the index&lt;/a&gt; with a DOM element (such as an input field)&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;#searching-and-rendering-results&quot;&gt;Search and render results&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;Building the index&lt;/h2&gt;
&lt;p&gt;Here&apos;s a fragment from markdown-rambler, but the concept can be applied in any
JavaScript build system. The idea is to map existing files or pages to be
indexed to an object containing the data necessary for both the index and the
fields to eventually be displayed in the search results.&lt;/p&gt;
&lt;p&gt;The result (&lt;code&gt;documents&lt;/code&gt;) can be added to a &lt;code&gt;MiniSearch&lt;/code&gt; instance using
&lt;code&gt;miniSearch.addAllAsync(documents)&lt;/code&gt;, which populates the actual index. The
final step here is to serialize the index and write it to disk as JSON:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;const documents = files
  .filter(file =&amp;gt; file.type === &apos;article&apos;)
  .map((file, index) =&amp;gt; ({
    id: index,
    title: file.data.meta.title,
    description: file.data.meta.description,
    pathname: file.data.meta.pathname,
    content: file.data.markdown,
  }));

const miniSearch = new MiniSearch({
  fields: [&apos;title&apos;, &apos;content&apos;],
  storeFields: [&apos;title&apos;, &apos;pathname&apos;],
});

await miniSearch.addAllAsync(documents);

await fs.writeFile(&apos;search-index.json&apos;, JSON.stringify(miniSearch.toJSON()));
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This example uses the &lt;code&gt;title&lt;/code&gt; and &lt;code&gt;content&lt;/code&gt; fields of each document to index the
full-text search. The &lt;code&gt;title&lt;/code&gt; and &lt;code&gt;pathname&lt;/code&gt; (&lt;code&gt;storeFields&lt;/code&gt;) will be available
to render the search results later.&lt;/p&gt;
&lt;h2&gt;Serving the index&lt;/h2&gt;
&lt;p&gt;The next step is to make sure the index is served with the rest of the static
website, for instance from the root of the &amp;quot;dist&amp;quot; or &amp;quot;public&amp;quot; folder.&lt;/p&gt;
&lt;h2&gt;Connecting the search component&lt;/h2&gt;
&lt;p&gt;Depending on the requirements and the type of (static) website, the next part
could be implemented in many ways. Here I&apos;m going to try and keep it very
concise. Let&apos;s add &lt;code&gt;search.js&lt;/code&gt; to the static site with the following snippet.
This will attach an event listener to an existing &lt;code&gt;&amp;lt;input type=&amp;quot;search&amp;quot;&amp;gt;&lt;/code&gt; in the
DOM:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;(async () =&amp;gt; {
  await import(&apos;https://cdn.jsdelivr.net/npm/minisearch@4.0.3/dist/umd/index.min.js&apos;);
  const searchIndex = await fetch(&apos;/search-index.json&apos;).then(response =&amp;gt; response.text());
  const index = MiniSearch.loadJSON(searchIndex, { fields: [&apos;title&apos;, &apos;content&apos;] });

  const input = document.querySelector(&apos;input[type=search]&apos;);

  const search = query =&amp;gt; {
    const results = index.search(query, { prefix: true, fuzzy: 0.3 });
    console.log(results);
  };
  input.addEventListener(&apos;input&apos;, event =&amp;gt; {
    search(event.target.value);
  });
})();
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Here we already have the basics of our search component, in only a few lines of
code. Note that the indexed fields &lt;code&gt;title&lt;/code&gt; and &lt;code&gt;content&lt;/code&gt; should be provided
again when loading the index. For brevity, this example logs the search results
in the browser console. Combined with very little styling, this is everything
this website uses for the static search.&lt;/p&gt;
&lt;h2&gt;Searching and rendering results&lt;/h2&gt;
&lt;p&gt;How to render search results depends on the type of static website or which
framework is being used. For the sake of completeness, here&apos;s a minimal example
using vanilla JavaScript. This extends the &lt;code&gt;search&lt;/code&gt; function from the previous
example:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;const container = document.createElement(&apos;div&apos;);
container.setAttribute(&apos;id&apos;, &apos;search-results&apos;);

const search = query =&amp;gt; {
  if (query.length &amp;gt; 1) {
    const results = index.search(query, { prefix: true, fuzzy: 0.3 });
    const list = document.createElement(&apos;ol&apos;);
    results.slice(0, 10).forEach(result =&amp;gt; {
      const item = document.createElement(&apos;li&apos;);
      const link = document.createElement(&apos;a&apos;);
      link.setAttribute(&apos;href&apos;, result.pathname);
      link.appendChild(document.createTextNode(result.title));
      item.appendChild(link);
      list.append(item);
    });
    container.replaceChildren(list);
    input.after(container);
  } else {
    // `remove` is a no-op when the container is not in the DOM
    container.remove();
  }
};
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This is roughly the code used on this website, and appends a &lt;code&gt;container&lt;/code&gt; element
to the DOM as a sibling of the &lt;code&gt;input&lt;/code&gt; element. This way, the search results can
be rendered relative to this input field.&lt;/p&gt;
&lt;p&gt;The search results (with the &lt;code&gt;title&lt;/code&gt; and &lt;code&gt;pathname&lt;/code&gt; fields we stored in the
index before) are appended to the container element as an ordered list. Ordered,
since MiniSearch provides the results sorted by relevance score.&lt;/p&gt;
&lt;h2&gt;Final notes&lt;/h2&gt;
&lt;h3&gt;React&lt;/h3&gt;
&lt;p&gt;If you are using React, you might be interested in &lt;a href=&quot;https://github.com/lucaong/react-minisearch&quot;&gt;react-minisearch&lt;/a&gt;,
providing React integration for MiniSearch.&lt;/p&gt;
&lt;h3&gt;Index size&lt;/h3&gt;
&lt;p&gt;The search index is a relatively large static asset, as it includes both the
index and the data to show in the search results. Loading this file on page
load, as shown above, could degrade the performance of your website. It does not
block the main render thread as it uses a dynamic import, but for larger
websites this may impact overall performance. One way to mitigate this is to
only load the index when the user actually uses the search, for instance on the
&lt;code&gt;focus&lt;/code&gt; event of the input field. To give an idea of the file size: the index of
this website, with its first 11 articles, is currently 113kB uncompressed and
20kB gzipped. This is generally not an issue, but
definitely something to keep an eye on when a website is large or growing. After
the first load, the browser will cache the static search index on subsequent
page loads.&lt;/p&gt;
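&lt;p&gt;A minimal sketch of this lazy-loading idea, assuming the same index URL and
&lt;code&gt;MiniSearch&lt;/code&gt; global as before. A small &lt;code&gt;once&lt;/code&gt; helper guards against fetching
the index more than once:&lt;/p&gt;

```javascript
// once() runs the wrapped (expensive) loader a single time and
// caches its result, no matter how often it is invoked.
const once = fn => {
  let result;
  return () => (result ??= fn());
};

// Hypothetical wiring: load the index on the first `focus` only.
// const loadIndex = once(() =>
//   fetch('/search-index.json')
//     .then(response => response.text())
//     .then(json => MiniSearch.loadJSON(json, { fields: ['title', 'content'] }))
// );
// input.addEventListener('focus', loadIndex);

// Demonstrate the single-run behavior with a counter:
let calls = 0;
const load = once(() => ++calls);
load();
load();
console.log(calls); // 1
```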
&lt;h3&gt;Multiple search indices&lt;/h3&gt;
&lt;p&gt;Depending on the site contents, another interesting feature might be to create
multiple indices. This would be straightforward following the steps in this
article.&lt;/p&gt;
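&lt;p&gt;For instance, the documents could be grouped by type first, and a separate
index built per group. A hypothetical sketch (the &lt;code&gt;type&lt;/code&gt; field and file names
are assumptions):&lt;/p&gt;

```javascript
// Group items by the value of a given key.
const groupBy = (items, key) =>
  items.reduce((map, item) => {
    (map[item[key]] ??= []).push(item);
    return map;
  }, {});

// Hypothetical usage, one MiniSearch index per content type:
// for (const [type, docs] of Object.entries(groupBy(documents, 'type'))) {
//   const index = new MiniSearch({ fields: ['title', 'content'] });
//   index.addAll(docs);
//   await fs.writeFile(`search-index-${type}.json`, JSON.stringify(index));
// }

const groups = groupBy(
  [{ type: 'article' }, { type: 'scrap' }, { type: 'article' }],
  'type'
);
console.log(Object.keys(groups)); // [ 'article', 'scrap' ]
```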
</description><pubDate>Sat, 30 Apr 2022 00:00:00 GMT</pubDate><category>search</category><category>static</category><category>site</category><category>generator</category></item><item><title>Using Nx Affected in Azure Pipelines</title><link>https://webpro.nl/articles/using-nx-affected-in-azure-pipelines</link><guid isPermaLink="true">https://webpro.nl/articles/using-nx-affected-in-azure-pipelines</guid><description>&lt;h1&gt;Using Nx Affected in Azure Pipelines&lt;/h1&gt;
&lt;p&gt;When trying to combine the concept of &lt;a href=&quot;https://nx.dev/using-nx/affected&quot;&gt;affected Nx&lt;/a&gt; projects with building
and deploying them in &lt;a href=&quot;https://azure.microsoft.com/en-us/services/devops/pipelines/&quot;&gt;Azure Pipelines&lt;/a&gt;, there is no plugin or anything
readily available to do so. Since it wasn&apos;t trivial to find and compose all the
bits and pieces, I decided to write this down. Maybe it&apos;ll help you, or
maybe you can help me improve it.&lt;/p&gt;
&lt;h2&gt;One Step Further&lt;/h2&gt;
&lt;p&gt;By default, many solutions use the diff between &lt;code&gt;HEAD&lt;/code&gt; and &lt;code&gt;HEAD~1&lt;/code&gt; to calculate
the affected projects. Such as Nrwl&apos;s own &lt;a href=&quot;https://github.com/nrwl/nx-azure-build&quot;&gt;Example of setting up distributed
Azure build for Nx workspace&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Although this may work well, I think this isn&apos;t always optimal, mostly because
the latest run(s) may have failed, which Nx isn&apos;t aware of. This may require
manually re-running a pipeline, or it may take an unknown amount of time before
the container is rebuilt.&lt;/p&gt;
&lt;p&gt;However, Azure has an API to &lt;a href=&quot;https://docs.microsoft.com/en-us/cli/azure/pipelines/runs/tag&quot;&gt;set and list pipeline run tags&lt;/a&gt;. We can use the
Azure CLI to add a tag for each pipeline run having a successful Nx project
build.&lt;/p&gt;
&lt;p&gt;The main steps in this guide include:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Find the latest successful build and the corresponding SHA-1.&lt;/li&gt;
&lt;li&gt;Use this SHA-1 as the &lt;code&gt;--base&lt;/code&gt; for the &lt;code&gt;nx affected&lt;/code&gt; command.&lt;/li&gt;
&lt;li&gt;Store the affected Nx project names in output variables.&lt;/li&gt;
&lt;li&gt;Use these output variables to conditionally execute the corresponding jobs or
stages to build and deploy Nx projects.&lt;/li&gt;
&lt;li&gt;After a successful build, tag the current pipeline run (for step #1 in the
next run).&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;Find The Latest Successful Build&lt;/h2&gt;
&lt;p&gt;When the list of pipeline runs is filtered by the Nx project&apos;s tag and sorted by
time, we need only the latest result and we can further simplify the output by
returning only the SHA-1 (&lt;code&gt;sourceVersion&lt;/code&gt;) in the most concise TSV format:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;az pipelines runs list \
  --branch main \
  --pipeline-ids $(System.DefinitionId) \
  --tags &amp;quot;my-app&amp;quot; \
  --query-order FinishTimeDesc \
  --query &apos;[].[sourceVersion]&apos; \
  --top 1 \
  --out tsv
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This will return the SHA-1 associated with the latest pipeline run tagged with
&lt;code&gt;my-app&lt;/code&gt;. This is the run we are looking for, as we tag a run only after a
successful build. Now &lt;code&gt;nx affected&lt;/code&gt; can determine whether this Nx project is
currently affected, compared with the SHA-1 from which the latest successful
build of this project was made.&lt;/p&gt;
&lt;p&gt;Later we will see how to set this tag for a successful build.&lt;/p&gt;
&lt;h2&gt;Write It Down For Later&lt;/h2&gt;
&lt;p&gt;To set an output variable for use in a later stage in Azure pipelines, we need
to use the &lt;code&gt;task.setvariable&lt;/code&gt; logging command (Azure docs: &lt;a href=&quot;https://docs.microsoft.com/en-us/azure/devops/pipelines/process/set-variables-scripts&quot;&gt;Set variables in
scripts&lt;/a&gt;). This writes the value &lt;code&gt;AFFECTED&lt;/code&gt; to the output variable
&lt;code&gt;BUILD_MY_APP&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;echo &amp;quot;##vso[task.setvariable variable=BUILD_MY_APP;isOutput=true]AFFECTED&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Putting It Together&lt;/h2&gt;
&lt;p&gt;With the above ingredients, we can write a script to write the output variables.
Initially I wrote a &lt;a href=&quot;https://gist.github.com/webpro/ec2c5e1a198b9557f68cc119d1c904c5#file-is-affected-sh&quot;&gt;Bash script is-affected.sh&lt;/a&gt; as that made sense at the
time. Here&apos;s the gist:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;is-affected() {
  local SHA=$(az pipelines runs list --branch main --pipeline-ids $(System.DefinitionId) --tags &amp;quot;$2&amp;quot; --query-order FinishTimeDesc --query &apos;[].[sourceVersion]&apos; --top 1 --out tsv)
  local WRITE_VARIABLE=&amp;quot;##vso[task.setvariable variable=$3;isOutput=true]&amp;quot;;
  local AFFECTED=$(npx nx print-affected --type=${1} --select=projects --plain --base=$SHA --head=HEAD)
  if [[ &amp;quot;$AFFECTED&amp;quot; == *&amp;quot;$2&amp;quot;* ]]; then
    echo &amp;quot;${WRITE_VARIABLE}AFFECTED&amp;quot;
    echo &amp;quot;##[warning]$2 is affected (base: $SHA)&amp;quot;
  fi
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;As I think Bash scripts are not very robust and not easy to maintain, I ported
this to a Node.js script &lt;a href=&quot;https://gist.github.com/webpro/ec2c5e1a198b9557f68cc119d1c904c5#file-is-affected-js&quot;&gt;is-affected.js&lt;/a&gt; with JSDoc/TypeScript annotations.
The idea stays the same, and with both scripts the output looks like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-text&quot;&gt;##[warning]my-app is NOT affected (base: 62ed6e5d1dd73564a088be879a47634456a07676)
##[warning]container5 is affected (base: 62ed6e5d1dd73564a088be879a47634456a07676)
##[warning]some-lib was not previously tagged
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;When system diagnostics are enabled, the other &lt;code&gt;echo&lt;/code&gt; commands that
actually set the variables are also printed.&lt;/p&gt;
&lt;p&gt;To see this script in perspective, here&apos;s an example &amp;quot;Prepare&amp;quot; stage:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;stages:
  - stage: Prepare
    pool:
      vmImage: ubuntu-latest
    jobs:
      - job: Determine_Affected
        displayName: Determine Affected Nx Projects
        steps:
          - task: NodeTool@0
            displayName: Use Node.js v16.17.1
            inputs:
              versionSpec: 16.17.1

          - script: npm install nx
            displayName: Install Nx

          # Required for `az pipelines runs`
          - bash: |
              az config set extension.use_dynamic_install=yes_without_prompt
              az devops configure --defaults organization=$(System.TeamFoundationCollectionUri) project=&amp;quot;$(System.TeamProject)&amp;quot;
            displayName: Set default Azure DevOps organization and project

          - bash: |
              node is-affected.js --pipelineId $(System.DefinitionId) --app my-app --app container5 --lib some-lib
            name: AffectedNxProjects
            displayName: Determine affected Nx projects
            env:
              AZURE_DEVOPS_EXT_PAT: $(System.AccessToken)
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Conditional Builds&lt;/h2&gt;
&lt;p&gt;The &lt;code&gt;condition&lt;/code&gt; for the job in another (build) stage is based on the variable
that was written with the Bash script in an earlier stage. The pattern to read
it:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-text&quot;&gt;stageDependencies.[[STAGE]].[[JOB]].outputs[&apos;[[STEP_NAME]].BUILD_MY_APP&apos;]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Also see Azure docs to &lt;a href=&quot;https://docs.microsoft.com/en-us/azure/devops/pipelines/process/set-variables-scripts&quot;&gt;Set a variable for future stages&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;When this variable has a value of &lt;code&gt;AFFECTED&lt;/code&gt; or &lt;code&gt;TAG_NOT_FOUND&lt;/code&gt; the condition
will evaluate to &lt;code&gt;true&lt;/code&gt; and the job to build the Nx project will run. For
brevity, here is only the relevant part:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;jobs:
  - job: BUILD_MY_APP
    displayName: Build my-app
    condition: |
      in(stageDependencies.[[STAGE]].[[JOB]].outputs[&apos;[[STEP]].BUILD_MY_APP&apos;], &apos;AFFECTED&apos;, &apos;TAG_NOT_FOUND&apos;)
    steps:
      - task: ...build container...
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We can move the build job(s) to a template and reuse it with &lt;code&gt;nxProjectName&lt;/code&gt; as
a parameter. Here&apos;s an example of how to do that:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;- stage: Build
  dependsOn: Prepare
  jobs:
    - template: build-container.yaml
      parameters:
        nxProjectName: my-app
    - template: build-container.yaml
      parameters:
        nxProjectName: container5
    - template: build-container.yaml
      parameters:
        nxProjectName: some-lib
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And then in the &lt;code&gt;build-container.yaml&lt;/code&gt; template:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;parameters:
  - name: nxProjectName
    type: string

jobs:
  - job: BUILD_${{ upper(replace(parameters.nxProjectName, &apos;-&apos;, &apos;_&apos;)) }}
    displayName: Build ${{ parameters.nxProjectName }}
    condition: |
      in(stageDependencies.Prepare.Determine_Affected.outputs[&apos;AffectedNxProjects.BUILD_${{ upper(replace(parameters.nxProjectName, &apos;-&apos;, &apos;_&apos;)) }}&apos;], &apos;AFFECTED&apos;, &apos;TAG_NOT_FOUND&apos;)
    steps:
      - task: ...build container...
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Note that stages referred to in &lt;code&gt;stageDependencies&lt;/code&gt; must be part of the
&lt;code&gt;dependsOn&lt;/code&gt; option (&lt;code&gt;Prepare&lt;/code&gt; in this example). Otherwise, the value will
silently resolve to &lt;code&gt;Null&lt;/code&gt;.&lt;/p&gt;
&lt;h2&gt;Tag Successful Builds&lt;/h2&gt;
&lt;p&gt;This task should follow the build step(s) from the job above. We can tag
successful build runs with the name of the Nx project (&lt;code&gt;[nx-project-name]&lt;/code&gt;):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;az pipelines runs tag add --run-id $(Build.BuildId) --tags [nx-project-name]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now, in the next pipeline run, this command from the Bash script should find
the latest run tagged for this Nx project:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;az pipelines runs list --tags &amp;quot;[nx-project-name]&amp;quot; [...]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Again, you may need to set the &lt;code&gt;organization&lt;/code&gt; and &lt;code&gt;project&lt;/code&gt; first. Here&apos;s a
complete step:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yml&quot;&gt;- script: |
    az config set extension.use_dynamic_install=yes_without_prompt
    az devops configure --defaults organization=$(System.TeamFoundationCollectionUri) project=&amp;quot;$(System.TeamProject)&amp;quot;
    az pipelines runs tag add --run-id $(Build.BuildId) --tags ${{ parameters.nxProjectName }}
    echo &amp;quot;##vso[task.setvariable variable=DEPLOY_${{ upper(replace(parameters.nxProjectName, &apos;-&apos;, &apos;_&apos;)) }};isOutput=true]true&amp;quot;
    echo &amp;quot;##[warning]Tagged build for ${{ parameters.nxProjectName }} (BUILD_${{ upper(replace(parameters.nxProjectName, &apos;-&apos;, &apos;_&apos;)) }})&amp;quot;
  condition: succeeded()
  displayName: Tag successful build
  name: TagBuild
  env:
    AZURE_DEVOPS_EXT_PAT: $(System.AccessToken)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This writes the &lt;code&gt;DEPLOY_MY_APP&lt;/code&gt; variable. In a later (deployment) stage, the
same idea can be applied to read this variable and conditionally deploy the
build to any environment. An example chunk of the &amp;quot;production&amp;quot; stage:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;- stage: Production
  dependsOn: Build
  condition: succeeded(&apos;Build&apos;)
  jobs:
    - template: deploy-container.yaml
      parameters:
        nxProjectName: my-app
        environment: production
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And then in &lt;code&gt;deploy-container.yaml&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;parameters:
  - name: nxProjectName
    displayName: The Nx project key
    type: string
  - name: environment
    displayName: Environment
    type: string
    values:
      - test
      - staging
      - production

jobs:
  - deployment: |
      Deploy_${{ replace(parameters.nxProjectName, &apos;-&apos;, &apos;_&apos;) }}_${{ parameters.environment }}
    displayName:
      Deploy ${{ parameters.nxProjectName }} to ${{ parameters.environment }}
    environment: ${{ parameters.environment }}
    condition: |
      eq(stageDependencies.Build.BUILD_${{ upper(replace(parameters.nxProjectName, &apos;-&apos;, &apos;_&apos;))}}.outputs[&apos;TagBuild.DEPLOY_${{ upper(replace(parameters.nxProjectName, &apos;-&apos;, &apos;_&apos;)) }}&apos;], &apos;true&apos;)
    strategy:
      runOnce:
        deploy:
          steps:
            - ...deploy container?...
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Again, stages referred to in &lt;code&gt;stageDependencies&lt;/code&gt; must be part of the &lt;code&gt;dependsOn&lt;/code&gt;
option (&lt;code&gt;Build&lt;/code&gt; in this case).&lt;/p&gt;
&lt;p&gt;I&apos;ll update this article as I find improvements. Hopefully this guide has been
of some help or inspiration.&lt;/p&gt;
</description><pubDate>Thu, 17 Mar 2022 00:00:00 GMT</pubDate><category>nx</category><category>affected</category><category>azure</category><category>pipelines</category></item><item><title>How to build a great theme toggle switch</title><link>https://webpro.nl/articles/how-to-build-a-great-theme-toggle-switch</link><guid isPermaLink="true">https://webpro.nl/articles/how-to-build-a-great-theme-toggle-switch</guid><description>&lt;h1&gt;How to build a great theme toggle switch&lt;/h1&gt;
&lt;p&gt;Today, &amp;quot;dark mode&amp;quot; is everywhere. Personally I love to use it wherever I can.
This guide shows how to build your own accessible switch to toggle dark and
light mode on your own website, and offer your visitors their preference. This
website has a switch at the top right corner, which serves as an example.&lt;/p&gt;
&lt;p&gt;A great solution ticks the following boxes:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Using only CSS, automatically apply the default theme based on the operating
system (OS) setting.&lt;/li&gt;
&lt;li&gt;When JavaScript is enabled, progressively enhance by showing a switch to
override this default theme; the choice is stored for subsequent visits.&lt;/li&gt;
&lt;li&gt;Never show a flash of styles from one theme to another during page load.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Let&apos;s get going&lt;/h2&gt;
&lt;p&gt;So that&apos;s what we&apos;re after. Our solution depends on the &lt;code&gt;prefers-color-scheme&lt;/code&gt;
media query, reflecting the OS setting. Perhaps 10 steps sounds like a lot of
work, but I promise each is small and fun!&lt;/p&gt;
&lt;p&gt;If you want to quickly see the final solution, feel free to scroll to the end of
this page and find how to put it all together.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;The Foundation&lt;/li&gt;
&lt;li&gt;Prepare The Switch&lt;/li&gt;
&lt;li&gt;Add The Switch&lt;/li&gt;
&lt;li&gt;Activate The Switch&lt;/li&gt;
&lt;li&gt;Hide The Switch&lt;/li&gt;
&lt;li&gt;Remember The Switch&lt;/li&gt;
&lt;li&gt;Check The Switch&lt;/li&gt;
&lt;li&gt;Sync The Switch&lt;/li&gt;
&lt;li&gt;Extra: Swapping Stylesheets&lt;/li&gt;
&lt;li&gt;Putting It All Together&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;The foundation&lt;/h2&gt;
&lt;p&gt;The stylesheet should contain the theme-related variables and the media query to
override them for the other theme. This way, the stylesheet automatically
responds to changes in the OS setting. Let&apos;s use &lt;code&gt;dark&lt;/code&gt; as the default theme,
and override the variables when the OS setting is &lt;code&gt;light&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-css&quot;&gt;:root {
  --bg-color: rgb(42, 42, 42);
  --font-color: rgb(250, 250, 250);
}

@media (prefers-color-scheme: light) {
  :root {
    --bg-color: rgb(250, 250, 250);
    --font-color: rgb(82, 82, 82);
  }
}

html {
  background-color: var(--bg-color);
  color: var(--font-color);
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;With only CSS, our styles with media queries respond properly to the OS setting.
You can see this in action by opening this website and changing the OS setting.
In macOS, this can be found in &amp;quot;System Preferences&amp;quot; and then &amp;quot;General&amp;quot;:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;./os-preferences-mode.webp&quot; alt=&quot;macOS System Preferences&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Prepare the switch&lt;/h2&gt;
&lt;p&gt;We are going to need a switch for the user to override the default theme. First,
we need two classes, matching our themes (&lt;code&gt;.dark&lt;/code&gt; and &lt;code&gt;.light&lt;/code&gt;):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-css&quot;&gt;:root,
html.dark {
  --bg-color: rgb(42, 42, 42);
  --font-color: rgb(250, 250, 250);
}

@media (prefers-color-scheme: light) {
  :root {
    --bg-color: rgb(250, 250, 250);
    --font-color: rgb(82, 82, 82);
  }
}

html.light {
  --bg-color: rgb(250, 250, 250);
  --font-color: rgb(82, 82, 82);
}

html {
  background-color: var(--bg-color);
  color: var(--font-color);
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The styles for the &amp;quot;light&amp;quot; theme, unfortunately, are duplicated. This is
required to override a &amp;quot;dark&amp;quot; OS setting, while the user prefers &amp;quot;light&amp;quot; on this
website. To my knowledge, it is currently not possible to define these variables
only once (e.g. by combining the media query with the &lt;code&gt;html.light&lt;/code&gt; selector in
CSS).&lt;/p&gt;
&lt;h2&gt;Add the switch&lt;/h2&gt;
&lt;p&gt;The UI element to switch the theme could be as simple or as fancy as you please.
Let&apos;s take this website&apos;s switch as an example:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-html&quot;&gt;&amp;lt;label class=&amp;quot;theme-switch&amp;quot;&amp;gt;
  &amp;lt;button id=&amp;quot;theme-toggle&amp;quot; role=&amp;quot;switch&amp;quot; aria-checked=&amp;quot;false&amp;quot;&amp;gt;&amp;lt;/button&amp;gt;
&amp;lt;/label&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It could also be a checkbox as it has two states: checked or unchecked. Feel
free to borrow the markup and styles from this website&apos;s switch (a slight
variation of what&apos;s in this article), or find your own. There&apos;s plenty of great
looking switches out there.&lt;/p&gt;
&lt;h2&gt;Activate the switch&lt;/h2&gt;
&lt;p&gt;When the user switches the toggle, the theme should follow suit. Let&apos;s make this
happen by adding an event listener to our input element:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;const toggle = document.querySelector(&apos;#theme-toggle&apos;);
const classList = document.documentElement.classList;

toggle.addEventListener(&apos;click&apos;, () =&amp;gt; {
  const isChecked = toggle.getAttribute(&apos;aria-checked&apos;) !== &apos;true&apos;;
  const theme = isChecked ? &apos;light&apos; : &apos;dark&apos;;
  classList.remove(isChecked ? &apos;dark&apos; : &apos;light&apos;);
  classList.add(theme);
  toggle.setAttribute(&apos;aria-checked&apos;, isChecked);
});
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This will swap the &lt;code&gt;light&lt;/code&gt; and &lt;code&gt;dark&lt;/code&gt; classes on the &lt;code&gt;&amp;lt;html&amp;gt;&lt;/code&gt; tag when the user
clicks the switch. This will set the values of the corresponding CSS
variables, effectively applying the theme. Now we have a functional theme
switch! Yet there are a few more things we can do to make it even better.&lt;/p&gt;
&lt;h2&gt;Hide the switch&lt;/h2&gt;
&lt;p&gt;Without JavaScript, the switch can&apos;t do anything. So let&apos;s hide the switch, and
only show it when JavaScript is enabled:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-css&quot;&gt;.theme-switch {
  display: none;
}

.js .theme-switch {
  display: flex;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We can inform CSS that JavaScript is enabled with only one line of JavaScript:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;document.documentElement.classList.add(&apos;js&apos;);
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Remember the switch&lt;/h2&gt;
&lt;p&gt;Using the switch, visitors can override the default theme. To also remember this
setting for returning visitors, we can use JavaScript and &lt;code&gt;localStorage&lt;/code&gt;. Let&apos;s
write the theme value to &lt;code&gt;localStorage&lt;/code&gt; when the user toggles the switch:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;toggle.addEventListener(&apos;click&apos;, () =&amp;gt; {
  localStorage.setItem(&apos;theme&apos;, toggle.getAttribute(&apos;aria-checked&apos;) === &apos;true&apos; ? &apos;light&apos; : &apos;dark&apos;);
});
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;When the user comes back to visit your website later, we can read from
&lt;code&gt;localStorage&lt;/code&gt; and apply the theme by adding it as a class to the &lt;code&gt;&amp;lt;html&amp;gt;&lt;/code&gt;
element:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;const theme = localStorage.getItem(&apos;theme&apos;);
if (theme) document.documentElement.classList.add(theme);
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Ideally, we place this as an inline &lt;code&gt;&amp;lt;script&amp;gt;&lt;/code&gt; tag just before the stylesheets
containing the theme variables. This will make sure we will not see a flash of
styling changes when the theme in &lt;code&gt;localStorage&lt;/code&gt; does not match the user&apos;s OS
theme setting.&lt;/p&gt;
&lt;h2&gt;Check the switch&lt;/h2&gt;
&lt;p&gt;Now, we have a remaining issue. Since the toggle initially reports unchecked, it
may not match the OS setting or the stored theme. So we need to check the switch
to keep things in check:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;const prefersLight = matchMedia(&apos;(prefers-color-scheme: light)&apos;);
const classList = document.documentElement.classList;
if (classList.contains(&apos;light&apos;) || (prefersLight.matches &amp;&amp; !classList.contains(&apos;dark&apos;))) {
  document.querySelector(&apos;#theme-toggle&apos;).setAttribute(&apos;aria-checked&apos;, &apos;true&apos;);
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This script is ideally executed before showing the switch, so before adding the
&lt;code&gt;js&lt;/code&gt; class to the &lt;code&gt;&amp;lt;html&amp;gt;&lt;/code&gt; element.&lt;/p&gt;
&lt;h2&gt;Sync the switch&lt;/h2&gt;
&lt;p&gt;A fancy feature is to also sync the switch when the OS setting is changed. We
can listen to changes to the media query, and switch the toggle, unless the
theme was explicitly overridden and stored in &lt;code&gt;localStorage&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;const toggle = document.querySelector(&apos;#theme-toggle&apos;);
const preferDark = window.matchMedia(&apos;(prefers-color-scheme: dark)&apos;);
preferDark.addEventListener(&apos;change&apos;, event =&amp;gt; {
  if (!localStorage.getItem(&apos;theme&apos;)) {
    toggle.setAttribute(&apos;aria-checked&apos;, String(!event.matches));
  }
});
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can see this in action by changing the OS setting, and find the theme and
the switch have been toggled accordingly.&lt;/p&gt;
&lt;h2&gt;Extra: swapping stylesheets&lt;/h2&gt;
&lt;p&gt;In addition to applying theme styles based on media queries or classes, we can
also swap entire stylesheets to match the theme. This website swaps the
stylesheet related to syntax highlighting. There are multiple ways to achieve
this. We can extend the event listener from above, and find the related
stylesheet element to update its &lt;code&gt;href&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;const highlightSheet = document.querySelector(&apos;link[href*=hljs]&apos;);
const highlightSheets = {
  light: &apos;/css/hljs.github.min.css&apos;,
  dark: &apos;/css/hljs.github-dark-dimmed.min.css&apos;,
};

toggle.addEventListener(&apos;click&apos;, () =&amp;gt; {
  const theme = toggle.getAttribute(&apos;aria-checked&apos;) === &apos;true&apos; ? &apos;light&apos; : &apos;dark&apos;;
  if (highlightSheet) highlightSheet.href = highlightSheets[theme];
});
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Putting it all together&lt;/h2&gt;
&lt;p&gt;Let&apos;s put all the bits and pieces together.&lt;/p&gt;
&lt;p&gt;When we look at how the browser executes things, this is what we need:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Read the &lt;code&gt;theme&lt;/code&gt; from &lt;code&gt;localStorage&lt;/code&gt; and apply this class to &lt;code&gt;&amp;lt;html&amp;gt;&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Load the stylesheet containing the media query and CSS variables&lt;/li&gt;
&lt;li&gt;Render a hidden toggle switch&lt;/li&gt;
&lt;li&gt;Load the JavaScript containing:
&lt;ol&gt;
&lt;li&gt;Event handler for toggle switches&lt;/li&gt;
&lt;li&gt;Event handler for OS setting changes&lt;/li&gt;
&lt;li&gt;Toggle the switch to match the theme initially&lt;/li&gt;
&lt;li&gt;Show the switch by adding the &lt;code&gt;js&lt;/code&gt; class to &lt;code&gt;&amp;lt;html&amp;gt;&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;/ol&gt;
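&lt;p&gt;As a recap, the numbered steps above can be sketched as a single script. This
is a minimal sketch assuming the &lt;code&gt;#theme-toggle&lt;/code&gt; button markup from the top of
this article; the &lt;code&gt;resolveTheme&lt;/code&gt; helper is a name I made up to isolate the
decision logic, not something from the snippets above:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;// Decide which theme to apply: an explicit stored choice wins,
// otherwise follow the OS preference.
function resolveTheme(stored, prefersLight) {
  if (stored === 'light' || stored === 'dark') return stored;
  return prefersLight ? 'light' : 'dark';
}

// DOM wiring, guarded so the helper can also run outside a browser.
if (typeof document !== 'undefined') {
  const classList = document.documentElement.classList;
  const prefersLight = matchMedia('(prefers-color-scheme: light)');
  const theme = resolveTheme(localStorage.getItem('theme'), prefersLight.matches);

  classList.add(theme); // apply the theme class to the html element
  classList.add('js'); // reveal the switch (see &quot;Hide the switch&quot;)

  const toggle = document.querySelector('#theme-toggle');
  toggle.setAttribute('aria-checked', String(theme === 'light'));

  toggle.addEventListener('click', function () {
    const isChecked = toggle.getAttribute('aria-checked') !== 'true';
    const next = isChecked ? 'light' : 'dark';
    classList.remove(isChecked ? 'dark' : 'light');
    classList.add(next);
    toggle.setAttribute('aria-checked', String(isChecked));
    localStorage.setItem('theme', next);
  });
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In practice you would still split the &lt;code&gt;localStorage&lt;/code&gt; read into an inline script
before the stylesheets, as discussed earlier, to avoid a flash of the wrong theme.&lt;/p&gt;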
&lt;p&gt;Alternatively, as this page serves as a working example, we can &amp;quot;view source&amp;quot;
and find these three elements:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;High in the &lt;code&gt;&amp;lt;head&amp;gt;&lt;/code&gt; is an inline &lt;code&gt;&amp;lt;script&amp;gt;&lt;/code&gt; tag (to read and apply the
stored &lt;code&gt;theme&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;There is a &lt;code&gt;&amp;lt;link&amp;gt;&lt;/code&gt; to &lt;a href=&quot;../../../css/stylesheet.css&quot;&gt;stylesheet.css&lt;/a&gt; containing the styles.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;&amp;lt;body&amp;gt;&lt;/code&gt; has &lt;a href=&quot;../../../components/ThemeSwitch.astro&quot;&gt;theme-switch.js&lt;/a&gt; for the rest of the functionality.&lt;/li&gt;
&lt;/ol&gt;
</description><pubDate>Sat, 12 Mar 2022 00:00:00 GMT</pubDate><category>theme</category><category>toggle</category><category>switch</category><category>dark</category><category>light</category><category>mode</category></item><item><title>Migrate from getInitialProps to getServerSideProps in Next.js</title><link>https://webpro.nl/articles/migrate-from-getinitialprops-to-getserversideprops-in-nextjs</link><guid isPermaLink="true">https://webpro.nl/articles/migrate-from-getinitialprops-to-getserversideprops-in-nextjs</guid><description>&lt;h1&gt;Migrate from getInitialProps to getServerSideProps in Next.js&lt;/h1&gt;
&lt;p&gt;Pages in Next.js can use either &lt;code&gt;getInitialProps&lt;/code&gt; or &lt;code&gt;getServerSideProps&lt;/code&gt; to
fetch data. This article will not repeat their documentation, but instead lists
relevant differences and shows example code to migrate from one to the other.
This may provide some guidance when choosing or migrating between the two methods.&lt;/p&gt;
&lt;p&gt;Advantages of &lt;code&gt;getServerSideProps&lt;/code&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;No need to implement isomorphic code.&lt;/li&gt;
&lt;li&gt;Has &lt;code&gt;resolvedUrl&lt;/code&gt; (no need to fiddle with &lt;code&gt;req.originalUrl&lt;/code&gt; and &lt;code&gt;asPath&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;Types can be inferred using &lt;code&gt;InferGetServerSidePropsType&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Features like &lt;code&gt;notFound&lt;/code&gt; and &lt;code&gt;redirect&lt;/code&gt; are available.&lt;/li&gt;
&lt;li&gt;Preview Mode is available.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;On the other hand, it has a few downsides as well:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The &lt;code&gt;basePath&lt;/code&gt; is not available.&lt;/li&gt;
&lt;li&gt;Return value must be serializable to JSON.&lt;/li&gt;
&lt;li&gt;Requires and exposes the &lt;code&gt;/_next/data&lt;/code&gt; endpoint.&lt;/li&gt;
&lt;li&gt;This &lt;code&gt;/_next/data&lt;/code&gt; path includes a &lt;code&gt;.json&lt;/code&gt; extension, which may result in
unexpected caching (in another layer, like a CDN or API Gateway).&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;An example &lt;code&gt;Page.tsx&lt;/code&gt; using &lt;code&gt;getInitialProps&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;interface PageProps {
  content: unknown;
  statusCode: number;
  error?: Error;
}

const Page: NextPage&amp;lt;PageProps&amp;gt; = ({ content, statusCode, error }) =&amp;gt; {
  return &amp;lt;div&amp;gt;{content}&amp;lt;/div&amp;gt;;
};

Page.getInitialProps = async ({ req, res, asPath }): Promise&amp;lt;PageProps&amp;gt; =&amp;gt; {
  // @ts-ignore We can count on `originalUrl`
  const resolvedUrl = req ? req.originalUrl : asPath;

  const { content, statusCode, error } = await fetchContent({ resolvedUrl });

  return { content, statusCode, error };
};

export default Page;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The same &lt;code&gt;Page.tsx&lt;/code&gt; refactored to use &lt;code&gt;getServerSideProps&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;type PageProps = InferGetServerSidePropsType&amp;lt;typeof getServerSideProps&amp;gt;;

const Page: NextPage&amp;lt;PageProps&amp;gt; = ({ content, statusCode, hasError }) =&amp;gt; {
  return &amp;lt;div&amp;gt;{content}&amp;lt;/div&amp;gt;;
};

export const getServerSideProps: GetServerSideProps = async ({
  req,
  res,
  resolvedUrl,
}) =&amp;gt; {
  const { content, statusCode, error } = await fetchContent({ resolvedUrl });

  return {
    notFound: statusCode === 404,
    props: {
      content,
      statusCode,
      hasError: Boolean(error),
    },
  };
};

export default Page;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Notice the differences:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;No need to type &lt;code&gt;PageProps&lt;/code&gt; separately.&lt;/li&gt;
&lt;li&gt;Use &lt;code&gt;resolvedUrl&lt;/code&gt; directly (no isomorphic logic).&lt;/li&gt;
&lt;li&gt;Move the return value to &lt;code&gt;props&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;hasError&lt;/code&gt; is serializable (&lt;code&gt;error&lt;/code&gt; is not).&lt;/li&gt;
&lt;li&gt;&lt;code&gt;getServerSideProps&lt;/code&gt; is a separate export (not a member of &lt;code&gt;Page&lt;/code&gt;).&lt;/li&gt;
&lt;/ul&gt;
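&lt;p&gt;The &lt;code&gt;notFound&lt;/code&gt; flag in the example above is one of three result shapes
&lt;code&gt;getServerSideProps&lt;/code&gt; can return; &lt;code&gt;redirect&lt;/code&gt; is another. Here is a minimal
sketch in plain JavaScript (the &lt;code&gt;buildResult&lt;/code&gt; helper and its parameters are
made up for illustration; the return shapes themselves are the Next.js contract):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;function buildResult({ statusCode, location, props }) {
  // { notFound: true } makes Next.js render the 404 page
  if (statusCode === 404) return { notFound: true };
  // { redirect } sends an HTTP redirect to the given destination
  if (location) return { redirect: { destination: location, permanent: false } };
  // { props } renders the page with these props
  return { props };
}
&lt;/code&gt;&lt;/pre&gt;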
</description><pubDate>Thu, 10 Mar 2022 00:00:00 GMT</pubDate><category>nextjs</category><category>getInitialProps</category><category>getServerSideProps</category></item><item><title>Introducing the terminal to developers</title><link>https://webpro.nl/articles/introducing-the-terminal-to-developers</link><guid isPermaLink="true">https://webpro.nl/articles/introducing-the-terminal-to-developers</guid><description>&lt;h1&gt;Introducing the terminal to developers&lt;/h1&gt;
&lt;blockquote&gt;
&lt;p&gt;The article I wish I had read when I had to open the terminal for the first
time.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;Being a developer can be quite overwhelming these days. Getting familiar with a
codebase and the framework(s) and libraries it uses is not the whole story.
There is also a real demand for additional skills to get your job done, such as
using Git, package managers, and build tooling.&lt;/p&gt;
&lt;p&gt;Many of these tools are to be used in the terminal, which may be new and a bit
frightening. But don&apos;t worry, we&apos;ve all been there. This article provides an
overview, along with practical examples, to get you up to speed with the
terminal and some essential tooling.&lt;/p&gt;
&lt;h2&gt;Why do I need the terminal?&lt;/h2&gt;
&lt;p&gt;For certain tasks you can get away without using the terminal at all. For
example, there are great GUI tools for working with Git. However, getting
familiar with tools like Git from the terminal gives you more power and
flexibility. In the end, a GUI is a graphical shell in front of a command-line
tool. Often limited by screen real estate or a minimalistic design, a GUI may feature
only a subset of the underlying command-line interface. Working &amp;quot;closer to the
metal&amp;quot; can also help you get out of trouble in case a GUI is stuck or
messed up.&lt;/p&gt;
&lt;p&gt;Frameworks and libraries for programming languages (such as JavaScript or PHP)
come and go, but knowing your way around in the terminal is a skill you can use
always and everywhere.&lt;/p&gt;
&lt;h2&gt;What actually is a terminal?&lt;/h2&gt;
&lt;p&gt;A terminal is text-based, and serves as the command-line interface (CLI) you can
type your commands in. A shell takes these commands, and tells the operating
system to execute them.&lt;/p&gt;
&lt;p&gt;On macOS, the default terminal application is named &amp;quot;Terminal&amp;quot;, and the default
shell is Bash. You will also find a &amp;quot;terminal&amp;quot; in Linux distributions like
Debian and Ubuntu. On Windows 10, you can &lt;a href=&quot;https://learn.microsoft.com/en-us/windows/wsl/install&quot;&gt;install a Linux environment&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;For the rest of this article, I&apos;ll be assuming you are using Bash or a similar
shell.&lt;/p&gt;
&lt;h2&gt;Opening a terminal&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;On macOS, you can use Spotlight (&lt;code&gt;⌘+Space&lt;/code&gt;), and type &amp;quot;Terminal&amp;quot; to find and
open it.&lt;/li&gt;
&lt;li&gt;On Linux, search for the &amp;quot;terminal&amp;quot; app and open it. You can also try the
&lt;code&gt;Ctrl+Alt+T&lt;/code&gt; combo.&lt;/li&gt;
&lt;li&gt;On Windows 10, start the &amp;quot;Bash on Ubuntu on Windows&amp;quot; application.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The first thing you will always get is a prompt. It&apos;s what &amp;quot;prompts&amp;quot; you to type
a command. The prompt might include information such as the computer and/or user
name, and usually ends with a &lt;code&gt;$&lt;/code&gt;. Behind the &lt;code&gt;$&lt;/code&gt; is the cursor, where you can
type commands.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;Last login: Sun Sep 17 21:20:17 on ttys000
lars ~ $
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If your terminal has a white background color, but you prefer to have a black
background color instead: this article contains instructions on how to do this.&lt;/p&gt;
&lt;h2&gt;Finding your way&lt;/h2&gt;
&lt;p&gt;In an application like Finder or File Explorer you can navigate your files and
folders. In the terminal we can do the same with commands like &lt;code&gt;ls&lt;/code&gt; and &lt;code&gt;cd&lt;/code&gt; (to
&lt;strong&gt;l&lt;/strong&gt;i&lt;strong&gt;s&lt;/strong&gt;t files and &lt;strong&gt;c&lt;/strong&gt;hange &lt;strong&gt;d&lt;/strong&gt;irectories, respectively). For example:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;Last login: Sun Sep 17 21:20:17 on ttys000
$ cd Documents
$ ls
tutorial.pdf Keynote.pdf
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The last line is the default output of &lt;code&gt;ls&lt;/code&gt;. You can pass it extra arguments (or
&amp;quot;options&amp;quot;) to change its behavior. For instance, there is &lt;code&gt;-a&lt;/code&gt; to show all files
(including hidden ones), and &lt;code&gt;-l&lt;/code&gt; for the long listing format (one file per line, with details):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ ls -a -l
total 1160
drwx------   6 lars staff     204 Sep 12 19:52  .
drwxr-xr-x  64 lars staff    2176 Sep 12 22:29  ..
-rw-r--r--   7 lars staff      42 Sep 12 19:52  .DS_Store
-rw-r--r--   7 lars staff   81238 Sep 12 19:52  tutorial.pdf
-rw-r--r--   1 lars staff   11086 Jul  5 22:35  Keynote.pdf
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The arguments can also be combined into one: &lt;code&gt;ls -al&lt;/code&gt; means the same.&lt;/p&gt;
&lt;p&gt;Here are some essential commands to manage files and directories:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;ls     # List files and directories
cat    # Show file content
cd     # Change directory
mv     # Move or rename a file (or dir)
cp     # Copy file (or dir)
mkdir  # Make directory
rm     # Remove file (or dir)
pwd    # Output the current directory
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If you want more information about any of these commands, you could either use
the &lt;code&gt;--help&lt;/code&gt; argument, or perhaps google the command. Neither way is ideal:
the former is correct and complete, but often hard to read. The latter takes
more time and requires you to switch context between the terminal and a web
browser. Maybe we can improve on this. We all like readable documentation,
right?&lt;/p&gt;
&lt;h2&gt;Package managers&lt;/h2&gt;
&lt;p&gt;Let&apos;s use this as an opportunity to install a package manager, and then
install a package with it. In this case, one for simplified documentation.&lt;/p&gt;
&lt;p&gt;In a nutshell, package managers are an essential tool to install system-wide (or
&amp;quot;global&amp;quot;) software, or local dependencies within a project. You may have heard
of Homebrew, npm, Maven, RubyGems, or apt-get. In this article, we&apos;ll be looking
at Homebrew and npm.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;On macOS, Homebrew is &amp;quot;the missing package manager&amp;quot;. Find it at &lt;a href=&quot;https://brew.sh&quot;&gt;brew.sh&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Most Linux distributions (including Linux on Windows), come with a
pre-installed package manager, such as &lt;code&gt;apt-get&lt;/code&gt; or &lt;code&gt;yum&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Are you working with JavaScript? You need npm (the package manager for
JavaScript), which comes with &lt;a href=&quot;https://nodejs.org&quot;&gt;Node.js&lt;/a&gt;. You can &lt;a href=&quot;https://nodejs.org/en/download/package-manager/&quot;&gt;install Node.js via many
package managers&lt;/a&gt;. This will also make npm available on your system.&lt;/p&gt;
&lt;p&gt;Now we can install a package named &amp;quot;tldr&amp;quot; with either one of them. Using npm:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ npm install --global tldr
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Or, with Homebrew:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ brew install tldr
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now you can try for instance &lt;code&gt;tldr cd&lt;/code&gt; to see what I mean with readable
documentation:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ tldr cd

  cd

  Change the current working directory.
  More information: https://man.archlinux.org/man/cd.n.

  - Go to the given directory:
    cd path/to/directory

  - Go to home directory of current user:
    cd

  - Go up to the parent of the current directory:
    cd ..

  - Go to the previously chosen directory:
    cd -
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We&apos;ll dive into npm in a minute. However, if you have questions or got stuck
installing a package manager, then you might want to check out one of the
following links:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://www.howtogeek.com/117579/htg-explains-how-software-installation-package-managers-work-on-linux/&quot;&gt;How Software Installation &amp;amp; Package Managers Work On Linux&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://docs.brew.sh/FAQ.html&quot;&gt;Homebrew FAQ&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Git&lt;/h2&gt;
&lt;p&gt;Git is a program that many projects use for version control. Basically, it
allows people to work on code together while keeping track of changes: not
everyone should be editing the same files directly, and sometimes changes
need to be reverted.&lt;/p&gt;
&lt;p&gt;Collaborating with Git involves cloning (&amp;quot;downloading&amp;quot;) the code repository of a
project, making changes to fix bugs or develop new features, and then pushing
back (&amp;quot;uploading&amp;quot;) the changes. It is common to create a separate branch first,
so others can review the changes before they are merged back into the master
branch.&lt;/p&gt;
&lt;p&gt;Let&apos;s install Git (with a package manager, obviously)! Here are some options for
various operating systems:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;For macOS with Homebrew: &lt;code&gt;brew install git&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;For Linux (e.g. Debian, Ubuntu): &lt;code&gt;apt-get install git&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;For Windows: use either the &lt;a href=&quot;https://gitforwindows.org&quot;&gt;Git for Windows installer&lt;/a&gt; or &lt;a href=&quot;https://chocolatey.org&quot;&gt;Chocolatey&lt;/a&gt;:
&lt;code&gt;choco install git&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Below is an example session running a few commands:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ mkdir my-first-repository
$ cd my-first-repository/
$ git init
Initialized empty Git repository in /Users/lars/Projects/my-first-repository/.git/
$ git checkout -b cool-feature
Switched to a new branch &apos;cool-feature&apos;
$ echo &amp;quot;Some content&amp;quot; &amp;gt; somefile.txt
$ ls
somefile.txt
$ git add somefile.txt
$ git status
On branch cool-feature

No commits yet

Changes to be committed:
  (use &amp;quot;git rm --cached &amp;lt;file&amp;gt;...&amp;quot; to unstage)
	new file:   somefile.txt

$ git commit -m &amp;quot;Add some file&amp;quot;
[cool-feature (root-commit) b706c43] Add some file
 1 file changed, 1 insertion(+)
 create mode 100644 somefile.txt
$
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;For more information on working with Git, here are two excellent tutorials:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;http://rogerdudler.github.io/git-guide/&quot;&gt;git - the simple guide&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://product.hubspot.com/blog/git-and-github-tutorial-for-beginners&quot;&gt;An Intro to Git and GitHub for Beginners (Tutorial)&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;npm&lt;/h2&gt;
&lt;p&gt;npm is the package manager for JavaScript. Even though Node.js itself is a
server-side application runtime, and npm was originally built for Node modules,
npm proves to be a great dependency manager for JavaScript in general. Next to
JavaScript modules, the npm repository contains many tools built on top of
Node.js, such as Grunt, Webpack, Babel, UglifyJS, and many more.&lt;/p&gt;
&lt;p&gt;To manage JavaScript dependencies with npm, a project has a &lt;code&gt;package.json&lt;/code&gt; file.
It contains metadata about the project, and always includes a project name and
version. Here&apos;s an example:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
  &amp;quot;name&amp;quot;: &amp;quot;my-awesome-project&amp;quot;,
  &amp;quot;version&amp;quot;: &amp;quot;1.2.6&amp;quot;,
  &amp;quot;description&amp;quot;: &amp;quot;My awesome project!&amp;quot;,
  &amp;quot;license&amp;quot;: &amp;quot;MIT&amp;quot;,
  &amp;quot;repository&amp;quot;: &amp;quot;git@github.com:webpro/my-awesome-project.git&amp;quot;,
  &amp;quot;dependencies&amp;quot;: {
    &amp;quot;lodash&amp;quot;: &amp;quot;4.17.4&amp;quot;,
    &amp;quot;react&amp;quot;: &amp;quot;15.6.1&amp;quot;
  },
  &amp;quot;devDependencies&amp;quot;: {
    &amp;quot;mocha&amp;quot;: &amp;quot;3.5.3&amp;quot;
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Let&apos;s clone this project with Git, and install its dependencies:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ cd Projects/
$ git clone git@github.com:webpro/my-awesome-project.git
Cloning into &apos;my-awesome-project&apos;...
remote: Enumerating objects: 3, done.
remote: Counting objects: 100% (3/3), done.
remote: Compressing objects: 100% (2/2), done.
remote: Total 3 (delta 0), reused 3 (delta 0), pack-reused 0
Receiving objects: 100% (3/3), done.
$ cd my-awesome-project/
$ ls -a
.  ..  .git  package.json
$ npm install
added 56 packages, and audited 57 packages in 4s
[...]
$ ls node_modules/
asap		      fbjs		 js-tokens		 loose-envify	   react
balanced-match	      fs.realpath	 json3			 minimatch	   react-is
brace-expansion       glob		 lodash			 minimist	   safer-buffer
browser-stdout	      graceful-readlink  lodash._baseassign	 mkdirp		   setimmediate
commander	      growl		 lodash._basecopy	 mocha		   supports-color
concat-map	      has-flag		 lodash._basecreate	 ms		   ua-parser-js
core-js		      he		 lodash._getnative	 node-fetch	   whatwg-fetch
create-react-class    iconv-lite	 lodash._isiterateecall  object-assign	   wrappy
debug		      inflight		 lodash.create		 once
diff		      inherits		 lodash.isarguments	 path-is-absolute
encoding	      is-stream		 lodash.isarray		 promise
escape-string-regexp  isomorphic-fetch	 lodash.keys		 prop-types
$
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The command &lt;code&gt;npm install&lt;/code&gt; installed the dependencies &lt;code&gt;lodash&lt;/code&gt; and &lt;code&gt;react&lt;/code&gt; (and
their dependencies) locally in the &lt;code&gt;node_modules&lt;/code&gt; directory.&lt;/p&gt;
&lt;p&gt;Note that this installs the dependencies &lt;em&gt;locally&lt;/em&gt; to the project. This is very
different from installing packages &lt;em&gt;globally&lt;/em&gt;, as we did with the &lt;code&gt;tldr&lt;/code&gt; package
(using the &lt;code&gt;--global&lt;/code&gt; or &lt;code&gt;-g&lt;/code&gt; argument). In general, local packages are
dependencies for a single project and global packages are used from the command
line.&lt;/p&gt;
&lt;p&gt;If you want to learn more, I can recommend this article about npm dependencies
and scripts: &lt;a href=&quot;https://firstdoit.com/no-need-for-globals-using-npm-dependencies-in-npm-scripts-3dfb478908&quot;&gt;Using npm dependencies in npm scripts&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Tips &amp;amp; tricks&lt;/h2&gt;
&lt;p&gt;In this section I&apos;d like to present a few tips and tricks to make your life in
the terminal easier right away.&lt;/p&gt;
&lt;h3&gt;Change the terminal&apos;s theme&lt;/h3&gt;
&lt;p&gt;In macOS and some Linux distributions, the default background color of the
terminal application is white, and the window might be slightly transparent. Yet
many people prefer a black background in the terminal (like the examples in
this article).&lt;/p&gt;
&lt;p&gt;Here&apos;s how you can change this on macOS: from Terminal.app, go to Preferences
(&lt;code&gt;⌘,&lt;/code&gt;), go to Profiles, and select the desired profile (e.g. the &amp;quot;Pro&amp;quot; theme).
Make sure to press the &amp;quot;Default&amp;quot; button to store this for later sessions as
well. In the same screen, you can go into the &amp;quot;Background Color&amp;quot; modal and set
opacity to 100% to remove the transparency.&lt;/p&gt;
&lt;h3&gt;Shortcuts &amp;amp; commands you should know&lt;/h3&gt;
&lt;p&gt;The &lt;code&gt;^&lt;/code&gt; (caret) below represents the &amp;quot;Control&amp;quot; key:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style=&quot;text-align:center&quot;&gt;Shortcut&lt;/th&gt;
&lt;th style=&quot;text-align:left&quot;&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style=&quot;text-align:center&quot;&gt;&lt;code&gt;↑&lt;/code&gt;&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;Show the previous command&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&quot;text-align:center&quot;&gt;&lt;code&gt;↓&lt;/code&gt;&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;Show the next command&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&quot;text-align:center&quot;&gt;&lt;code&gt;^a&lt;/code&gt;&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;Move the cursor to the start of the line&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&quot;text-align:center&quot;&gt;&lt;code&gt;^e&lt;/code&gt;&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;Move the cursor to the end of the line&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&quot;text-align:center&quot;&gt;&lt;code&gt;⇥&lt;/code&gt;&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;(tab) Auto-complete commands, and directory and file names&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&quot;text-align:center&quot;&gt;&lt;code&gt;!!&lt;/code&gt;&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;Run the previous command again&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&quot;text-align:center&quot;&gt;&lt;code&gt;^l&lt;/code&gt;&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;Clear the screen (or &lt;code&gt;clear&lt;/code&gt;)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&quot;text-align:center&quot;&gt;&lt;code&gt;^c&lt;/code&gt;&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;Cancel the current process (if it hangs or becomes unusable)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&quot;text-align:center&quot;&gt;&lt;code&gt;^d&lt;/code&gt;&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;Exit the current terminal (or &lt;code&gt;exit&lt;/code&gt;)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h3&gt;Aliases and functions&lt;/h3&gt;
&lt;p&gt;Over time, you will see that you are using the same commands and parameters over
and over again. This is where aliases and functions come in. An alias can be
used as an abbreviation for a more complex command. For example:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;alias ll=&amp;quot;ls -lA --color&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now you can use &lt;code&gt;ll&lt;/code&gt; as an alias and it will execute the &lt;code&gt;ls&lt;/code&gt; command including
the extra arguments. You can also still add extra arguments to &lt;code&gt;ll&lt;/code&gt;. Use &lt;code&gt;alias&lt;/code&gt;
without arguments to get a list of all active aliases.&lt;/p&gt;
&lt;p&gt;Functions can contain logic, and a main difference from aliases is that you can
pass them arguments. Here&apos;s a trivial example:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;say_hi () {
  echo Hello, $1
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can invoke this function with &lt;code&gt;say_hi John&lt;/code&gt;, and it would print
&lt;code&gt;Hello, John&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ say_hi John
Hello, John
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This is a very short and superficial introduction to this topic. If you are
interested in learning more, please check out the following resources:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://www.digitalocean.com/community/tutorials/an-introduction-to-useful-bash-aliases-and-functions&quot;&gt;An Introduction to Useful Bash Aliases and Functions&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;http://scriptingosx.com/2017/05/configuring-bash-with-aliases-and-functions/&quot;&gt;Configuring bash with aliases and functions&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Wrapping up&lt;/h2&gt;
&lt;p&gt;After reading this article I hope you feel slightly more comfortable using the
terminal. I&apos;ve compiled a short list of great resources to point you in the
right direction to learn more. What&apos;s next for you?&lt;/p&gt;
&lt;p&gt;If you feel there&apos;s something important missing in this article, feel free to
let me know (in a comment here, or &lt;a href=&quot;https://bsky.app/profile/webpro.nl&quot;&gt;on Bluesky&lt;/a&gt;). Thanks for reading!&lt;/p&gt;
&lt;h2&gt;Further reading&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://www.learnenough.com/command-line-tutorial&quot;&gt;Learn Enough Command Line to Be Dangerous&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/jlevy/the-art-of-command-line#the-art-of-command-line&quot;&gt;The Art of Command Line&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;./getting-started-with-dotfiles/index.md&quot;&gt;Getting started with dotfiles&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
</description><pubDate>Mon, 18 Sep 2017 00:00:00 GMT</pubDate><category>terminal</category></item><item><title>Why and how I’m using SVG sprites over fonts for icons</title><link>https://webpro.nl/articles/why-and-how-im-using-svg-sprites-over-fonts-for-icons</link><guid isPermaLink="true">https://webpro.nl/articles/why-and-how-im-using-svg-sprites-over-fonts-for-icons</guid><description>&lt;h1&gt;Why and how I&apos;m using SVG sprites over fonts for icons&lt;/h1&gt;
&lt;p&gt;In a recent project, I&apos;ve been doing some research and testing to find the best
solution for icons and small images in a web application.&lt;/p&gt;
&lt;p&gt;The requirements included support for customization of both the background and
the font color of the application. Obviously, crisp images and performance are
important as well.&lt;/p&gt;
&lt;p&gt;After some experimenting and reading up on related articles, I came to the
conclusion I wanted to go with SVG sprites instead of an icon font. Some of the
main advantages of SVG over an icon font include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Slightly more control when styling SVG elements, since icon fonts can only be styled as text.&lt;/li&gt;
&lt;li&gt;Fonts might be less crisp due to anti-aliasing, or off by half a pixel.&lt;/li&gt;
&lt;li&gt;Less trickery to make it work cross-browser.&lt;/li&gt;
&lt;li&gt;It&apos;s easier to change SVG shapes.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;It&apos;s a bummer that, given the requirements, we couldn&apos;t use SVG sprites as
CSS background images: you can&apos;t change the fill color dynamically once the
image is set as a background.&lt;/p&gt;
&lt;p&gt;So we need to resort to inline SVG images. One way is to simply use inline
&lt;code&gt;&amp;lt;svg&amp;gt;&lt;/code&gt; elements in the HTML, but that means potentially duplicating quite a
few SVG images in the page. Fortunately, there is a great way to reuse shapes
from a single SVG file across the page!&lt;/p&gt;
&lt;p&gt;The single SVG sprite file (say, &amp;quot;defs.svg&amp;quot;) looks like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-xml&quot;&gt;&amp;lt;svg display=&amp;quot;none&amp;quot; width=&amp;quot;0&amp;quot; height=&amp;quot;0&amp;quot; version=&amp;quot;1.1&amp;quot; xmlns=&amp;quot;http://www.w3.org/2000/svg&amp;quot;&amp;gt;
    &amp;lt;defs&amp;gt;
        &amp;lt;symbol id=&amp;quot;icon-delete&amp;quot; viewBox=&amp;quot;0 0 1024 1024&amp;quot;&amp;gt;
            &amp;lt;title&amp;gt;delete&amp;lt;/title&amp;gt;
            &amp;lt;path class=&amp;quot;path1&amp;quot; d=&amp;quot;M810.667 273.707l...&amp;quot;&amp;gt;&amp;lt;/path&amp;gt;
        &amp;lt;/symbol&amp;gt;
        &amp;lt;symbol id=&amp;quot;icon-info&amp;quot; viewBox=&amp;quot;0 0 1024 1024&amp;quot;&amp;gt;
            &amp;lt;title&amp;gt;info&amp;lt;/title&amp;gt;
            &amp;lt;path class=&amp;quot;path1&amp;quot; d=&amp;quot;M448 304c0-26.4...&amp;quot;&amp;gt;&amp;lt;/path&amp;gt;
            &amp;lt;path class=&amp;quot;path2&amp;quot; d=&amp;quot;M640 768h-256v...&amp;quot;&amp;gt;&amp;lt;/path&amp;gt;
            &amp;lt;path class=&amp;quot;path3&amp;quot; d=&amp;quot;M512 0c-282.77...&amp;quot;&amp;gt;&amp;lt;/path&amp;gt;
        &amp;lt;/symbol&amp;gt;
        &amp;lt;symbol id=&amp;quot;icon-arrow-left&amp;quot; viewBox=&amp;quot;0 0 1024 1024&amp;quot;&amp;gt;
            &amp;lt;title&amp;gt;arrow-left&amp;lt;/title&amp;gt;
            &amp;lt;path class=&amp;quot;path1&amp;quot; d=&amp;quot;M1024 512c0-282.752...&amp;quot;&amp;gt;&amp;lt;/path&amp;gt;
        &amp;lt;/symbol&amp;gt;
    &amp;lt;/defs&amp;gt;
&amp;lt;/svg&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Note that I&apos;ve cut off some path definitions for brevity.&lt;/p&gt;
&lt;p&gt;You can use a service like the &lt;a href=&quot;https://icomoon.io/app&quot;&gt;IcoMoon App&lt;/a&gt; and/or create custom icons using
e.g. Illustrator. Then paste the SVG shapes (&lt;code&gt;&amp;lt;path&amp;gt;&lt;/code&gt;, &lt;code&gt;&amp;lt;polygon&amp;gt;&lt;/code&gt;, &lt;code&gt;&amp;lt;rect&amp;gt;&lt;/code&gt;,
&lt;code&gt;&amp;lt;circle&amp;gt;&lt;/code&gt;, etc.) into a &lt;code&gt;&amp;lt;symbol&amp;gt;&lt;/code&gt; as shown in this SVG sprite.&lt;/p&gt;
&lt;p&gt;Now in the HTML, use &lt;code&gt;&amp;lt;svg&amp;gt;&lt;/code&gt; images like so:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-xml&quot;&gt;&amp;lt;svg role=&amp;quot;img&amp;quot; title=&amp;quot;delete&amp;quot;&amp;gt;
    &amp;lt;use href=&amp;quot;defs.svg#icon-delete&amp;quot;&amp;gt;&amp;lt;/use&amp;gt;
&amp;lt;/svg&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This results in the browser downloading the file once, and using a cached
instance afterwards. We need only a little bit of styling as a basis:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-css&quot;&gt;svg {
  background-color: transparent;
  fill: currentColor;
  width: 24px;
  height: 24px;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now we are able to set font and background colors in any way we want!&lt;/p&gt;
&lt;p&gt;The good thing is that this works great in most browsers. We only need to inject
&lt;a href=&quot;https://github.com/jonathantneal/svg4everybody&quot;&gt;SVG for Everybody&lt;/a&gt; into our page to support Internet Explorer. I&apos;ve tried it
down to IE9, but there&apos;s even support for IE6–8. The script is only 1KB
minified, and leaves the other browsers unharmed.&lt;/p&gt;
&lt;p&gt;The maintenance process isn&apos;t perfect, as we need to manually edit the sprite
file, but I&apos;m still happy with it. It shouldn&apos;t be hard to write a script that
concatenates a bunch of SVG files into one sprite, though. See for example
&lt;a href=&quot;https://github.com/frexy/svg-sprite-generator&quot;&gt;svg-sprite-generator&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Happy styling!&lt;/p&gt;
&lt;p&gt;Resources:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://css-tricks.com/icon-fonts-vs-svg/&quot;&gt;https://css-tricks.com/icon-fonts-vs-svg/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://css-tricks.com/svg-use-external-source/&quot;&gt;https://css-tricks.com/svg-use-external-source/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;http://ianfeather.co.uk/ten-reasons-we-switched-from-an-icon-font-to-svg/&quot;&gt;http://ianfeather.co.uk/ten-reasons-we-switched-from-an-icon-font-to-svg/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/jonathantneal/svg4everybody&quot;&gt;https://github.com/jonathantneal/svg4everybody&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/frexy/svg-sprite-generator&quot;&gt;https://github.com/frexy/svg-sprite-generator&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
</description><pubDate>Tue, 24 Mar 2015 00:00:00 GMT</pubDate></item><item><title>Managing your dotfiles</title><link>https://webpro.nl/articles/managing-your-dotfiles</link><guid isPermaLink="true">https://webpro.nl/articles/managing-your-dotfiles</guid><description>&lt;h1&gt;Managing your dotfiles&lt;/h1&gt;
&lt;p&gt;Once you&apos;ve &lt;a href=&quot;./getting-started-with-dotfiles/index.md&quot;&gt;started out enjoying dotfiles&lt;/a&gt;, you may wonder about the best
way to organize and manage them. Do you want to keep it small and simple? Or do
you want to manage as many packages, applications and their settings as
possible? Or are you an administrator and need to orchestrate many systems?&lt;/p&gt;
&lt;p&gt;Here are some questions and pointers to consider during your dotfiles journey.&lt;/p&gt;
&lt;h2&gt;Where to &lt;strong&gt;store&lt;/strong&gt; dotfiles?&lt;/h2&gt;
&lt;p&gt;Many prefer to store them in a public repository, such as GitHub or Bitbucket.
This way, you make them accessible for others to get inspired or steal from.
Good repositories can also easily be forked and customized to fit your
particular needs. Alternatively, you can store your dotfiles in a private
repository or a personal cloud storage, such as Dropbox or Google Drive.&lt;/p&gt;
&lt;h2&gt;How to install dotfiles?&lt;/h2&gt;
&lt;p&gt;You can copy and sync your dotfiles to their designated location, or create
symlinks. Another option is to use one of the &lt;a href=&quot;https://github.com/webpro/awesome-dotfiles#tools&quot;&gt;many tools&lt;/a&gt; for this. Some
&lt;a href=&quot;https://github.com/webpro/awesome-dotfiles#dotfiles-repos&quot;&gt;frameworks&lt;/a&gt; have this built-in. Many repositories include an installation
script to ease this process.&lt;/p&gt;
&lt;h2&gt;Start from &lt;strong&gt;scratch or framework&lt;/strong&gt;?&lt;/h2&gt;
&lt;p&gt;You may want to be in full control and like to know exactly what&apos;s going on in
your system. Then you can start from scratch, and borrow and steal bits and
pieces you like from others. On the other hand, you can put your trust in a
community behind a large framework. This allows you to make a head start and
quickly have all the goodness installed (including many sensible default
settings).&lt;/p&gt;
&lt;h2&gt;Which shell should I use?&lt;/h2&gt;
&lt;p&gt;It&apos;s a good idea to make up your mind regarding the shell you want to use. Most
commands and packages run fine on common shells, but not all. The more you
customize, the more likely it is to run into compatibility issues. For example,
Bash and Zsh are popular &lt;a href=&quot;http://en.wikipedia.org/wiki/Unix_shell&quot;&gt;shells&lt;/a&gt;. My advice is to pick one, and make
yourself at home.&lt;/p&gt;
&lt;h2&gt;Need to &lt;strong&gt;orchestrate&lt;/strong&gt; multiple setups and/or machines?&lt;/h2&gt;
&lt;p&gt;Depending on the environment, you might be better off with a more robust
solution for configuration management, such as Puppet, Chef, or Ansible.&lt;/p&gt;
</description><pubDate>Wed, 22 Oct 2014 00:00:00 GMT</pubDate><category>dotfiles</category></item><item><title>Getting started with dotfiles</title><link>https://webpro.nl/articles/getting-started-with-dotfiles</link><guid isPermaLink="true">https://webpro.nl/articles/getting-started-with-dotfiles</guid><description>&lt;h1&gt;Getting started with dotfiles&lt;/h1&gt;
&lt;blockquote&gt;
&lt;p&gt;You&apos;re the king of your castle!&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;tl;dr: You can set up a new system using dotfiles and an installation script in
minutes. It&apos;s not hard to create your own repository, and you&apos;ll learn a ton
along the way. This is truly more about the journey than the destination!&lt;/p&gt;
&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;Dotfiles are used to customize your system. The &amp;quot;dotfiles&amp;quot; name is derived from
the configuration files in Unix-like systems that start with a dot (e.g.
&lt;code&gt;.bash_profile&lt;/code&gt; and &lt;code&gt;.gitconfig&lt;/code&gt;). For normal users, this indicates these are
not regular documents, and by default are hidden in directory listings. For
power users, however, they are a core part of the tool belt.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;./dotfiles.svg&quot; alt=&quot;dotfiles&quot;&gt;&lt;/p&gt;
&lt;p&gt;There is a large dotfiles community. And with it comes a large number of
repositories and registries containing many organized dotfiles, advanced
installation scripts, dotfile managers, and mashups of things people collect in
their own repositories.&lt;/p&gt;
&lt;p&gt;This article will try to give an introduction to dotfiles in general, by means
of creating a basic dotfiles repository with an installation script. It is only
meant to provide some inspiration, some pointers to what is possible and where
to look for when creating your own.&lt;/p&gt;
&lt;p&gt;Note that this writeup has a focus on Linux and macOS based systems.&lt;/p&gt;
&lt;h2&gt;Automate all the things!&lt;/h2&gt;
&lt;p&gt;Ideally, you don&apos;t store your personal files on your machine only. If you keep
your files on either local drives (e.g. a USB drive or NAS) or in the cloud
(Dropbox, Google Drive, iCloud, etc.), you save yourself from the risks of
machine theft, damage, or hardware failure.&lt;/p&gt;
&lt;p&gt;Now your documents, photos, etc. are kind of safe. Still, if you ever have to
set up a system, you need to install every single application again. I can&apos;t
count the times I needed to find the application&apos;s download page, download,
install. Next. Next. Again. You forgot one. One more. And I did not even mention
the plethora of system preferences and other configurations, which I usually
can&apos;t remember when I need them. Again, I need to search.&lt;/p&gt;
&lt;p&gt;So, how awesome is it that we can automate all this? You may not realize it, but
most system tools, applications and settings can be installed in an automated
fashion. I don&apos;t know about you, but this is like music to my ears!&lt;/p&gt;
&lt;p&gt;Today, I could literally throw my laptop out of the window, buy a new one, and
be up and running in a matter of minutes (not hours!).&lt;/p&gt;
&lt;p&gt;Without breaking a sweat (apart from the $$$).&lt;/p&gt;
&lt;h2&gt;Getting started&lt;/h2&gt;
&lt;p&gt;It&apos;s pretty simple to get started. You need to organize your dotfiles in some
directory; this could be practically anywhere, even on a USB drive. Since
version control is invaluable, a hosted git repository such as GitHub is a great
option to store your dotfiles.&lt;/p&gt;
&lt;h2&gt;An example dotfiles repository&lt;/h2&gt;
&lt;p&gt;For this example, I&apos;m just going to use a subset of &lt;a href=&quot;https://github.com/webpro/dotfiles&quot;&gt;my own dotfiles repo&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;Structure&lt;/h3&gt;
&lt;p&gt;Below is the structure of my dotfiles repo. It&apos;s also what we&apos;ll use in our
walk-through below.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;.
├── git
│ ├── .gitconfig
│ └── .gitignore_global
├── install.sh
├── osxdefaults.sh
├── runcom
│ ├── .bash_profile
│ └── .inputrc
└── system
  ├── .alias
  ├── .env
  ├── .function
  ├── .path
  └── .prompt
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;The dotfiles&lt;/h3&gt;
&lt;p&gt;We&apos;ll be taking a look at the following example dotfiles:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;.bash_profile
.inputrc
.alias
.function
.env
.prompt
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Startup script&lt;/h3&gt;
&lt;p&gt;In a Bash shell, the &lt;code&gt;.bash_profile&lt;/code&gt; file (or &lt;code&gt;.profile&lt;/code&gt;) in your home directory is loaded
first. What to put in the &lt;code&gt;.bash_profile&lt;/code&gt; and other dotfiles is truly worth a
book alone, but we&apos;re going to give it a quick shot here anyway. I like to use a
small &lt;code&gt;.bash_profile&lt;/code&gt; that links to several others that have a dedicated
purpose, i.e. one file for the aliases, one for the functions, etc. Here&apos;s an
example of how you can include (or actually &amp;quot;source&amp;quot; or execute) all files in a
folder:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;for DOTFILE in `find &amp;quot;$HOME/.dotfiles/system&amp;quot;`
do
    [ -f &amp;quot;$DOTFILE&amp;quot; ] &amp;amp;&amp;amp; source &amp;quot;$DOTFILE&amp;quot;
done
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Full examples include &lt;a href=&quot;https://github.com/webpro/dotfiles/blob/master/runcom/.bash_profile&quot;&gt;my own &lt;code&gt;.bash_profile&lt;/code&gt;&lt;/a&gt;, &lt;a href=&quot;https://github.com/mathiasbynens/dotfiles/blob/main/.bash_profile&quot;&gt;Mathias&apos;s
&lt;code&gt;.bash_profile&lt;/code&gt;&lt;/a&gt;. Some people like to put most of their startup configuration
in one file. This is perfectly fine, as long as you keep it sane and dense.&lt;/p&gt;
&lt;p&gt;If you want to dive into startup scripts a bit more, Peter Ward explains about
&lt;a href=&quot;https://blog.flowblok.id.au/2013-02/shell-startup-scripts.html&quot;&gt;Shell startup scripts&lt;/a&gt;, and here&apos;s another about &lt;a href=&quot;https://shreevatsa.wordpress.com/2008/03/30/zshbash-startup-files-loading-order-bashrc-zshrc-etc/&quot;&gt;startup script loading
order&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;Keybindings&lt;/h3&gt;
&lt;p&gt;The behavior of line input editing and keybindings is stored in a &lt;code&gt;.inputrc&lt;/code&gt;
file. Here&apos;s an excerpt of my own:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;set completion-ignore-case on
# List all matches in case multiple completions are possible
set show-all-if-ambiguous on
# Flip through autocompletion matches with Shift-Tab.
&amp;quot;\e[Z&amp;quot;: menu-complete
# Filtered history search
&amp;quot;\e[A&amp;quot;: history-search-backward
&amp;quot;\e[B&amp;quot;: history-search-forward
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Full example: &lt;a href=&quot;https://github.com/webpro/dotfiles/blob/master/runcom/.inputrc&quot;&gt;my .inputrc&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;Aliases&lt;/h3&gt;
&lt;p&gt;Aliases allow you to define shortcuts for commands, to add default arguments,
and/or to abbreviate longer one-liners. Here are some examples:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;alias l=&amp;quot;ls -la&amp;quot;       # List in long format, include dotfiles
alias ld=&amp;quot;ls -ld */&amp;quot;   # List in long format, only directories
alias ..=&amp;quot;cd ..&amp;quot;
alias ...=&amp;quot;cd ../..&amp;quot;
alias ....=&amp;quot;cd ../../..&amp;quot;

# Recursively remove .DS_Store files
alias cleanupds=&amp;quot;find . -type f -name &apos;*.DS_Store&apos; -ls -delete&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Full examples: &lt;a href=&quot;https://github.com/webpro/dotfiles/blob/master/system/.alias&quot;&gt;my .alias&lt;/a&gt;, &lt;a href=&quot;https://github.com/mathiasbynens/dotfiles/blob/main/.aliases&quot;&gt;Mathias&apos;s .aliases&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;Functions&lt;/h3&gt;
&lt;p&gt;Commands that are too complex for an alias (and perhaps too small for a
stand-alone script) can be defined in a function. Functions can take arguments,
making them more powerful.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Create a new directory and enter it
function mk() {
  mkdir -p &amp;quot;$@&amp;quot; &amp;amp;&amp;amp; cd &amp;quot;$1&amp;quot;
}
# Open man page as PDF
function manpdf() {
  man -t &amp;quot;${1}&amp;quot; | open -f -a /Applications/Preview.app/
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Full example: &lt;a href=&quot;https://github.com/mathiasbynens/dotfiles/blob/main/.functions&quot;&gt;Mathias&apos;s .functions&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;Environment variables&lt;/h3&gt;
&lt;p&gt;Environment variables can go in another dotfile:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;export PATH=&amp;quot;/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:$DOTFILES_DIR/bin&amp;quot;
export EDITOR=&amp;quot;subl -w&amp;quot;
export CLICOLOR=1
export LSCOLORS=gxfxcxdxbxegedabagacad
# Tell grep to highlight matches
export GREP_OPTIONS=&apos;--color=auto&apos;
# Case-insensitive globbing (used in pathname expansion)
shopt -s nocaseglob
# Autocorrect typos in path names when using `cd`
shopt -s cdspell
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Full example: &lt;a href=&quot;https://github.com/webpro/dotfiles/blob/master/system/.env&quot;&gt;my .env&lt;/a&gt; &lt;a href=&quot;https://github.com/mathiasbynens/dotfiles/blob/main/.exports&quot;&gt;Mathias&apos;s .exports&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;Prompt&lt;/h3&gt;
&lt;p&gt;A custom prompt can be convenient. You could, for example, show where you are in
the directory tree, and/or which git branch you&apos;re currently working with.
There&apos;s plenty of options here, but personally I&apos;d like to keep this a bit easy
on the eyes. Here&apos;s my prompt:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;lars ~/Projects/blog main ❯
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Examples: &lt;a href=&quot;https://github.com/webpro/dotfiles/blob/master/system/.prompt&quot;&gt;my .prompt&lt;/a&gt;, &lt;a href=&quot;https://wiki.archlinux.org/title/Bash/Prompt_customization&quot;&gt;Color Bash Prompt&lt;/a&gt;, &lt;a href=&quot;https://twolfson.com/2013-08-15-sexy-bash-prompt&quot;&gt;Sexy Bash Prompt&lt;/a&gt;,
&lt;a href=&quot;https://www.digitalocean.com/community/tutorials/how-to-customize-your-bash-prompt-on-a-linux-vps&quot;&gt;How to Customize your Bash Prompt&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;Other dotfiles&lt;/h3&gt;
&lt;p&gt;Many packages store their settings in a dotfile, e.g.:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;.gitconfig&lt;/code&gt; for Git&lt;/li&gt;
&lt;li&gt;&lt;code&gt;.vimrc&lt;/code&gt; for Vim&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Because these are basically simple text files, they are perfect to store in your
dotfiles repo!&lt;/p&gt;
&lt;h3&gt;Installing the dotfiles&lt;/h3&gt;
&lt;p&gt;To &amp;quot;activate&amp;quot; the dotfiles, you can either copy or symlink them from the home
directory. Otherwise they&apos;re just sitting there being useless.&lt;/p&gt;
&lt;p&gt;Beware: you probably already have a &lt;code&gt;.bash_profile&lt;/code&gt; and &lt;code&gt;.gitconfig&lt;/code&gt; in your home
folder, so please be careful here. With great power comes great responsibility.
It&apos;s probably best to back up important files before moving them around.&lt;/p&gt;
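&lt;p&gt;A minimal backup sketch (not from the article; the &lt;code&gt;backup_dotfile&lt;/code&gt; helper name is made up):&lt;/p&gt;

```shell
# Hypothetical helper: move an existing file aside before symlinking over it.
backup_dotfile() {
  if [ -f "$1" ]; then
    mv -v "$1" "$1.backup"
  fi
}

# Usage, e.g. before creating the symlinks:
#   backup_dotfile ~/.bash_profile
```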
&lt;p&gt;Let&apos;s assume you have the relevant dotfiles together in &lt;code&gt;~/.dotfiles&lt;/code&gt;. You can
create a symlink from here to the directory where they are expected (usually
your home directory):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;ln -sv ~/.dotfiles/runcom/.bash_profile ~
ln -sv ~/.dotfiles/runcom/.inputrc ~
ln -sv ~/.dotfiles/git/.gitconfig ~
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We already have the core of a dotfiles setup.&lt;/p&gt;
&lt;h3&gt;Installation script&lt;/h3&gt;
&lt;p&gt;You may want to have an installation script to automate symlinking the dotfiles
in the repo to your home directory. But there&apos;s more we can put in a script that
we run once to install a new system. See this &lt;a href=&quot;https://github.com/webpro/dotfiles/blob/master/Makefile&quot;&gt;Makefile&lt;/a&gt; for an example.
Also make sure to check out other people&apos;s scripts for more ideas and
inspiration.&lt;/p&gt;
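&lt;p&gt;As a starting point, here is a bare-bones sketch (the &lt;code&gt;install_dotfiles&lt;/code&gt; function is made up, and it assumes the repository layout shown earlier):&lt;/p&gt;

```shell
# Hypothetical sketch: symlink dotfiles from a repo checkout into $HOME.
# Note: -f force-replaces existing files, so back them up first!
install_dotfiles() {
  dotfiles_dir="${1:-$HOME/.dotfiles}"
  for file in "$dotfiles_dir/runcom/.bash_profile" \
              "$dotfiles_dir/runcom/.inputrc" \
              "$dotfiles_dir/git/.gitconfig"; do
    ln -sfv "$file" "$HOME"
  done
}

# Usage:
#   install_dotfiles ~/.dotfiles
```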
&lt;p&gt;To install the dotfiles on a new system, simply clone the repo:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ git clone https://github.com/webpro/dotfiles.git
$ cd dotfiles
$
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Then do the symlinking (either manually or with a script), et voilà! Now I would
like to show you some more neat things you can do in your dotfiles.&lt;/p&gt;
&lt;h2&gt;Homebrew and Homebrew Cask&lt;/h2&gt;
&lt;p&gt;Let&apos;s install my favourite combo for package management in macOS, &lt;a href=&quot;https://brew.sh&quot;&gt;Homebrew&lt;/a&gt;
and &lt;a href=&quot;https://github.com/Homebrew/homebrew-cask&quot;&gt;Homebrew Cask&lt;/a&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ /bin/bash -c &amp;quot;$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This opens up a giant repository of system tools you can install from the
command line. Here&apos;s a short sample:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ brew install node
$ brew install git
$ brew install wget
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And how about macOS applications? Thanks to Homebrew Cask we have the power to
install GUI applications in macOS from the command line:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ brew install --cask atom
$ brew install --cask dropbox
$ brew install --cask firefox
$ brew install --cask google-chrome
$ brew install --cask spotify
$ brew install --cask sublime-text3
$ brew install --cask virtualbox
$ brew install --cask vlc
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Please note that you can install multiple applications with a single command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ brew install --cask alfred dash flux mou
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;macOS defaults&lt;/h2&gt;
&lt;p&gt;Many, many macOS settings can be set from the command line. Here&apos;s just a small
sample to get an idea:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Finder: show hidden files by default
defaults write com.apple.finder AppleShowAllFiles -bool true
# Automatically hide and show the Dock
defaults write com.apple.dock autohide -bool true
# Save screenshots to the desktop
defaults write com.apple.screencapture location -string &amp;quot;$HOME/Desktop&amp;quot;
# Save screenshots in PNG format (other options: BMP, GIF, JPG, PDF, TIFF)
defaults write com.apple.screencapture type -string &amp;quot;png&amp;quot;
# Display full POSIX path as Finder window title
defaults write com.apple.finder _FXShowPosixPathInTitle -bool true
# Disable the sound effects on boot
sudo nvram SystemAudioVolume=&amp;quot; &amp;quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We must credit &lt;a href=&quot;https://mathiasbynens.be&quot;&gt;Mathias Bynens&lt;/a&gt; here for creating and maintaining an awesome
collection of macOS defaults in &lt;a href=&quot;https://github.com/mathiasbynens/dotfiles&quot;&gt;his dotfiles&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;To apply the macOS defaults you&apos;ve stored in e.g. &lt;code&gt;osxdefaults.sh&lt;/code&gt; (as in the repo structure above):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ source osxdefaults.sh
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This line is a perfect candidate to include in your installation script.&lt;/p&gt;
&lt;h2&gt;Updating your system&lt;/h2&gt;
&lt;p&gt;It&apos;s fine to run the installer script again, e.g. to fix some symlinks or update
packages (it should be idempotent). But it&apos;s better and faster to run a couple
of update commands separately. Here are some example commands to put in an alias
or function to update macOS, Homebrew, npm, and Ruby packages:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Update App Store apps
sudo softwareupdate -i -a

# Update Homebrew (Cask) &amp;amp; packages
brew update
brew upgrade

# Update npm &amp;amp; packages
npm install npm -g
npm update -g

# Update Ruby &amp;amp; gems
sudo gem update --system
sudo gem update
&lt;/code&gt;&lt;/pre&gt;
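&lt;p&gt;For example, the commands above could be wrapped in a single function (the &lt;code&gt;update_all&lt;/code&gt; name is made up):&lt;/p&gt;

```shell
# Hypothetical wrapper: run every update step in sequence.
update_all() {
  sudo softwareupdate -i -a   # App Store apps
  brew update
  brew upgrade                # Homebrew packages
  npm install npm -g
  npm update -g               # npm itself plus global packages
  sudo gem update --system
  sudo gem update             # Ruby and gems
}
```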
&lt;h2&gt;Next steps&lt;/h2&gt;
&lt;p&gt;I have always enjoyed wading through the various existing dotfiles repos and
finding real gems out there. Sometimes it takes real effort to make something
work the way you want it to, but eventually it makes your dotfiles truly yours.&lt;/p&gt;
&lt;p&gt;You might have missed tools like Zsh, Vim, and many more. My apologies for
that, but otherwise you would never reach the end of this article.&lt;/p&gt;
&lt;p&gt;In any case, there are plenty of great resources and dotfiles covering these as
well. My curated &lt;a href=&quot;https://github.com/webpro/awesome-dotfiles&quot;&gt;awesome-dotfiles&lt;/a&gt; list might be a good start.&lt;/p&gt;
&lt;p&gt;If you have nice ideas to share or want to collaborate, feel free to &lt;a href=&quot;https://bsky.app/profile/webpro.nl&quot;&gt;send me a
message&lt;/a&gt; or &lt;a href=&quot;https://github.com/webpro/dotfiles&quot;&gt;open a PR&lt;/a&gt;!&lt;/p&gt;
</description><pubDate>Wed, 16 Jul 2014 00:00:00 GMT</pubDate><category>dotfiles</category></item><item><title>Getting gulpy</title><link>https://webpro.nl/articles/getting-gulpy</link><guid isPermaLink="true">https://webpro.nl/articles/getting-gulpy</guid><description>&lt;h1&gt;Getting gulpy&lt;/h1&gt;
&lt;h2&gt;Advanced tips for using gulp.js&lt;/h2&gt;
&lt;p&gt;After getting excited about &lt;a href=&quot;http://gulpjs.com&quot;&gt;gulp.js&lt;/a&gt;, at some point you need more than the
shiny but basic examples. This post discusses some common pitfalls when using
gulp.js, plugins and streams in a more advanced and custom way.&lt;/p&gt;
&lt;h2&gt;Basic tasks&lt;/h2&gt;
&lt;p&gt;In a basic setup, gulp has a nice syntax to use streams and plugins to transform
your source files:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;gulp.task(&apos;scripts&apos;, function () {
  return gulp
    .src(&apos;./src/**/*.js&apos;)
    .pipe(uglify())
    .pipe(concat(&apos;all.min.js&apos;))
    .pipe(gulp.dest(&apos;build/&apos;));
});
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This works just fine in many cases, but once you need something more tailored,
you may soon face tricky situations. This post addresses some of them.&lt;/p&gt;
&lt;h2&gt;Incompatible streams?&lt;/h2&gt;
&lt;p&gt;When using gulp you may have run into the issue of &amp;quot;incompatible streams&amp;quot;. This
mostly has to do with the difference between regular streams and vinyl file
objects, and with gulp plugins that use libraries supporting only buffers (not
streams).&lt;/p&gt;
&lt;p&gt;For example, you can&apos;t pipe a regular Node stream directly to gulp and/or gulp
plugins. Let&apos;s take a read stream, transform the contents using &lt;a href=&quot;https://www.npmjs.org/package/gulp-uglify&quot;&gt;gulp-uglify&lt;/a&gt;
and &lt;a href=&quot;https://www.npmjs.org/package/gulp-rename&quot;&gt;gulp-rename&lt;/a&gt;, and finally write the result to disk with &lt;code&gt;gulp.dest()&lt;/code&gt;.
Consider this (erroneous) example:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;var uglify = require(&apos;gulp-uglify&apos;);
var rename = require(&apos;gulp-rename&apos;);

gulp.task(&apos;bundle&apos;, function () {
  return fs
    .createReadStream(&apos;app.js&apos;)
    .pipe(uglify())
    .pipe(rename(&apos;bundle.min.js&apos;))
    .pipe(gulp.dest(&apos;dist/&apos;));
});
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Why can&apos;t we pipe a read stream to a gulp plugin? Gulp is the &lt;em&gt;streaming&lt;/em&gt; build
system after all, right? Yes, but the example above ignores the fact that gulp
plugins expect Vinyl file objects. You can&apos;t just pipe a read stream to a
function (plugin) that expects vinyl file object(s).&lt;/p&gt;
&lt;h2&gt;The vinyl file object&lt;/h2&gt;
&lt;p&gt;Gulp uses &lt;a href=&quot;https://github.com/wearefractal/vinyl-fs&quot;&gt;vinyl-fs&lt;/a&gt;, from which it inherits the &lt;code&gt;gulp.src()&lt;/code&gt; and
&lt;code&gt;gulp.dest()&lt;/code&gt; methods. Vinyl-fs uses the &lt;a href=&quot;https://github.com/wearefractal/vinyl&quot;&gt;vinyl&lt;/a&gt; file object, its &amp;quot;virtual
file format&amp;quot;. If we want to use gulp and/or gulp plugins with a regular read
stream, we need to convert the read stream to vinyl first.&lt;/p&gt;
&lt;p&gt;A great option is to use &lt;a href=&quot;https://www.npmjs.org/package/vinyl-source-stream&quot;&gt;vinyl-source-stream&lt;/a&gt;, which does exactly that:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;var source = require(&apos;vinyl-source-stream&apos;);
var marked = require(&apos;gulp-marked&apos;);

fs.createReadStream(&apos;README.md&apos;)
  .pipe(source(&apos;README.md&apos;))
  .pipe(marked())
  .pipe(gulp.dest(&apos;dist/&apos;));
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The next example starts with a &lt;a href=&quot;https://browserify.org&quot;&gt;Browserified&lt;/a&gt; bundle and eventually converts
this to a vinyl stream.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;var browserify = require(&apos;browserify&apos;);
var uglify = require(&apos;gulp-uglify&apos;);
var source = require(&apos;vinyl-source-stream&apos;);

gulp.task(&apos;bundle&apos;, function () {
  return browserify(&apos;./src/app.js&apos;)
    .bundle()
    .pipe(source(&apos;bundle.min.js&apos;))
    .pipe(uglify())
    .pipe(gulp.dest(&apos;dist/&apos;));
});
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Great. Note that we don&apos;t need to use gulp-rename anymore, since
vinyl-source-stream creates a vinyl file instance with the specified filename
(which gulp.dest will use to write the bundle).&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;gulp.dest&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;This gulp method creates a write stream, and is really convenient. It reuses the
file names from the read stream, and creates directories (using &lt;a href=&quot;https://www.npmjs.org/package/mkdirp&quot;&gt;mkdirp&lt;/a&gt;) as
necessary. After writing, you can continue piping the stream (e.g. to also gzip
the data and write the result to other files).&lt;/p&gt;
&lt;h2&gt;Streams and buffers&lt;/h2&gt;
&lt;p&gt;Since you&apos;re interested in using gulp, this post simply assumes you have some
basic knowledge of streams. Vinyl works with virtual files containing either a
buffer or a stream (or &lt;code&gt;null&lt;/code&gt;). With a regular read stream you can listen to
emitted chunks of data:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;fs.createReadStream(&apos;/usr/share/dict/words&apos;).on(&apos;data&apos;, function(chunk) {
    console.log(&apos;Read %d bytes of data&apos;, chunk.length);
});

&amp;gt; Read 65536 bytes of data
&amp;gt; Read 65536 bytes of data
&amp;gt; Read 65536 bytes of data
&amp;gt; Read 65536 bytes of data
&amp;gt; ...
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In contrast, &lt;code&gt;gulp.src()&lt;/code&gt; emits &lt;code&gt;buffered&lt;/code&gt; vinyl file objects back to the
stream. This means you won&apos;t get chunks, but (virtual) files with buffered
contents. The vinyl file format has a &lt;code&gt;contents&lt;/code&gt; property representing a buffer
or a stream, and gulp is using buffers by default:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;gulp.src(&apos;/usr/share/dict/words&apos;).on(&apos;data&apos;, function(file) {
    console.log(&apos;Read %d bytes of data&apos;, file.contents.length);
});

&amp;gt; Read 2493109 bytes of data
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This clearly shows the data is buffered before the file gets emitted to the
stream as a whole.&lt;/p&gt;
&lt;h2&gt;Gulp uses buffers by default&lt;/h2&gt;
&lt;p&gt;Although in general it&apos;s recommended to stream the data, many plugins have
underlying libraries that work with buffers. Sometimes this is simply necessary
for transformations that require the source contents as a whole. Consider for
instance text-based replacements with regular expressions. You would run the
risk of matching patterns being in separate chunks, failing to find those
matches. Likewise, tools like &lt;a href=&quot;https://lisperator.net/uglifyjs/&quot;&gt;UglifyJS&lt;/a&gt; and the &lt;a href=&quot;https://github.com/google/traceur-compiler&quot;&gt;Traceur compiler&lt;/a&gt; need
complete files as their input (or at least syntactically complete strings of
JavaScript).&lt;/p&gt;
&lt;p&gt;This is why gulp uses buffered streams by default: they&apos;re simply easier
to work with.&lt;/p&gt;
&lt;p&gt;The downside of buffered contents is that they are inefficient for large
files. The file is read completely before it is emitted back to the stream. The
question is: for which file sizes does this really hurt performance? For regular
text files such as JavaScript, CSS, templates, etcetera, there&apos;s likely only
minimal overhead in using buffers.&lt;/p&gt;
&lt;p&gt;In any case, you can tell gulp to pass on a stream for &lt;code&gt;contents&lt;/code&gt; if you set the
&lt;code&gt;buffer&lt;/code&gt; option to &lt;code&gt;false&lt;/code&gt;. Here&apos;s a contrived example:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;gulp.src(&apos;/usr/share/dict/words&apos;, {buffer: false}).on(&apos;data&apos;, function(file) {
    var stream = file.contents;
    stream.on(&apos;data&apos;, function(chunk) {
        console.log(&apos;Read %d bytes of data&apos;, chunk.length);
    });
});

&amp;gt; Read 65536 bytes of data
&amp;gt; Read 65536 bytes of data
&amp;gt; Read 65536 bytes of data
&amp;gt; Read 65536 bytes of data
&amp;gt; ...
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;From streams to buffers&lt;/h2&gt;
&lt;p&gt;Depending on the desired input (and output) stream, and depending on the gulp
plugin, you may need to switch from streams to buffers (or vice versa). As said,
most plugins work with buffers (although some of them also support streams).
Examples include &lt;a href=&quot;https://www.npmjs.org/package/gulp-uglify&quot;&gt;gulp-uglify&lt;/a&gt; and &lt;a href=&quot;https://www.npmjs.org/package/gulp-traceur&quot;&gt;gulp-traceur&lt;/a&gt;. You can do the
conversion to buffers using &lt;a href=&quot;https://www.npmjs.org/package/gulp-buffer&quot;&gt;gulp-buffer&lt;/a&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;var source = require(&apos;vinyl-source-stream&apos;);
var buffer = require(&apos;gulp-buffer&apos;);
var uglify = require(&apos;gulp-uglify&apos;);

fs.createReadStream(&apos;./src/app.js&apos;)
  .pipe(source(&apos;app.min.js&apos;))
  .pipe(buffer())
  .pipe(uglify())
  .pipe(gulp.dest(&apos;dist/&apos;));
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Or, another contrived example:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;var buffer = require(&apos;gulp-buffer&apos;);
var traceur = require(&apos;gulp-traceur&apos;);

gulp
  .src(&apos;app.js&apos;, { buffer: false })
  .pipe(buffer())
  .pipe(traceur())
  .pipe(gulp.dest(&apos;dist/&apos;));
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;From buffers to streams&lt;/h2&gt;
&lt;p&gt;You can also &amp;quot;streamify&amp;quot; the output of a plugin working with buffers (back) to a
read stream by using &lt;a href=&quot;https://www.npmjs.org/package/gulp-streamify&quot;&gt;gulp-streamify&lt;/a&gt; or &lt;a href=&quot;https://www.npmjs.org/package/gulp-stream&quot;&gt;gulp-stream&lt;/a&gt;. Then plugins
that work (only) with streams can be used before and after the buffer-based
plugin:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;var wrap = require(&apos;gulp-wrap&apos;);
var streamify = require(&apos;gulp-streamify&apos;);
var uglify = require(&apos;gulp-uglify&apos;);
var gzip = require(&apos;gulp-gzip&apos;);

gulp
  .src(&apos;app.js&apos;, { buffer: false })
  .pipe(wrap(&apos;(function(){&amp;lt;%= contents %&amp;gt;}());&apos;))
  .pipe(streamify(uglify()))
  .pipe(gulp.dest(&apos;build&apos;))
  .pipe(gzip())
  .pipe(gulp.dest(&apos;build&apos;));
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;You don&apos;t need a plugin for everything&lt;/h2&gt;
&lt;p&gt;Although there are many plugins out there that are very useful and convenient,
some tasks and transformations can easily be done without Yet Another Plugin™.
Plugins do cause some overhead: they make you dependent on an extra npm
module, a plugin interface, a (possibly unresponsive) maintainer, etc. If it&apos;s
very easy to do the task at hand without a plugin, or to use the original module
directly, then in most cases I would recommend doing so. It&apos;s important to understand the
concepts I&apos;ve described above to make the right decision in your situation.
Let&apos;s take a look at some examples.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;vinyl-source-stream&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Above we&apos;ve already seen an example of using Browserify directly
instead of the (blacklisted) &lt;a href=&quot;https://www.npmjs.org/package/gulp-browserify&quot;&gt;gulp-browserify&lt;/a&gt; plugin. The key here is to
use vinyl-source-stream (or similar) to allow for regular read streams as input
to Vinyl plugins.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Textual transformations&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Another example is string-based transformations. Here is a very basic plugin to
use directly with vinyl buffers:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;function modify(modifier) {
  return through2.obj(function (file, encoding, done) {
    var content = modifier(String(file.contents));
    file.contents = new Buffer(content);
    this.push(file);
    done();
  });
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You could use this plugin like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;gulp.task(&apos;modify&apos;, function () {
  return gulp
    .src(&apos;app.js&apos;)
    .pipe(modify(version))
    .pipe(modify(swapStuff))
    .pipe(gulp.dest(&apos;build&apos;));
});

function version(data) {
  return data.replace(/__VERSION__/, pkg.version);
}

function swapStuff(data) {
  return data.replace(/(\w+)\s(\w+)/, &apos;$2, $1&apos;);
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The plugin is unfinished and doesn&apos;t even deal with streams. However, it
shows how easy it can be to create new transformations using some basic
functions. The &lt;a href=&quot;https://www.npmjs.org/package/through2&quot;&gt;through2&lt;/a&gt; library is a great wrapper around Node streams and
enables transform functions as shown above.&lt;/p&gt;
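&lt;p&gt;As a side note, a similar transform can be written with Node&apos;s built-in
&lt;code&gt;stream.Transform&lt;/code&gt; in object mode, without the through2 dependency.
A minimal sketch of my own (equally unfinished):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;var Transform = require(&apos;stream&apos;).Transform;

function modify(modifier) {
  var stream = new Transform({ objectMode: true });
  stream._transform = function (file, encoding, done) {
    // Apply the modifier to the buffered contents and pass the file on.
    file.contents = new Buffer(modifier(String(file.contents)));
    done(null, file);
  };
  return stream;
}
&lt;/code&gt;&lt;/pre&gt;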
&lt;h2&gt;Task orchestration&lt;/h2&gt;
&lt;p&gt;In case you need some custom or dynamic tasks to run, it&apos;s useful to know that
gulp uses the &lt;a href=&quot;https://www.npmjs.org/package/orchestrator&quot;&gt;Orchestrator&lt;/a&gt; module. The gulp.add method is
Orchestrator.add (in fact, all methods are inherited from the Orchestrator
module). But why would you need this?&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;You don&apos;t want to clutter the list of gulp tasks with &amp;quot;private&amp;quot; tasks (i.e.
not exposing them to the CLI tool).&lt;/li&gt;
&lt;li&gt;You need more dynamic and/or reusable sub-tasks.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Closing thoughts&lt;/h2&gt;
&lt;p&gt;Please note that gulp (or Grunt) itself is not always the best tool for
the job. If, for instance, you just need to concatenate and uglify a couple of
JavaScript files, or compile some Sass files, you may want to consider using
Makefiles or npm run and get &lt;em&gt;a lot&lt;/em&gt; done from the command line.
Fewer dependencies and less configuration can be truly liberating.&lt;/p&gt;
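&lt;p&gt;As a (hypothetical) sketch, a couple of npm scripts may already cover a
simple build, with the module names and flags here purely illustrative:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
  &quot;scripts&quot;: {
    &quot;build:js&quot;: &quot;browserify src/app.js | uglifyjs -m -c &amp;gt; dist/bundle.min.js&quot;,
    &quot;build:css&quot;: &quot;node-sass src/main.scss dist/main.css&quot;
  }
}
&lt;/code&gt;&lt;/pre&gt;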
&lt;p&gt;Read up on &lt;a href=&quot;https://web.archive.org/web/20220531064025/https://github.com/substack/blog/blob/master/npm_run.markdown&quot;&gt;Task automation with npm run&lt;/a&gt; to learn more. Just make sure you
define clearly what you need on a scale of &amp;quot;build customization&amp;quot;, and what would
be the best tool(s) for the job.&lt;/p&gt;
&lt;p&gt;However, I think gulp is a great build system that I love to use, and it
really introduced me to the power of streams in Node.js.&lt;/p&gt;
&lt;p&gt;Hope this helps! If you have any feedback or additional tips, please let me know
in the comments or &lt;a href=&quot;https://bsky.app/profile/webpro.nl&quot;&gt;on Bluesky&lt;/a&gt;.&lt;/p&gt;
</description><pubDate>Mon, 05 May 2014 00:00:00 GMT</pubDate></item><item><title>The $ object demystified</title><link>https://webpro.nl/articles/the-dollar-sign-object-demystified</link><guid isPermaLink="true">https://webpro.nl/articles/the-dollar-sign-object-demystified</guid><description>&lt;h1&gt;The $ object demystified&lt;/h1&gt;
&lt;h2&gt;Wrap Like An Egyptian&lt;/h2&gt;
&lt;p&gt;Let&apos;s take a quick look at querySelector-based libraries such as jQuery and
Zepto. You&apos;re probably familiar with their syntax:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;var $items = $(&apos;.items&apos;);
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Once you&apos;ve queried some elements, there&apos;s a lot you can do with those elements,
such as adding classes (e.g. &lt;code&gt;$el.addClass(&apos;active&apos;)&lt;/code&gt;), insert other elements,
add event listeners, and so on.&lt;/p&gt;
&lt;h2&gt;Elements vs. API&lt;/h2&gt;
&lt;p&gt;The elements returned from the call to &lt;code&gt;$(selector)&lt;/code&gt; represent an
array of matching DOM elements, while the API methods that come with them are
properties of an object. To combine them, it might seem ideal if any array of
elements would have its &lt;code&gt;prototype&lt;/code&gt; set to the API object. The API prototype
object could then be shared across each wrapped object, which would be very
efficient. However, we can&apos;t just set the &lt;code&gt;prototype&lt;/code&gt; of an array (and it&apos;s not
a good idea to extend that prototype directly with a bunch of mostly unrelated
methods). So how could this wrapping of things be implemented?&lt;/p&gt;
&lt;h2&gt;Implementation options&lt;/h2&gt;
&lt;p&gt;This leaves us with a couple of less optimal options. For example:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Use the array and assign all members of the API as properties to the array.&lt;/li&gt;
&lt;li&gt;Use the array and set its &lt;code&gt;__proto__&lt;/code&gt; member to the API object.&lt;/li&gt;
&lt;li&gt;Use &lt;code&gt;Object.create()&lt;/code&gt;, and assign all DOM elements as indexed members to the
object.&lt;/li&gt;
&lt;li&gt;Use a constructor and use the API object as its &lt;code&gt;prototype&lt;/code&gt;. Assign all DOM
elements as indexed members to the object.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Here&apos;s a basic, untested implementation of each:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Array with iteration over API methods&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;function $(selector) {
  var collection = document.querySelectorAll(selector),
    wrapped = [].slice.call(collection);
  for (var method in MyAPI) {
    wrapped[method] = MyAPI[method];
  }
  return wrapped;
}
var $myCollection = $(&apos;.items&apos;);
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;Array with &lt;code&gt;__proto__&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;function $(selector) {
  var collection = document.querySelectorAll(selector),
    wrapped = [].slice.call(collection);
  wrapped.__proto__ = MyAPI;
  return wrapped;
}
var $myCollection = $(&apos;.items&apos;);
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;3&quot;&gt;
&lt;li&gt;&lt;code&gt;Object.create&lt;/code&gt; with iteration over elements&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;function $(selector) {
  var collection = document.querySelectorAll(selector),
    wrapped = Object.create(MyAPI);
  for (var i = 0, l = collection.length; i &amp;lt; l; i++) {
    wrapped[i] = collection[i];
  }
  return wrapped;
}
var $myCollection = $(&apos;.items&apos;);
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;4&quot;&gt;
&lt;li&gt;Constructor with iteration over elements&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;function $(selector) {
  var collection = document.querySelectorAll(selector);
  for (var i = 0, l = collection.length; i &amp;lt; l; i++) {
    this[i] = collection[i];
  }
}
$.prototype = MyAPI;
var $myCollection = new $(&apos;.items&apos;);
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Each of the options requires an iteration over either the elements or the API
members. That&apos;s exactly why they&apos;re less optimal options. Depending on the
number of elements or API members, this might end up expensive. And that&apos;s
without mentioning that it&apos;s generally considered bad practice to augment an
object with array members, or vice versa.&lt;/p&gt;
&lt;h2&gt;jQuery and Zepto&lt;/h2&gt;
&lt;p&gt;How are the big guys doing it? Basically, jQuery follows strategy #4, while
Zepto uses the &lt;code&gt;__proto__&lt;/code&gt; approach (#2).&lt;/p&gt;
&lt;h2&gt;&lt;code&gt;Object.__proto__&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;Let&apos;s consider the &lt;code&gt;__proto__&lt;/code&gt; strategy for a moment. Since an array is also an
object in JavaScript, it makes sense to use &lt;code&gt;Object.prototype.__proto__&lt;/code&gt; (or
ES6&apos;s upcoming &lt;code&gt;Object.setPrototypeOf&lt;/code&gt;). And it actually works in most browsers,
&lt;em&gt;except for Internet Explorer 10 and below&lt;/em&gt;. Another downside is that it isn&apos;t
fast, especially when combined with the obligatory Array conversion
(&lt;code&gt;Array.slice&lt;/code&gt; or iteration). In more real-world scenarios, array-like
collections such as &lt;code&gt;NodeList&lt;/code&gt; and &lt;code&gt;ElementList&lt;/code&gt; should be converted to static
collections, as live &lt;code&gt;NodeList&lt;/code&gt;s might lead to unexpected behavior. So
you&apos;d still need the iteration.&lt;/p&gt;
&lt;h2&gt;Performance&lt;/h2&gt;
&lt;p&gt;During a bit of isolated benchmarking, this gives interesting and wildly varying
results across browsers and numbers of elements. Actually setting the &lt;code&gt;__proto__&lt;/code&gt;
itself makes this strategy perform slightly worse than the others.&lt;/p&gt;
&lt;h2&gt;Wrapping up&lt;/h2&gt;
&lt;p&gt;In most situations I would go with the constructor approach, iterating over
the array of elements (#4). This is a safe option with regard to browser
support, works everywhere today and tomorrow, and in my benchmarks it performed
very well across browsers. jQuery essentially does the same thing, and it&apos;s
also what I ended up doing myself in DOMtastic.&lt;/p&gt;
&lt;p&gt;Feel free to check out the &lt;a href=&quot;https://github.com/webpro/DOMtastic&quot;&gt;DOMtastic&lt;/a&gt; project if you&apos;d like to see the
code, run the benchmarks, and see their results.&lt;/p&gt;
&lt;h2&gt;Related resources&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/proto&quot;&gt;https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/proto&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/jquery/jquery/blob/master/src/core/init.js&quot;&gt;https://github.com/jquery/jquery/blob/master/src/core/init.js&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/webpro/DOMtastic/blob/master/src/selector/index.js&quot;&gt;https://github.com/webpro/DOMtastic/blob/master/src/selector/index.js&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/madrobby/zepto/blob/master/src/zepto.js&quot;&gt;https://github.com/madrobby/zepto/blob/master/src/zepto.js&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/madrobby/zepto/issues/272&quot;&gt;https://github.com/madrobby/zepto/issues/272&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
</description><pubDate>Fri, 24 Jan 2014 00:00:00 GMT</pubDate></item><item><title>Bubbling events in detached DOM trees</title><link>https://webpro.nl/articles/bubbling-events-in-detached-dom-trees</link><guid isPermaLink="true">https://webpro.nl/articles/bubbling-events-in-detached-dom-trees</guid><description>&lt;h1&gt;Bubbling events in detached DOM trees&lt;/h1&gt;
&lt;p&gt;Here&apos;s a quick post on the topic. Sometimes we need events to still work in a
detached DOM tree. Even though the end-user can&apos;t really interact with detached
trees, DOM elements in that tree can still listen to other events and react to
them. This might also be efficient performance-wise, since changes in detached
trees don&apos;t trigger repaints.&lt;/p&gt;
&lt;h2&gt;Detached DOM trees&lt;/h2&gt;
&lt;p&gt;We&apos;ll talk about how to support events in detached DOM trees, and how to do this
in a performant way. First, a detached DOM tree is an HTML fragment that&apos;s not
in the current document (e.g. you won&apos;t find it in the Element Inspector of your
debugger), but it&apos;s still referenced in memory. Use cases where they come in
useful include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Fragments that were just rendered with a template engine, and ready to be
inserted to the DOM.&lt;/li&gt;
&lt;li&gt;Fragments that are attached and detached to minimize repaints while their DOM
structure is modified.&lt;/li&gt;
&lt;li&gt;Fragments that act as fixtures or sandboxes during tests.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In any of these situations, it can be very helpful if events can bubble up,
even though the tree isn&apos;t attached to the document yet.&lt;/p&gt;
&lt;p&gt;What most libraries do is either not support this at all, or not in the most
optimal way. For example, Zepto does not support it, and jQuery does something
similar to what I usually see:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;while (element.parentNode) {
  element.dispatchEvent(event);
  element = element.parentNode;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This might work for either attached or detached DOM trees: just dispatch the
event on each ancestor of the targeted element (often by calling the &amp;quot;trigger&amp;quot;
method).&lt;/p&gt;
&lt;p&gt;However, wouldn&apos;t it be better if we let the browser do all the work, and just
let the event bubble up the tree (while dispatching only a single event without
traversing the tree)?&lt;/p&gt;
&lt;h2&gt;Detect support&lt;/h2&gt;
&lt;p&gt;Here&apos;s a way to detect if a browser supports bubbling events in detached DOM
trees:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;var isEventBubblingInDetachedTree = (function (global) {
  var isBubbling = false;
  var doc = global.document;
  if (doc) {
    var parent = doc.createElement(&apos;div&apos;),
      child = parent.cloneNode();
    parent.appendChild(child);
    parent.addEventListener(&apos;e&apos;, function () {
      isBubbling = true;
    });
    child.dispatchEvent(new CustomEvent(&apos;e&apos;, { bubbles: true }));
  }
  return isBubbling;
})(this);
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In a browser that supports this, dispatching the event is enough to have the
event bubble up. Currently, at least in IE10, IE11 and Firefox you can take
advantage of this.&lt;/p&gt;
&lt;p&gt;In other browsers, you still need to dispatch the event on each element in the
ancestor chain. Here&apos;s a snippet to demonstrate what this might look like:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;if (
  !params.bubbles ||
  isEventBubblingInDetachedTree ||
  isAttachedToDocument(element)
) {
  element.dispatchEvent(event);
} else {
  triggerForPath(element, type, params);
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I think this code is quite self-explanatory. See DOMtastic&apos;s &lt;a href=&quot;https://github.com/webpro/DOMtastic/blob/master/src/event/trigger.js&quot;&gt;event
implementation&lt;/a&gt; for an extended example.&lt;/p&gt;
&lt;h2&gt;Wrapping up&lt;/h2&gt;
&lt;p&gt;Bubbling events might not be your biggest (performance) issue, but I think it&apos;s
good to know how to deal with them anyway, including in situations where you&apos;re
not using jQuery to handle this for you.&lt;/p&gt;
</description><pubDate>Tue, 21 Jan 2014 00:00:00 GMT</pubDate></item><item><title>My takeaways from “Clean Code”</title><link>https://webpro.nl/articles/my-takeaways-from-clean-code</link><guid isPermaLink="true">https://webpro.nl/articles/my-takeaways-from-clean-code</guid><description>&lt;h1&gt;My takeaways from &amp;quot;Clean Code&amp;quot;&lt;/h1&gt;
&lt;blockquote&gt;
&lt;p&gt;To write clean code, you must first write dirty code and then clean it.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;With pleasure I have been reading &lt;a href=&quot;http://www.amazon.com/Clean-Code-Handbook-Software-Craftsmanship/dp/0132350882&quot;&gt;Clean Code&lt;/a&gt; by &lt;a href=&quot;http://en.wikipedia.org/wiki/Robert_Cecil_Martin&quot;&gt;Robert C. Martin&lt;/a&gt;. The
book is a nice read, with short chapters. However, just reading the book has no
value. You will need to recognize the &lt;em&gt;&amp;quot;smells and heuristics&amp;quot;&lt;/em&gt; in your
day-to-day work and act on them. This requires labor and dedication, which will
gradually enhance your level of experience. The power of this book, at least to
me, lies in defining and describing many heuristics, making them easier to
recognize.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;./clean-code.svg&quot; alt=&quot;clean-code&quot;&gt;&lt;/p&gt;
&lt;p&gt;The book is full of takeaways, and below is a small selection from the book that
drew my attention most.&lt;/p&gt;
&lt;h2&gt;Flag arguments are ugly&lt;/h2&gt;
&lt;p&gt;Perhaps the only exception is for specific setters that directly set the value
of an object property (flag?) itself. But I have to agree that flags implicitly
mean that the method is probably doing too much (e.g. there is no Command Query
Separation).&lt;/p&gt;
&lt;h2&gt;Minimize the number of arguments&lt;/h2&gt;
&lt;p&gt;I had seen the term of &lt;em&gt;dyadic functions&lt;/em&gt; before, but the term &amp;quot;dyadic&amp;quot; is
hardly used in programming conversations. I also think that when you do use the
term, you still have to explain what it means! Let alone &amp;quot;dyads&amp;quot; and &amp;quot;triads&amp;quot;...&lt;/p&gt;
&lt;p&gt;Anyway, it is always good advice to minimize the number of
arguments. Zero or one argument is easiest to understand and maintain.&lt;/p&gt;
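&lt;p&gt;The book illustrates this with its Circle example: a triad becomes a dyad by
grouping related arguments into an object. Sketched in JavaScript:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;// Triad: three loose arguments.
function makeCircle(x, y, radius) {
  return { x: x, y: y, radius: radius };
}

// Dyad: x and y grouped into a center point.
function makeCircleAt(center, radius) {
  return { x: center.x, y: center.y, radius: radius };
}

makeCircleAt({ x: 10, y: 20 }, 5);
&lt;/code&gt;&lt;/pre&gt;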
&lt;h2&gt;Have no side effects&lt;/h2&gt;
&lt;p&gt;Very valuable advice. Simple to understand, and shouldn&apos;t be too hard to
implement. Side effects may sneak in when a method is first written, but during
refactoring such &amp;quot;lies&amp;quot; should be taken care of and removed.&lt;/p&gt;
&lt;h2&gt;Avoid output arguments&lt;/h2&gt;
&lt;p&gt;Output arguments are objects that the method operates on, and then returns. An
example:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;extendWall(h);
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Does this function extend &amp;quot;h&amp;quot; with a wall? Or is the wall extended with &amp;quot;h&amp;quot;? And
what would it return? It&apos;s more clear to use &amp;quot;this&amp;quot; as the output argument:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;h.extendWall();
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Command Query Separation&lt;/h2&gt;
&lt;p&gt;Functions should either do something or answer something. That&apos;s practical and
clear advice.&lt;/p&gt;
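&lt;p&gt;The book&apos;s example is a &lt;code&gt;set()&lt;/code&gt; method that both changes an attribute
and returns whether it succeeded, inviting confusing calls like
&lt;code&gt;if (set(&apos;username&apos;, &apos;unclebob&apos;))&lt;/code&gt;. A JavaScript sketch (my own) of the
separated version:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;// Command: do something (no return value).
function setAttribute(store, name, value) {
  store[name] = value;
}

// Query: answer something (no state change).
function attributeExists(store, name) {
  return name in store;
}

var store = {};
if (!attributeExists(store, &apos;username&apos;)) {
  setAttribute(store, &apos;username&apos;, &apos;unclebob&apos;);
}
&lt;/code&gt;&lt;/pre&gt;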
&lt;h2&gt;Comments are fails&lt;/h2&gt;
&lt;blockquote&gt;
&lt;p&gt;Truth can only be found in one place: the code.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The author has a clear opinion on comments. He considers every single comment
written as a failure, because the code apparently isn&apos;t expressive enough. One
reason, which is hard to deny, is that comments are often badly maintained.
Investing time in proper and descriptive naming in the code is a rewarding
practice. Still, I think it&apos;s fine to explain the &amp;quot;why&amp;quot; of code where code alone
simply is not expressive enough to easily understand what&apos;s going on. But the
takeaway here to me is that &amp;quot;the only truth is in the code&amp;quot;.&lt;/p&gt;
&lt;h2&gt;The purpose of formatting&lt;/h2&gt;
&lt;blockquote&gt;
&lt;p&gt;Your style and discipline survives, even though your code does not.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;I&apos;m a big advocate of clear coding standards, but I didn&apos;t take it as far as
this. Ultimately, though, I think this is true. Maintainability and extensibility
are always top priority, more so than some implementation details. Still,
conventions alone will take you nowhere.&lt;/p&gt;
&lt;h2&gt;Don&apos;t pass null&lt;/h2&gt;
&lt;p&gt;Simply do not return or pass &lt;code&gt;null&lt;/code&gt;. It&apos;s better to use &amp;quot;empty&amp;quot; versions of the
expected type, e.g. an empty array, string or object. This way, the receiving
code doesn&apos;t have to check the type. Unless you are writing some public, robust
API, minimizing such usage of &lt;code&gt;null&lt;/code&gt; values internally saves a lot of exception
handling.&lt;/p&gt;
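&lt;p&gt;A quick JavaScript sketch (my example, not from the book) of returning an
&amp;quot;empty&amp;quot; version of the expected type:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;function getActiveUsers(users) {
  // Return an empty array instead of null, so callers can iterate right away.
  if (!users) {
    return [];
  }
  return users.filter(function (user) {
    return user.active;
  });
}

// No null check needed at the call site:
getActiveUsers(null).forEach(function (user) {
  console.log(user.name);
});
&lt;/code&gt;&lt;/pre&gt;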
&lt;h2&gt;Learning tests are better than free&lt;/h2&gt;
&lt;p&gt;Writing (unit) tests is an absolutely smart way to learn and exercise a (new)
interface. It gives you a feel for something you need to learn anyway. Tests,
from simple ones to those exercising production code, can serve as documentation
along the way.&lt;/p&gt;
&lt;h2&gt;Tests enable the -ilities&lt;/h2&gt;
&lt;blockquote&gt;
&lt;p&gt;It&apos;s the tests that keep our code flexible, maintainable, and reusable [...]
Because tests enable change.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Think about that for a while, and probably you will appreciate tests even more.&lt;/p&gt;
&lt;h2&gt;Getting clean via emergent design&lt;/h2&gt;
&lt;p&gt;Any design is considered &amp;quot;simple&amp;quot; if it:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Runs all the tests&lt;/li&gt;
&lt;li&gt;Contains no duplication&lt;/li&gt;
&lt;li&gt;Expresses the intent of the programmer&lt;/li&gt;
&lt;li&gt;Minimizes the number of classes and methods&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Making a system testable motivates (or forces) to implement established
programming principles, which leads to better designs. Then, the rest follows
with incremental refactorings which can be done because of the tests. The
takeaway for me here is that tests both motivate and catalyse refactoring to
better designs.&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;There are many more principles, patterns, and practices in the book. This list
summarizes what stood out for me most. I think any serious programmer will pick
up something useful from reading this book. Highly recommended!&lt;/p&gt;
</description><pubDate>Tue, 28 May 2013 00:00:00 GMT</pubDate><category>clean</category><category>code</category><category>heuristics</category><category>principles</category><category>patterns</category></item></channel></rss>