<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0">
<channel>
<title>Backdrifting</title>
<link>https://backdrifting.net</link>
<description>Backdrifting: An intersection of social system design, cybernetics, and hacking</description>
<item>
<title>Open Academic Publication</title>
<link>https://backdrifting.net/post/072_open_publication</link>
<description><![CDATA[<h2 id="open-academic-publication">Open Academic Publication</h2>

<p><strong>Posted 10/28/2023</strong></p>

<p>I’m currently at a workshop on open practices across disciplines, and one topic of discussion is how to change the academic publishing process to be more accessible to both authors and readers. I’ve also had a few friends outside of academia ask me how publishing research papers works, so it’s a good opportunity to write a post about the messy world of academic publishing.</p>

<h3 id="the-traditional-publication-model">The Traditional Publication Model</h3>

<p>Academics conduct research, write an article about their findings, and submit their article to an appropriate journal for their subject. There it undergoes review by a committee of peer researchers qualified to assess the quality of the work, and upon acceptance, the article is included in the next issue of the journal. In a simple scenario, the process is illustrated by this flowchart:</p>

<object data="/postImages/publication_flowchart.svg" alt="Publication flowchart" type="image/svg+xml"></object>

<p>Libraries and research labs typically pay journals a subscription fee to receive new issues. This fee traditionally covered publication expenses, including typesetting (tedious for papers with lots of equations, plots, and diagrams), printing, and mail distribution, along with the salaries of journal staff like editors, who are responsible for soliciting volunteer peer-reviews from other academics. These subscription fees were long considered a necessary evil: they limit access to published research for under-funded academics, such as scientists at universities in developing countries, to say nothing of the general public, but printing and distributing all those journal issues does carry significant financial overhead.</p>

<p>In recent decades, all significant journals have switched to majority or exclusively digital distribution. Academics do most of the typesetting themselves with LaTeX or Microsoft Word templates provided by the journals, and with no printing costs and negligible distribution costs for hosting a PDF online, fees now go largely to the profit margins of publishers. This has made academic publishing <a href="https://wordsrated.com/academic-publishing-statistics/">ludicrously profitable, with margins as high as 40% in a multi-billion dollar industry</a>.</p>

<h3 id="the-shift-to-open-publishing">The Shift to Open Publishing</h3>

<p>Academics complain bitterly that journal publishers are parasitic, charging exorbitant publication fees while providing almost no service. After all, research is conducted by academics and submitted to the publishers for free. Other academics review the research, also for free, as peer-review is considered expected community service within academia. Since academics are typically funded by government agencies (such as the National Science Foundation, Department of Energy, and Department of Defense in the United States), this is taxpayer-funded public research, whose distribution is being limited by publishers rather than facilitated by them.</p>

<p>As journal subscription costs grew, these complaints eventually evolved into threats by universities to cancel their journal subscriptions, and funding agencies like <a href="https://www.nsf.gov/pubs/2018/nsf18041/nsf18041.jsp#q1">the NSF began to demand that work they fund be made publicly accessible</a>. The publisher profit margins were endangered, and they needed to act quickly to suppress dissent.</p>

<p>Many publishers now offer or require an alternative publishing scheme: Open Access. Under Open Access, articles can be read for free, but academics must pay to have their work published in order to cover staff salaries and the burdensome cost of web-hosting PDFs. This not only protects the revenue stream of publishers, but can expand it dramatically when journals like <em>Nature Neuroscience</em> <a href="https://www.nature.com/articles/d41586-023-01391-5">charge $11,690 per article</a>.</p>

<p><img src="/postImages/open_access_logo.png" alt="Open Access Logo" style="width: 100px;" /></p>

<p>While Open Access allows academics with fewer resources to read scholarly work from their peers, and allows the public to read academic papers, it also inhibits academics with less funding from publishing if they can’t afford the publication fees. Further, it provides an incentive for publishers to accept as many papers for publication as possible to maximize publication fees, even if these papers are of lower quality or do not pass rigorous peer-review. When journals are paid under a subscription model they make the same income whether a new issue has ten or a hundred articles in it, and so it is more profitable to be selective in order to maximize the ‘prestige’ of the journal and increase subscriptions.</p>

<h3 id="what-can-be-done">What Can Be Done?</h3>

<p>Academic research remains constrained by publishers, who either charge a fortune before publication, or after, while providing minimal utility to academia. These costs disproportionately impact researchers with less funding, often those outside North America and Europe. The most obvious solution to this problem might be “replace the journals with lower-cost alternatives,” but this is easier said than done. Even if we could find staff to organize and run a series of lower-cost journals, there’s a lot of political momentum behind the established publishers. Academics obtain grant funding, job offers, and tenure through publishing. Successful publishing means publishing many papers in prestigious journals and getting many citations on those papers. A new unproven journal won’t replace a big name like <em>Nature</em> or <em>Science</em> any time soon in the eyes of funding agencies and tenure committees, and will take time to gather a loyal readership before papers in it receive many reads or citations. While I hope for eventual reform of journals and academic institutional practices at large, a more immediate solution is needed.</p>

<h4 id="collective-bargaining">Collective Bargaining</h4>

<p>One option is to simply pressure existing journals into dropping fees. If enough universities threaten to cut their subscriptions to major journals, then publishers will have no choice but to lower subscription costs or Open Access publication fees and accept a lower profit margin. This strategy has seen some limited success - some universities are cutting their contracts with major publishers, perhaps most notably when <a href="https://www.insidehighered.com/news/2019/03/01/university-california-cancels-deal-elsevier-after-months-negotiations">the University of California system ended their subscription to all Elsevier journals</a> in 2019. However, this strategy can only work if researchers have leverage. Elsevier is the worst offender, so universities can cut ties with it and push their researchers to publish in competitor journals from Springer or SAGE, but the costs at those competitor publishers remain high.</p>

<h4 id="preprints">Preprints</h4>

<p>Physicists popularized the idea of a “preprint.” Originally this consisted of astrophysicists emailing rough drafts of their papers to one another. This had less to do with publication fees and more to do with quickly sharing breakthroughs without the delays that peer-review and publication incur. Over time, the practice shifted from mailing lists to centralized repositories, and grew to encompass physics, mathematics, computer science, quantitative biology, quantitative finance, statistics, electrical engineering and systems science, and economics. That preprint service <a href="https://arxiv.org/">is called arXiv</a>. This effort has been replicated in other fields, including <a href="https://www.biorxiv.org/">bioRxiv</a>, <a href="https://chemrxiv.org/">ChemRxiv</a>, <a href="https://www.medrxiv.org/">medRxiv</a>, and <a href="https://osf.io/preprints/socarxiv">SocArXiv</a>, although preprint usage is not common in all fields.</p>

<p><img src="/postImages/arxiv_logo.png" alt="ArXiv Logo" style="width: 200px;" /></p>

<p>Papers submitted to preprint servers have not undergone peer-review, and often have little to no quality control - the moderators at arXiv will give a paper a quick glance to remove obvious spam submissions, but they have neither the resources nor the responsibility to confirm that research they host is of high quality or was conducted ethically. Preprint papers were always intended to be rough drafts before publication in real journals, not a substitute for publication. Nevertheless, it is common practice for scholars to bypass journal paywalls by looking for a preprint of the same research before it underwent peer-review, so in practice preprint servers already serve as an alternative to journal subscriptions.</p>

<h4 id="shadow-libraries">Shadow Libraries</h4>

<p>The direct action counter to journal subscription fees is to simply pirate the articles. <a href="https://sci-hub.se/about">Sci-Hub</a> and <a href="https://libgen.rs/">Library Genesis</a> (URLs subject to frequent change) acquire research papers and books, respectively, and host them as PDFs for free, ignoring copyright. Both shadow libraries have been sued for copyright infringement in several jurisdictions, but have rotated operations between countries and have so far avoided law enforcement.</p>

<p><img src="/postImages/Scihub_raven.png" alt="Sci-Hub Raven Logo" style="width: 86px; background: rgb(255,255,255);" /></p>

<p>Use of Sci-Hub is ubiquitous in STEM academia, and is often the only way that researchers can access articles if they have limited funding or operate out of sanctioned locations, such as Russia during the Russia-Ukraine war. Sci-Hub’s founder, Alexandra Elbakyan, considers the site’s operations to be a moral imperative under the Universal Declaration of Human Rights, which guarantees all human beings the right to freely share in scientific advancements and their benefits. Whether or not you agree with Elbakyan’s stance, it seems clear that the combination of shadow libraries and preprint services has undermined the business models of traditional academic publishers and made them more amenable to alternatives like Open Access, and more susceptible to threats by universities to end subscriptions.</p>

<h3 id="what-comes-next">What Comes Next?</h3>

<p>Academic publishing is approaching a crisis point. Research funding in most disciplines is scarce, and journal subscription or publication fees are steadily increasing. The number of graduate and postgraduate researchers is growing, guaranteeing an accelerating rate of papers that strains publication budgets and the peer-review system even further. Academics have tolerated the current system by using preprints and shadow libraries to share work without paying journals, but these are stopgaps with a range of shortcomings. If academic research is to flourish then we will see a change that lowers publication costs and perhaps relieves strain on peer reviewers, but what that change will look like or how soon it will come remains open to debate.</p>
]]></description>
</item>
<item>
<title>When is a network "decentralized enough?"</title>
<link>https://backdrifting.net/post/071_social_decentralization</link>
<description><![CDATA[<h2 id="when-is-a-network-decentralized-enough">When is a network “decentralized enough?”</h2>

<p><strong>Posted 08/08/2023</strong></p>

<p><em>I’ve submitted a new paper! <a href="https://arxiv.org/abs/2307.15027">Here’s the not-peer-reviewed pre-print.</a> This post will discuss my work for non-network-scientist audiences.</em></p>

<p>There is broad disillusionment regarding the influence major tech companies have over our online interactions. Social media is largely governed by Meta (Facebook, Instagram, Whatsapp), Google (YouTube), and Twitter. In specific sub-communities, like open source software development, a single company like GitHub (owned by Microsoft) may have near-monopolistic control over online human collaboration. These companies define the technology we use to communicate - and thereby the actions we can take to interact with one another - as well as the administrative policies regarding what actions and content are permissible on each platform.</p>

<p>In addition to debates over civic responsibility and regulation of online platforms, pushback to the centralized influence of these companies has taken two practical forms:</p>

<ol>
  <li>
    <p>Alt-Tech. Communities that are excluded from mainstream platforms, often right-wing hate and conspiracy groups, have built <a href="https://knightcolumbia.org/blog/deplatforming-our-way-to-the-alt-tech-ecosystem">an ecosystem of alternative platforms</a> that mirrors their mainstream counterparts, but with administrations more supportive of their political objectives. These include Voat and the .Win-network (now defunct Reddit-clones), BitChute and Parler (YouTube-clones), Gab (Twitter-clone), and many others.</p>
  </li>
  <li>
    <p>The Decentralized Web. Developers concerned about centralized control of content have built <a href="/post/042_p2p_models">a number of decentralized platforms</a> that aim to limit the control a single entity can have over human communication. These efforts include Mastodon, a Twitter alternative consisting of federated Twitter-like subcommunities, and ad-hoc communities like a loose network of self-hosted git servers. The decentralized web also encompasses much older decentralized networks like Usenet and email, and bears similarity to the motivations behind some Web3 technologies.</p>
  </li>
</ol>

<p>It is this second category, of ostensibly self-governed online communities, that interests me. Building a community-run platform is a laudable goal, but does the implementation of Mastodon and similar platforms fall short of those aspirations? How do we measure how ‘decentralized’ a platform is, or inversely, how much influence an oligarchy has over a platform?</p>

<h3 id="the-community-size-argument">The Community-Size Argument</h3>

<p>One common approach to measuring social influence is to examine population size. <a href="https://kevq.uk/centralisation-and-mastodon/">The largest three Mastodon instances host over half of the entire Mastodon population</a>. Therefore, the administrators of those three instances have disproportionate influence over permitted speech and users on Mastodon as a whole. Users who disagree with their decisions are free to make their own Mastodon instances, but if the operators of the big three instances refuse to connect to yours then half the Mastodon population will never see your posts.</p>

<p>A size disparity in community population is inevitable without intervention. Social networks follow <a href="https://en.wikipedia.org/wiki/Scale-free_network#Generative_models">rich-get-richer</a> dynamics: new users are likely to join an existing vibrant community rather than a fledgling one, increasing its population and making it more appealing to future users. This is fundamentally a social pressure, but it is even further amplified by search engines, which are more likely to return links to larger and more active sites, funneling potential users towards the largest communities.</p>
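<p>As a toy illustration of these dynamics, here is a minimal preferential-attachment sketch (the parameters are made up, not fitted to any real platform) in which each newcomer joins an existing community with probability proportional to its current size:</p>

```python
import random

def simulate_growth(n_users: int, n_seed: int = 5, seed: int = 42) -> list[int]:
    """Rich-get-richer growth: each new user joins an existing community
    with probability proportional to its current population."""
    rng = random.Random(seed)
    sizes = [1] * n_seed  # start with a few one-person communities
    for _ in range(n_users):
        # weighted choice: larger communities attract more newcomers
        idx = rng.choices(range(len(sizes)), weights=sizes)[0]
        sizes[idx] += 1
    return sorted(sizes, reverse=True)

final_sizes = simulate_growth(10_000)
```

<p>Even though every community starts identical, small early leads compound, and the final populations end up highly unequal.</p>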

<p>But is size disparity necessarily a failure of decentralization? Proponents of Mastodon have emphasized <a href="https://runyourown.social/#why-run-a-small-social-network-site?">the importance of small communities</a> that fit the needs of their members, and the Mastodon developers have stated that <a href="https://blog.joinmastodon.org/2019/03/the-role-of-mastodon.social-in-the-mastodon-ecosystem/">most Mastodon instances are small, topic-specific communities</a>, with their <code>mastodon.social</code> as a notable exception. If smaller communities operate comfortably under the shadow of larger ones, perhaps this is a healthy example of decentralized governance.</p>

<p>Before exploring alternative methods for measuring social centralization, let’s compare a few of these decentralized and alt-tech platforms using the lens of community size. Below is a plot of sub-community population sizes for five platforms.</p>

<object data="/postImages/community_sizes_percent.svg" alt="Size of sub-communities relative to largest community per platform" type="image/svg+xml"></object>

<p>The y-axis represents the population of each community as a fraction of the largest community’s size. In other words, the largest community on each platform has a size of “1”, while a community with a tenth as many users has a size of “0.1”. The x-axis is what fraction of communities have at least that large a population. This allows us to quickly show that about 2% of Mastodon instances are at least 1% the size of the largest instance, or alternatively, that 98% of Mastodon instances have fewer than 1% as many users as the largest instance.</p>
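<p>The data behind such a plot can be computed directly from a list of community populations. A sketch (the example populations are invented, not real instance counts):</p>

```python
def relative_size_curve(sizes: list[int]) -> list[tuple[float, float]]:
    """For each community, pair the fraction of communities at least this
    large (x) with its population relative to the largest community (y)."""
    ordered = sorted(sizes, reverse=True)
    largest = ordered[0]
    n = len(ordered)
    return [((rank + 1) / n, size / largest) for rank, size in enumerate(ordered)]

# hypothetical instance populations; the first point is always the
# largest community itself, at relative size 1.0
curve = relative_size_curve([800_000, 250_000, 120_000, 5_000, 900, 40])
```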

<p>This puts Mastodon in similar territory to two centralized platforms, BitChute and Voat. Specifically, the number of commenters on BitChute channels follows a similar distribution to Mastodon instance sizes, while the distribution of populations across Voat “subverses” (analogous to “subreddits”) is even more skewed.</p>

<p>By contrast, the number of users on self-hosted Git servers (<a href="https://epjdatascience.springeropen.com/track/pdf/10.1140/epjds/s13688-022-00345-7.pdf">the Penumbra of Open-Source</a>), and unique authors on <a href="https://usenet.nereid.pl/">Polish Usenet newsgroups</a>, is far more equitable: around a third of git servers have at least 1% as many users as the largest, while the majority of newsgroups are within 1% of the largest.</p>

<h3 id="inter-community-influence">Inter-Community Influence</h3>

<p>If smaller communities exist largely independently of larger ones, then the actions of administrators on those large communities <em>do not matter</em> to the small community, and even in the face of a large population disparity a platform can be effectively decentralized. How can we measure this notion of “independence” in a platform-agnostic way such that we can compare across platforms?</p>

<p>Each of the five platforms examined above has some notion of cross-community activity. On Mastodon, users can follow other users on both their own instance and external instances. On the other four platforms, users can directly participate in multiple communities, by contributing to open source projects on multiple servers (Penumbra), or commenting on multiple channels (BitChute), subverses (Voat), or newsgroups (Usenet).</p>

<p>In network science terminology, we can create a <em>bipartite graph,</em> or a graph with two types of vertices: one for communities, and one for users. Edges between users and communities indicate that a user interacts with that community. For example, here’s a diagram of Mastodon relationships, where an edge of ‘3’ indicates that a user follows three accounts on a particular instance:</p>

<object data="/postImages/mastodon_follows.svg" alt="Example follow relationships between Mastodon instances" type="image/svg+xml"></object>

<p>This allows us to simulate the disruption caused by <em>removing</em> an instance: if <code>mastodon.social</code> went offline tomorrow, how many follow relationships from users on <code>kolektiva.social</code> and <code>scholar.social</code> would be disrupted? More globally, what percentage of all follow relationships by remaining users have just been pruned? If the disruption percentage is high, then lots of information flowed from the larger community to the smaller communities. Conversely, if the disruption percentage is low, then users of the smaller communities are largely unaffected.</p>
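<p>In code, the disruption measure might look like the following sketch, reusing the instance names from the example above with made-up follow counts, and treating users whose only instance is removed as leaving the network along with it:</p>

```python
from collections import defaultdict

# edges[(user, instance)] = number of accounts `user` follows on `instance`
# (illustrative follow counts, not real data)
edges = {
    ("alice", "mastodon.social"): 3, ("alice", "scholar.social"): 1,
    ("bob", "mastodon.social"): 5,
    ("carol", "kolektiva.social"): 2, ("carol", "mastodon.social"): 1,
}

def disruption(edges: dict, removed: str) -> float:
    """Fraction of follow relationships held by *remaining* users that are
    severed when `removed` goes offline. Users active only on the removed
    instance drop out of the network entirely."""
    homes = defaultdict(set)
    for user, instance in edges:
        homes[user].add(instance)
    # survivors still participate in at least one remaining instance
    survivors = {u for u, insts in homes.items() if insts - {removed}}
    total = sum(w for (u, _), w in edges.items() if u in survivors)
    lost = sum(w for (u, i), w in edges.items() if u in survivors and i == removed)
    return lost / total if total else 0.0

# bob leaves along with mastodon.social; alice and carol, the remaining
# users, lose 4 of their 7 combined follow relationships
assert abs(disruption(edges, "mastodon.social") - 4 / 7) < 1e-9
```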

<p>Here is just such a plot, simulating removing the largest community from each platform, then the two largest, three largest, etcetera:</p>

<object data="/postImages/platform_disruption.svg" alt="Simulated disruption as communities are removed" type="image/svg+xml"></object>

<p>From this perspective on inter-community relationships, each platform looks a little different. Removing the largest three Mastodon instances has a severe effect on the remaining population, but removing further communities has a rapidly diminished effect. Removing Usenet newsgroups and BitChute channels has a similar pattern, but less pronounced.</p>

<p>Voat and the Penumbra require additional explanation. Voat, like Reddit, allowed users to subscribe to “subverses” to see posts from those communities on the front page of the site. New users were subscribed to a set of 27 subverses by default. While the two largest subverses by population (<code>QRV</code> and <code>8chan</code>) were topic-specific and non-default, the third largest subverse, <code>news</code>, was a default subverse with broad appeal and high overlap with all other communities. Therefore, removing the largest two communities would have had little impact on users uninvolved in QAnon discussions, but removing <code>news</code> would impact almost every user on the site and cut nearly 10% of interactions site-wide.</p>

<p>The Penumbra consists of independently operated git servers, only implicitly affiliated in that some developers contributed to projects hosted on multiple servers. Since servers are largely insular, most developers only contribute to projects on one, and so those developers are removed entirely along with the git server. If a user contributed to projects hosted on two servers then disruption will increase when the first server is removed, but will <em>decrease</em> when the second server is removed, and the developer along with it. This is shown as spiky oscillations, where one popular git server is removed and drives up disruption, before another overlapping git server is removed and severs the other side of those collaborations.</p>

<p>Sometimes you may be uninterested in the impact of removing the largest 2, 3, or 10 instances, and want a simple summary statistic for whether one platform is “more centralized” than another. One way to approximate this is to calculate the area under the curve for each of the above curves:</p>

<object data="/postImages/real_networks_auc.svg" alt="Area under the curve for each graph" type="image/svg+xml"></object>

<p>This scores Mastodon as the most centralized, because removing its largest instances has such a large effect on its peers. By contrast, while the Voat curve is visually striking, it’s such a sharp increase because removing the largest two communities <em>doesn’t</em> have a large impact on the population.</p>
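<p>A trapezoidal rule is enough to compute that summary statistic. In this sketch, the two input curves are hypothetical stand-ins for a platform hit hard by the first few removals versus one affected only gradually:</p>

```python
def disruption_auc(curve: list[float]) -> float:
    """Trapezoidal area under a disruption curve, where curve[k] is the
    disruption after removing the k+1 largest communities. Normalized by
    the x-range so platforms with different community counts compare fairly."""
    if len(curve) < 2:
        return curve[0] if curve else 0.0
    area = sum((a + b) / 2 for a, b in zip(curve, curve[1:]))
    return area / (len(curve) - 1)

steep = [0.6, 0.7, 0.75, 0.78, 0.8]   # the biggest communities matter a lot
gradual = [0.1, 0.2, 0.3, 0.4, 0.5]   # influence spread more evenly
assert disruption_auc(steep) > disruption_auc(gradual)
```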

<h3 id="situating-within-network-science">Situating Within Network Science</h3>

<p>“Centralization” is an ill-defined term, and network scientists have a range of ways of measuring centralization for different scenarios. These metrics fall into three broad categories:</p>

<table>
  <thead>
    <tr>
      <th>Scale</th>
      <th>Description</th>
      <th>Examples</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>Vertex</td>
      <td>Measures how central a role a single node plays in the network</td>
      <td>Betweenness centrality, Eigenvector centrality</td>
    </tr>
    <tr>
      <td>Cluster</td>
      <td>Measures aspects of a particular group of vertices</td>
      <td>Assortativity / Homophily, Modularity, Insularity / Border index</td>
    </tr>
    <tr>
      <td>Graph</td>
      <td>A summary attribute of an entire graph</td>
      <td>Diameter, Density, Cheeger number</td>
    </tr>
  </tbody>
</table>

<p>These metrics can capture aspects of centrality like “this vertex is an important bridge connecting two regions of a graph” or “this vertex is an important hub because many shortest paths between vertices pass through it.” They can measure how tight a bottleneck a graph contains (or, phrased another way, how well a graph can be partitioned in two), they can measure how much more likely similar vertices are to connect with one another, or how skewed the degree distribution of a graph is.</p>
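<p>To make one of these concrete: Freeman’s degree centralization is a classic graph-scale score for how skewed a degree distribution is, reaching 1 for a perfect star and 0 when every vertex has the same degree. A minimal sketch (not one of the exact metrics tabled above, but from the same family):</p>

```python
def degree_centralization(adj: dict[str, set[str]]) -> float:
    """Freeman degree centralization of an undirected graph: the summed
    gap between the highest-degree vertex and every other vertex,
    normalized by the maximum possible gap (achieved by a star)."""
    n = len(adj)
    degrees = [len(neighbors) for neighbors in adj.values()]
    top = max(degrees)
    return sum(top - d for d in degrees) / ((n - 1) * (n - 2))

star = {"hub": {"a", "b", "c"}, "a": {"hub"}, "b": {"hub"}, "c": {"hub"}}
ring = {"a": {"b", "d"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c", "a"}}
assert degree_centralization(star) == 1.0  # one vertex dominates
assert degree_centralization(ring) == 0.0  # perfectly equal degrees
```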

<p>However, these metrics are mostly intended for fully connected unipartite graphs, and do not always have clear parallels in disconnected or bipartite graphs. Consider the following examples:</p>

<object data="/postImages/centralization-figure.svg" alt="Diagram of centralized, decentralized, and ambiguous networks" type="image/svg+xml"></object>

<p>Many would intuitively agree that the left-most graph is central: one community in the center is larger than the rest, <em>and</em> serves as a bridge connecting several other communities together. By contrast, the middle graph is decentralized, because while the communities aren’t all the same size, none are dramatically larger than one another, and none serve a critical structural role as a hub or bridge.</p>

<p>The graph on the right is harder to describe. One community is <em>much</em> larger than its peers, but the remaining graph is identical to the decentralized example. By degree distribution, the graph would appear to be centralized. If we add a single edge connecting the giant community to any user in the main graph, then the giant community’s betweenness centrality score would skyrocket because of its prominent role in so many shortest-paths between users. However, it would still be inappropriate to say that the largest community plays a pivotal role in the activity of the users in the rest of the graph - it’s hardly connected at all!</p>

<p>My disruption metric is a cluster-level or mesoscale measurement for bipartite graphs that measures the influence of each community on its peers, although you can calculate the area under the disruption curve to make a graph-scale summary statistic. Using this approach, the centralized example is decidedly centralized, and the decentralized and ambiguous graphs are decidedly <em>not.</em></p>

<h3 id="takeaways">Takeaways</h3>

<p>Community size disparity is natural. Some communities will have broader appeal, and benefit more from rich-get-richer effects than their smaller, more focused peers. Therefore, even a thriving decentralized platform may have a highly skewed population distribution. To measure the influence of oligarchies on a platform, we need a more nuanced view of interconnection and information flow between communities.</p>

<p>I have introduced a ‘disruption’ metric that accounts for both the size of a community and its structural role in the rest of the graph, measuring its potential influence on its peers. While the disruption metric illustrates how population distributions can be deceptive, it is only a preliminary measurement. Follows across communities and co-participation in communities are a rough proxy for information flow, or a network of <em>potential</em> information flow. A more precise metric for observed information flow might measure the number of messages that are boosted (“retweeted”) from one Mastodon instance to another, or might measure how frequently a new discussion topic, term, or URL appears first in one community, and later appears in a “downstream” community.</p>

<p>Does population size correlate with these measurements of information flow and influence? Are some smaller communities more influential than their size would suggest? How much does the graph structure of potential information flow predict ‘social decentralization’ in practice? There are many more questions to explore in this domain - but this is a start!</p>
]]></description>
</item>
<item>
<title>AntNet: Networks from Ant Colonies</title>
<link>https://backdrifting.net/post/070_antnet</link>
<description><![CDATA[<h2 id="antnet-networks-from-ant-colonies">AntNet: Networks from Ant Colonies</h2>

<p><strong>Posted 08/07/2023</strong></p>

<p>Ant nests look kind of like networks - they have rooms, and tunnels between the rooms, analogous to vertices and edges on a graph. A graph representation of a nest might help us answer questions about different ant species like:</p>

<ul>
  <li>
    <p>Do some species create more rooms than others?</p>
  </li>
  <li>
    <p>Do some species have different room layouts - such as a star around a central room, a main corridor that rooms sprout off of, something closer to a random network, or something like a small-world network?</p>
  </li>
  <li>
    <p>Do some species dig their rooms deeper, perhaps to better insulate from cold weather, or with additional ‘U’ shaped bends to limit flooding in wetter climates?</p>
  </li>
</ul>

<p>I’m no entomologist, and I will not answer those questions today. I <em>will</em>, however, start work on a tool that can take photos of ant farms and produce corresponding network diagrams. I don’t expect this tool to be practical for real-world research: ant farms are constrained to two dimensions, while ants in the wild will dig in three, and this tool may miss critical information like the shapes of rooms. But it will be a fun exercise, and maybe it will inspire something better.</p>

<h3 id="a-pictures-worth-a-thousand-words">A picture’s worth a thousand words</h3>

<p>We’ll start with a photo of an ant farm, cropped to only include the dirt:</p>

<p><img src="/postImages/antnet1.png" alt="Color photo of an ant farm" /></p>

<p>I want to reduce this image to a Boolean map of where the ants have and have not excavated. For a first step, I’ll flatten it to black and white, adjusting brightness and contrast to try to mark the tunnels as black, and the remaining dirt as white. Fortunately, <a href="https://imagemagick.org/">ImageMagick</a> makes this relatively easy:</p>

<pre><code>convert -white-threshold 25% -brightness-contrast 30x100 -alpha off -threshold 50% original.png processed.png
</code></pre>

<p><img src="/postImages/antnet2.png" alt="B/W photo of an ant farm" /></p>

<p>Clearly this is a noisy representation. Some flecks of dirt are dark enough to flatten to ‘black,’ and the ants have left some debris in their tunnels that appears ‘white.’ The background color behind the ant farm is white, so some regions that are particularly well excavated appear bright instead of dark. We might be able to mitigate that last problem by coloring each pixel according to its distance from either extreme, so that dark tunnels and bright backgrounds are set to ‘black’ and the medium brown dirt is set to ‘white’ - but that’s more involved, and we’ll return to that optimization later if necessary.</p>

<p>In broad strokes, we’ve set excavated space to black and dirt to white. If we aggregate over regions of the image, maybe we can compensate for the noise.</p>

<h3 id="hexagonal-lattices">Hexagonal Lattices</h3>

<p>My first thought was to overlay a square graph on the image. For each, say, 10x10 pixel region of the image, count the number of black pixels, and if they’re above a cutoff threshold then set the whole square to black, otherwise set it to white. This moves us from a messy image representation to a simpler tile representation, like a board game.</p>

<p>There are a few problems with this approach. Looking ahead, I want to identify rooms and tunnels based on clumps of adjacent black tiles. A square has only four neighbors - eight if we count diagonals, but diagonally-adjacent tiles don’t necessarily imply that the ants have dug a tunnel between the two spaces. So, we’ll use hexagons instead of squares: six neighbors, no awkwardness about ‘corners,’ and we can still construct a regular lattice:</p>

<p><img src="/postImages/antnet3.png" alt="Hexagonal lattice overlayed on the B/W image" /></p>

<p>So far so good! A hexagonal coordinate system is a little different from a Cartesian grid, but fortunately <a href="/post/064_hex_grids">I’ve worked with cube coordinates before.</a> For simplicity, we’ll set the diameter of a hexagon to the diameter of a tunnel. This should help us distinguish between tunnels and rooms later on, because tunnels will be around one tile wide, while rooms will be much wider.</p>
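<p>As a quick refresher on cube coordinates: each hex is addressed by a triple <code>(q, r, s)</code> with <code>q + r + s == 0</code>, and its six neighbors differ by fixed unit offsets, which is what makes adjacency checks so pleasant:</p>

```python
# The six fixed neighbor offsets in cube coordinates; each offset
# also sums to zero, so neighbors stay on the q + r + s == 0 plane
CUBE_DIRECTIONS = [(1, -1, 0), (1, 0, -1), (0, 1, -1),
                   (-1, 1, 0), (-1, 0, 1), (0, -1, 1)]

def hex_neighbors(q, r, s):
    return [(q + dq, r + dr, s + ds) for dq, dr, ds in CUBE_DIRECTIONS]
```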

<p>Unfortunately, a second problem still remains: there’s no good threshold for how many black pixels should be inside a hexagon before we set it to black. A hexagon smack in the middle of a tunnel <em>should</em> contain mostly black pixels. But what if the hexagons aren’t centered? In a worst-case scenario a tunnel will pass right <em>between</em> two hexagons, leaving them both with half as many black pixels. If we set the threshold too tight then we’ll set both tiles to white and lose a tunnel. If we set the threshold too loose then we’ll set both tiles to black and make a tunnel look twice as wide as is appropriate - perhaps conflating some tunnels with rooms.</p>

<p>So, <a href="/post/062_dithering">I’m going to try dithering!</a> This is a type of error propagation used in digital signal processing, typically in situations like converting color images to black and white. In our case, tiles close to white will still be set to white, and tiles close to black will still be darkened to black - but in an ambiguous case where two adjoining tiles are not-quite-dark-enough to be black, we’ll round one tile to white, and the other to black. The result is mostly okay:</p>

<p><img src="/postImages/antnet4.png" alt="Dithered hexagons" /></p>
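<p>In one dimension, the error-propagation idea behind dithering looks like this (a simplified sketch: the real version diffuses error across the hex lattice in two dimensions, and the threshold is a placeholder):</p>

```python
def dither_tiles(darkness, threshold=0.5):
    # darkness: per-tile values in [0, 1]. Each tile rounds to black
    # (True) or white (False), and the rounding error is carried to the
    # next tile - so two adjacent not-quite-dark tiles yield one black
    # and one white, rather than both being lost.
    result, error = [], 0.0
    for d in darkness:
        value = d + error
        black = value >= threshold
        result.append(black)
        error = value - (1.0 if black else 0.0)  # carry rounding error forward
    return result

# Two not-quite-dark tiles: the first rounds white, the second absorbs
# the leftover darkness and rounds black
tiles = dither_tiles([0.4, 0.4, 0.1])
```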

<p>We’re missing some of the regions in the upper right that the ants excavated so completely that the white background shone through. We’re also missing about two hexagons needed to connect the rooms and tunnels on the center-left with the rest of the nest. We might be able to correct both these issues by coloring pixels according to contrast and more carefully calibrating the dithering process, but we’ll circle back to that later.</p>

<h3 id="flood-filling">Flood Filling</h3>

<p>So far we’ve reduced a messy color photograph to a much simpler black-and-white tile board, but we still need to identify rooms, junctions, and tunnels. I’m going to approach this with a depth first search:</p>

<ol>
  <li>
    <p>Define a global set of explored tiles, and a local set of “neighborhood” tiles. Select an unexplored tile at random as a starting point.</p>
  </li>
  <li>
<p>Mark the current tile as explored, add it to the neighborhood, and make a list of unexplored neighbors.</p>
  </li>
  <li>
<p>If the list is longer than three, recursively explore each neighbor starting at step 2.</p>
  </li>
  <li>
    <p>Once there are no more neighbors to explore, mark the neighborhood as a “room” if it contains at least ten tiles, and a “junction” if it contains at least four. Otherwise, the neighborhood is part of a tunnel, and should be discarded.</p>
  </li>
  <li>
<p>If any unexplored tiles remain, select one and go to step 2.</p>
  </li>
</ol>
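<p>The procedure above can be sketched as an iterative depth-first search (simplified: square adjacency stands in for the hex lattice, and every unexplored neighbor is followed rather than applying the step-3 condition; all names here are mine):</p>

```python
def classify_regions(black_tiles, neighbors):
    # black_tiles: set of tile coordinates; neighbors(tile) lists
    # adjacent coordinates. Returns (rooms, junctions); smaller
    # neighborhoods are tunnel fragments and are discarded.
    explored, rooms, junctions = set(), [], []
    for start in black_tiles:
        if start in explored:
            continue
        neighborhood, stack = set(), [start]
        while stack:                       # depth-first flood fill
            tile = stack.pop()
            if tile in explored:
                continue
            explored.add(tile)
            neighborhood.add(tile)
            stack.extend(n for n in neighbors(tile)
                         if n in black_tiles and n not in explored)
        if len(neighborhood) >= 10:
            rooms.append(neighborhood)      # at least ten tiles: a room
        elif len(neighborhood) >= 4:
            junctions.append(neighborhood)  # at least four: a junction
    return rooms, junctions

def square_neighbors(tile):
    x, y = tile
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

tiles = ({(x, 0) for x in range(12)}       # a 12-tile "room"
         | {(x, 5) for x in range(5)}      # a 5-tile "junction"
         | {(0, 9), (1, 9)})               # a 2-tile tunnel fragment
rooms, junctions = classify_regions(tiles, square_neighbors)
```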

<p>Once all tiles have been explored, we have a list of “rooms” and a list of “junctions,” each of which are themselves lists of tiles. We can visualize this by painting the rooms blue and the junctions red:</p>

<p><img src="/postImages/antnet5.png" alt="Flood filled" /></p>

<p>Looking good so far!</p>

<h3 id="making-a-graph">Making a Graph</h3>

<p>We’re most of the way to a graph representation. We need to create a ‘vertex’ for each room or junction, with a size proportional to the number of tiles in the room, and a position based on the ‘center’ of the tiles.</p>

<p>Then we need to add edges. For this we’ll return to a depth-first flood fill algorithm. This time, however, we’ll recursively explore all tiles adjacent to a room that aren’t part of another room or junction, to see which other vertices are reachable. This won’t preserve the shape, length, or width of a tunnel, but it will identify which areas of the nest are reachable from which others:</p>

<p><img src="/postImages/antnet6.png" alt="Graph Representation" /></p>

<p>And there we have it!</p>
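<p>For the curious, the edge-discovery pass might look something like this sketch (function and variable names are mine, not from the linked repository, and square adjacency again stands in for the hex lattice):</p>

```python
def find_edges(vertices, black_tiles, neighbors):
    # vertices: list of tile sets (rooms and junctions). Flood outward
    # from each vertex through tunnel tiles - black tiles that belong to
    # no vertex - and record which other vertices are reachable.
    owner = {}                                   # tile -> vertex index
    for v, tiles in enumerate(vertices):
        for t in tiles:
            owner[t] = v
    edges = set()
    for v, tiles in enumerate(vertices):
        seen, stack = set(tiles), list(tiles)
        while stack:
            for n in neighbors(stack.pop()):
                if n not in black_tiles or n in seen:
                    continue
                seen.add(n)
                if n in owner:
                    edges.add(frozenset((v, owner[n])))  # reached another vertex
                else:
                    stack.append(n)                      # tunnel tile: keep exploring
    return edges

def square_neighbors(tile):
    x, y = tile
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

# Two single-tile "rooms" joined by a three-tile tunnel
rooms = [{(0, 0)}, {(4, 0)}]
black = {(x, 0) for x in range(5)}
edges = find_edges(rooms, black, square_neighbors)
```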

<h3 id="drawbacks-limitations-next-steps">Drawbacks, Limitations, Next Steps</h3>

<p>We’ve gone from a color photo of an ant farm to a network diagram, all using simple algorithms, no fancy machine learning. I think we have a decent result for a clumsy first attempt!</p>

<p>There are <em>many</em> caveats. We’re missing some excavated spaces because of the wall color behind the ant farm in our sample photo. The dithering needs finer calibration to identify some of the smaller tunnels. Most importantly, an enormous number of details need to be calibrated for each ant farm photo. The brightness and contrast adjustments and noise reduction, the hexagon size, the dithering thresholds, and the room and junction sizes for flood filling, may <em>all</em> vary between each colony photo.</p>

<p>For all these reasons, I’m pausing here. If I think of a good way to auto-calibrate those parameters and improve on the image flattening and dithering steps, then maybe I’ll write a part two. Otherwise, this has progressed beyond a couple-evening toy project, so <a href="https://github.com/milo-trujillo/AntNet">I’ll leave the code as-is.</a></p>
]]></description>
</item>
<item>
<title> Geolocating Users via Text Messages
</title>
<link>https://backdrifting.net/post/069_freaky_leaky_sms</link>
<description><![CDATA[<h2 id="geolocating-users-via-text-messages">Geolocating Users via Text Messages</h2>

<p><strong>Posted 7/28/2023</strong></p>

<p>A recent research paper, <a href="https://arxiv.org/pdf/2306.07695.pdf">Freaky Leaky SMS: Extracting User Locations by Analyzing SMS Timings (PDF)</a>, purports to geolocate phone numbers by texting them and analyzing response times. This is creepy, interesting, and hopefully a warning that can help phone companies better protect their customers’ privacy in the future. Today I’m writing up a short summary, context, and some of my thoughts about the study. The original paper is intended for computer security and machine learning scientists, but I intend to write for a broader audience in this post.</p>

<h3 id="the-concept">The Concept</h3>

<p>When Alice sends Bob a text message, Bob’s phone sends back an acknowledgement automatically - “I received your text!” If Alice’s phone doesn’t receive that acknowledgement before a timeout, Alice gets a “Failed to deliver text” error.</p>

<object data="/postImages/sms_delays.svg" alt="Diagram of Alice texting Bob, and Bob's phone sending back a delivery report" type="image/svg+xml"></object>

<p>If Alice is standing next to Bob in Chicago, that text should be delivered quickly, and the acknowledgement should arrive almost instantly. If Alice is in Chicago and Bob is in Hong Kong, it should take slightly longer for the round-trip text message and acknowledgement.</p>

<p>So, if the delay before a text acknowledgement correlates with the distance between the phones, can we text Bob from three different phones, and by analyzing the delays, triangulate his position? What level of precision can we obtain when tracking Bob in this way?</p>

<h3 id="the-limitations">The Limitations</h3>

<p>In reality, text message delays will be messy. If Alice’s texts travel through a telecommunications hub in Chicago, then there may be a delay related to the amount of congestion on that hub. If there are multiple paths between Alice and Bob across telecommunications equipment, then each path may incur a different delay. Finally, the routes of telecommunications equipment may not take birds-eye-view shortest paths between locations. For example, if Alice and Bob are on opposite sides of a mountain range, the phone switches connecting them may divert around the mountains or through a pass, rather than directly over.</p>

<p>However, “messy” does not mean random or uncorrelated. If we text Bob enough times from enough phones, and apply some kind of noise reduction (maybe taking the median delay from each test-phone?), we <em>may</em> be able to overcome these barriers and roughly identify Bob’s location.</p>

<h3 id="the-study">The Study</h3>

<p>The researchers set up a controlled experiment: they select 34 locations across Europe, the United States, and the United Arab Emirates, and place a phone at each. They assign three of these locations as “senders” and all 34 as “receivers.”</p>

<object data="/postImages/sms_sensor_locations.svg" alt="Receiver locations across the U.S., Europe, and the UAE" type="image/svg+xml"></object>

<p>To gather training data, they send around 155K text messages, in short bursts every hour over the course of three days. This provides a baseline of round-trip texting time from the three senders to the 34 receivers during every time of day (and therefore, hopefully, across a variety of network congestion levels).</p>

<p>For testing, the researchers can text a phone number from their three senders, compare the acknowledgement times to their training data, and predict which of the 34 locations a target phone is at. The researchers compare the test and training data using a ‘multilayer perceptron’, but the specific machine learning model isn’t critical here. I’m curious whether a much simpler method, like k-nearest-neighbors or a decision-tree, might perform adequately, but that’s a tangent.</p>

<p>The heart of the research paper consists of two results, in sections 5.1 and 5.2. First, they try to distinguish whether a target is ‘domestic’ or ‘abroad.’ For example, the sensors in the UAE can tell whether a phone number is also at one of the locations in the UAE with 96% accuracy. This is analogous to our starting example of distinguishing between a Chicago-Chicago text and a Chicago-Hong-Kong text, and is relatively easy, but a good baseline. They try distinguishing ‘domestic’ and ‘abroad’ phones from a variety of locations, and retain high accuracy so long as the two countries are far apart. Accuracy drops to between 62% and 75% when both the sensor and target are in nearby European countries, where timing differences will be much smaller. Still better than random guessing, but no longer extremely reliable.</p>

<p>Next, the researchers pivot to distinguishing between multiple target locations in a single country - more challenging both because the response times will be much closer, and because they must now predict from among four or more options rather than a simple “domestic” and “abroad”. Accuracy varies between countries and the distances between target locations, but generally, the technique ranges between 63% and 98% accurate.</p>

<p>The rest of the paper has some auxiliary results, like how stable the classifier accuracy is over time as congestion patterns change, how different phones have slightly different SMS acknowledgement delays, and how well the classifier functions if the target individual travels between locations. There’s also some good discussion on the cause of errors in the classifier, and comparisons to other types of SMS attacks.</p>

<h3 id="discussion">Discussion</h3>

<p>These results are impressive, but it’s important to remember that they are distinguishing <em>only</em> between subsets of 34 predefined locations. This study is a far cry from “enter any phone number and get a latitude and longitude,” but clearly there’s a lot of signal in the SMS acknowledgement delay times.</p>

<p>So what can be done to fix this privacy leak? Unfortunately, I don’t see any easy answers. Phones <em>must</em> return SMS acknowledgements, or we’d never know if a text message was delivered successfully. Without acknowledgements, if someone’s phone battery dies, or they put it in airplane mode, or lose service while driving through a tunnel, text messages to them would disappear into the void.</p>

<p>Phones could add a random delay before sending an acknowledgement - or the telecommunications provider could add such a delay on their end. This seems appealing, but the delay would have to be short - wait too long to send an acknowledgement, and the other phones will time out and report that the text failed to deliver. If you add a short delay, chosen from, say, a uniform or normal distribution, then sending several texts and taking the median delay will ‘de-noise’ the acknowledgement time.</p>
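<p>A toy simulation (all numbers invented) shows why: a short uniform jitter shifts every measurement by roughly the same constant, so the median delay of repeated texts still preserves the gap between a nearby and a faraway target.</p>

```python
import random
import statistics

def measured_delay(true_ms, jitter_ms=30, texts=25):
    # Hypothetical numbers, not from the paper: each acknowledgement
    # gets a random defensive delay of up to jitter_ms milliseconds
    samples = [true_ms + random.uniform(0, jitter_ms) for _ in range(texts)]
    return statistics.median(samples)

random.seed(0)
nearby = measured_delay(200)    # target close to the sender
faraway = measured_delay(350)   # target much farther away
gap = faraway - nearby          # the distance signal survives the jitter
```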

<p>Right now there are two prominent “defenses” against this kind of attack. The first is that it’s a complicated mess to pull off. To generalize from the controlled test in the paper to finding the geolocation of any phone would require more ‘sending’ phones, lots more receiving phones for calibration, and a <em>ton</em> of training data, not to mention a data scientist to build a classifier around that data. The second is that the attack is “loud:” texting a target repeatedly to measure response times will bombard them with text messages. This doesn’t prevent the attack from functioning, but at least the victim receives some indication that <em>something</em> weird is happening to them. There is a type of diagnostic SMS ping called a <em>silent SMS</em> that does not notify the user, but these diagnostic messages can only be sent by a phone company, and are intended for things like confirming reception between a cell phone and tower.</p>

<p>Overall, a great paper on a disturbing topic. I often find side-channel timing attacks intriguing; the researchers haven’t identified a ‘bug’ exactly, the phone network is functioning exactly as intended, but this is a highly undesired consequence of acknowledgement messages, and a perhaps unavoidable information leak if we’re going to provide acknowledgement at all.</p>
]]></description>
</item>
<item>
<title> We don't need ML, we have gzip!
</title>
<link>https://backdrifting.net/post/068_text_classification_gzip</link>
<description><![CDATA[<h2 id="we-dont-need-ml-we-have-gzip">We don’t need ML, we have gzip!</h2>

<p><strong>Posted 7/15/2023</strong></p>

<p><a href="https://aclanthology.org/2023.findings-acl.426.pdf">A recent excellent paper</a> performs a sophisticated natural language processing task, usually solved using complicated deep-learning neural networks, using a shockingly simple algorithm and gzip. This post will contextualize and explain that paper for non-computer scientists, or for those who do not follow news in NLP and machine learning.</p>

<h3 id="what-is-text-classification">What is Text Classification?</h3>

<p>Text Classification is a common task in natural language processing (NLP). Here’s an example setting:</p>

<blockquote>
  <p>Provided are several thousand example questions from <a href="https://en.wikipedia.org/wiki/Yahoo!_Answers">Yahoo! Answers</a>, pre-categorized into bins like ‘science questions,’ ‘health questions,’ and ‘history questions.’ Now, given an arbitrary new question from Yahoo! Answers, which category does it belong in?</p>
</blockquote>

<p>This kind of categorization is easy for humans, and traditionally much more challenging for computers. NLP researchers have spent many years working on variations of this problem, and regularly host text classification competitions at NLP conferences. There are a few broad strategies to solving such a task.</p>

<h4 id="bag-of-words-distance">Bag of Words Distance</h4>

<p>One of the oldest computational tools for analyzing text is the <em>Bag of Words</em> model, which dates back to the 1950s. In this approach, we typically discard all punctuation, capitalization, and common “stop words” like “the,” “a,” and “is” that convey only structural information. Then we count the number of unique words in a sample of text, and how many times each occurs, then normalize by the total number of words.</p>

<p>For example, we may take the sentence “One Ring to rule them all, One Ring to find them, One Ring to bring them all, and in the darkness bind them” and reduce it to a bag of words:</p>

<pre><code>{
    'one': 0.27,
    'ring': 0.27,
    'rule': 0.09,
    'find': 0.09,
    'bring': 0.09,
    'darkness': 0.09,
    'bind': 0.09
}
</code></pre>
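<p>A few lines of Python reproduce the bag above (the stop-word list here is an ad-hoc one, just enough for this sentence):</p>

```python
import re
from collections import Counter

# Ad-hoc stop-word list for this example sentence
STOP_WORDS = {"the", "a", "is", "to", "them", "all", "and", "in"}

def bag_of_words(text):
    # Lowercase, strip punctuation, drop stop words, then normalize counts
    words = [w for w in re.findall(r"[a-z]+", text.lower())
             if w not in STOP_WORDS]
    counts = Counter(words)
    total = sum(counts.values())
    return {word: round(count / total, 2) for word, count in counts.items()}

bag = bag_of_words("One Ring to rule them all, One Ring to find them, "
                   "One Ring to bring them all, and in the darkness bind them")
```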

<p>We could then take another passage of text, reduce it to a bag of words, and compare the bags to see how similar the word distributions are, or whether certain words have much more prominence in one bag than another. There are <em>many</em> tools for performing this kind of distribution comparison, and many ways of handling awkward edge cases like words that only appear in one bag.</p>

<p>The limitations of bags of words are obvious - we’re destroying all the context! Language is much more than just a list of words and how often they appear: the order of words, and their co-occurrence, conveys lots of information, and even structural elements like stop words and punctuation convey some information, or we wouldn’t use them. A bag of words distills language down to something that basic statistics can wrestle with, but in so doing boils away much of the humanity.</p>

<h4 id="word-embeddings">Word Embeddings</h4>

<p>Natural Language Processing has moved away from bags of words in favor of <em>word embeddings.</em> The goal here is to capture exactly that context of word co-occurrence that a bag of words destroys. For a simple example, let’s start with Asimov’s laws of robotics:</p>

<blockquote>
  <p>A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey orders given it by human beings except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law</p>
</blockquote>

<p>Removing punctuation and lower-casing all terms, we could construct a window of size two, encompassing the two words before and after each term as context:</p>

<table>
  <thead>
    <tr>
      <th>Term</th>
      <th>Context</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>robot</td>
      <td>a, may, not, harm, must, obey, law, protect</td>
    </tr>
    <tr>
      <td>human</td>
      <td>injure, a, being, or, allow, to, it, by, except</td>
    </tr>
    <tr>
      <td>orders</td>
      <td>must, obey, given, it, where, such, would, conflict</td>
    </tr>
    <tr>
      <td>…</td>
      <td>…</td>
    </tr>
  </tbody>
</table>

<p>This gives us a small amount of context for each term. For example, we know that “orders” are things that can be “obeyed,” “given,” and may “conflict.” You can imagine that if we used a larger corpus for training, such as the complete text of English Wikipedia, we would get a lot more context for each word, and a much better sense of how frequently words appear in conjunction with one another.</p>
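<p>The context table above can be generated with a short sketch (punctuation handling is simplistic, just enough for this passage):</p>

```python
def context_windows(text, window=2):
    # Lowercase, strip trailing punctuation, and collect the `window`
    # words on either side of each term as its context set
    words = [w.strip(".,").lower() for w in text.split()]
    contexts = {}
    for i, w in enumerate(words):
        ctx = words[max(0, i - window):i] + words[i + 1:i + 1 + window]
        contexts.setdefault(w, set()).update(ctx)
    return contexts

laws = ("A robot may not injure a human being or, through inaction, allow "
        "a human being to come to harm. A robot must obey orders given it "
        "by human beings except where such orders would conflict with the "
        "First Law. A robot must protect its own existence as long as such "
        "protection does not conflict with the First or Second Law")
ctx = context_windows(laws)
```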

<p>Now let’s think of each word as a point in space. The word “robot” should appear in space close to other words that it frequently appears near, such as “harm,” “obey,” and “protect,” and should appear far away from words it never co-occurs with, such as “watermelon.” Implicitly, this means that “robot” will also appear relatively close to other words that share the same context - for example, while “robot” does not share context with “orders,” both “orders” and “robot” share context with “obey,” so the words “robot” and “orders” will not be too distant.</p>

<p>This mathematical space, where words are points with distance determined by co-occurrence and shared context, is called an <em>embedding.</em> The exact process for creating this embedding, including how many dimensions the space should use, how much context should be included, how points are initially projected into space, how words are tokenized, whether punctuation is included, and many finer details, vary between models. For more details on the training process, I recommend <a href="https://www.cs.cmu.edu/~dst/WordEmbeddingDemo/tutorial.html">this Word Embedding tutorial from Dave Touretzky</a>.</p>

<p>Once we have an embedding, we can ask a variety of questions, like word association: kitten is to cat as puppy is to X? Mathematically, we can draw a vector from kitten to cat, then translate that vector to start at “puppy” and look for the closest point in the embedding to find “dog.” This works because “cat” and “dog” are in a similar region of the embedding, as they are both animals, and both pets. The words “kitten” and “puppy” will be close to their adult counterparts, and so also close to animal and pet associations, but will <em>additionally</em> be close to youth terms like “baby” and “infant”.</p>

<p><img src="/postImages/word_embedding.png" alt="Word Embedding" width="50%" /></p>
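<p>Here is the analogy in miniature, using hypothetical 2-D coordinates chosen so the arithmetic works out (real embeddings have hundreds of dimensions learned from text):</p>

```python
from math import dist

# Toy 2-D embedding with invented coordinates, for illustration only
emb = {"cat": (4.0, 1.0), "kitten": (4.0, 3.0),
       "dog": (1.0, 1.0), "puppy": (1.0, 3.0), "watermelon": (9.0, 9.0)}

# The kitten-to-cat vector ("grow up"), applied starting at puppy
dx = emb["cat"][0] - emb["kitten"][0]
dy = emb["cat"][1] - emb["kitten"][1]
target = (emb["puppy"][0] + dx, emb["puppy"][1] + dy)

# Nearest word to the translated point
answer = min(emb, key=lambda w: dist(emb[w], target))
```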

<p>(Note that these embeddings can also contain undesired metadata: for example, “doctor” may be more closely associated with “man” than “woman”, and the inverse for “nurse”, if the training data used to create the embedding contains such a gender bias. Embeddings represent word adjacency and similar use in written text, and should not be mistaken for an understanding of language or a reflection of the true nature of the world.)</p>

<p>In addition to describing words as points in an embedding, we can now describe documents as a series of points, or as an average of those points. Given two documents, we can now calculate the average distance from points in one document to points in another document. Returning to the original problem of text classification, we can build categories of documents as clouds of points. For each new prompt, we can calculate its distance from each category, and place it in the closest category.</p>

<p>These embedding techniques allow us to build software that is impressively flexible: given an embedded representation of ‘context’ we can use vectors to categorize synonyms and associations, and build machines that appear to ‘understand’ and ‘reason’ about language much more than preceding Bag of Words models, simpler approaches to representing context like Markov Chains, or attempts at formally parsing language and grammar. The trade-off is that these models are immensely complicated, and require enormous volumes of training data. Contemporary models like BERT have hundreds of millions of parameters, and can only be trained by corporations with vast resources like Google and IBM.</p>

<h3 id="the-state-of-the-art">The State of the Art</h3>

<p>In modern Natural Language Processing, deep neural networks using word embeddings dominate. They produce the best results in a wide variety of tasks, from text classification to translation to prediction. While variations between NLP models are significant, the general consensus is that more parameters and more training data increase performance. This focuses most of the field on enormous models built by a handful of corporations, and has turned attention away from simpler or more easily understood techniques.</p>

<p>Zhiying Jiang, Matthew Y.R. Yang, Mikhail Tsirlin, Raphael Tang, Yiqin Dai, and Jimmy Lin did not use a deep neural network and a large embedding space. They did not use machine learning. They used gzip.</p>

<p>Their approach is simple: compression algorithms, like gzip, are very good at recognizing patterns and representing them succinctly. If two pieces of text are similar, such as sharing many words, or especially entire phrases, then compressing the two pieces of text together should produce a quite compact result. If the two pieces of text have little in common, then their gzipped representation will be less compact.</p>

<p>Specifically, given texts <code>A</code>, <code>B</code>, and <code>C</code>, if <code>A</code> is more similar to <code>B</code> than <code>C</code>, then we can usually expect:</p>

<div class="language-python highlighter-coderay"><div class="CodeRay">
  <div class="code"><pre><span style="color:#369;font-weight:bold">len</span>(gzip(A+B)) - <span style="color:#369;font-weight:bold">len</span>(gzip(A)) &lt; <span style="color:#369;font-weight:bold">len</span>(gzip(A+C)) - <span style="color:#369;font-weight:bold">len</span>(gzip(A))
</pre></div>
</div>
</div>

<p>So, given a series of pre-categorized texts in a training set, and given a series of uncategorized texts in a test set, the solution is clear: compress each test text along with each training text to find the ‘distance’ between the test text and each training text. Select the <code>k</code> nearest neighbors, and find the most common category among them. Report this category as the predicted category for the test text.</p>

<p>Their complete algorithm is a fourteen line Python script:</p>

<div class="language-python highlighter-coderay"><div class="CodeRay">
  <div class="code"><pre><span style="color:#080;font-weight:bold">import</span> <span style="color:#B44;font-weight:bold">gzip</span>
<span style="color:#080;font-weight:bold">import</span> <span style="color:#B44;font-weight:bold">numpy</span> <span style="color:#080;font-weight:bold">as</span> np
<span style="color:#080;font-weight:bold">for</span> (x1,_) <span style="color:#080;font-weight:bold">in</span> test_set:
    Cx1 = <span style="color:#369;font-weight:bold">len</span>(gzip.compress(x1.encode()))
    distance_from_x1 = []
    <span style="color:#080;font-weight:bold">for</span> (x2,_) <span style="color:#080;font-weight:bold">in</span> training_set:
        Cx2 = <span style="color:#369;font-weight:bold">len</span>(gzip.compress(x2.encode()))
        x1x2 = <span style="background-color:hsla(0,100%,50%,0.05)"><span style="color:#710">&quot;</span><span style="color:#D20"> </span><span style="color:#710">&quot;</span></span>.join([x1,x2])
        Cx1x2 = <span style="color:#369;font-weight:bold">len</span>(gzip.compress(x1x2.encode()))
        ncd = (Cx1x2 - <span style="color:#369;font-weight:bold">min</span>(Cx1,Cx2)) / <span style="color:#369;font-weight:bold">max</span> (Cx1,Cx2)
        distance_from_x1.append(ncd)
    sorted_idx = np.argsort(np.array(distance_from_x1))
    top_k_class = training_set[sorted_idx[:k], <span style="color:#00D">1</span>]
    predict_class = <span style="color:#369;font-weight:bold">max</span>(<span style="color:#369;font-weight:bold">set</span>(top_k_class), key = top_k_class.count)
</pre></div>
</div>
</div>

<p>Shockingly, this performs on par with most modern NLP classifiers: it beats many of them on common English classification data sets, and scores above average on most. BERT has higher accuracy on every data set, but not by much. A fourteen line Python script with gzip in lieu of machine learning performs almost as well as Google’s enormous embedded deep learning neural network. (See table 3 in the original paper, page 5)</p>

<p>A more recent variant on the classification challenge is to classify text in a language not included in the training data. For example, if we expended enormous resources training BERT on English text, is there any way to pivot that training and apply that knowledge to Swahili? Can we use embeddings from several languages to get some general cross-language fluency at text categorization in <em>other</em> languages? Or, if we do need to re-train, how little training data can we get away with to re-calibrate our embeddings and function on a new language? This is unsurprisingly a very difficult task. The gzip classifier outperformed all contemporary machine learning approaches that the authors compared to. (See table 5 in the original paper, page 6)</p>

<h3 id="conclusions">Conclusions</h3>

<p>This paper is a great reminder that more complicated tools, like ever-larger machine-learning models, are not always better. In particular, I think their approach hits upon an interesting balance regarding complexity. Bag of words models discard context and punctuation, making computation simple, but at the cost of destroying invaluable information. However, keeping all of this information in the form of an embedding, and attempting to parse human language, incurs a heavy complexity cost. There’s a lot of “fluff” in language that we do not necessarily need for classification. The gzip approach <em>keeps</em> the extra context of word order and punctuation, but does not try to tackle the harder problem of understanding language in order to address the simpler problem of looking for similarities. In general, tools should be as simple as possible to complete their task, but no simpler.</p>

<h3 id="edit-7182023---misleading-scores-in-paper">EDIT 7/18/2023 - Misleading Scores in Paper</h3>

<p>It appears that the authors have made an unusual choice in their accuracy calculations, which inflates their scores compared to contemporary techniques. In summary, they use a kNN classifier with <code>k=2</code>, but rather than choosing a tie-breaking metric for when the two neighbors diverge, they mark their algorithm as correct if <em>either</em> neighbor has the correct label. This effectively makes their accuracy a “top 2” classifier rather than a kNN classifier, which misrepresents the performance of the algorithm. This isn’t necessarily an invalid way to measure accuracy, but it <em>does</em> need to be documented, and <em>isn’t</em> what we’d expect in traditional kNN. The gzip scores under a standard k=2 kNN <em>remain</em> impressive for such a simple approach and are still competitive - but they’re no longer beating deep neural network classifiers for non-English news datasets (table 5).</p>

<p>Here’s the problem in a little more detail:</p>

<ul>
  <li>
    <p>The authors compress all training texts along with the test prompt, to find the gzip distance between the prompt and each possible category example</p>
  </li>
  <li>
    <p>Rather than choosing the <em>closest</em> example and assuming the categories match, the authors choose the <code>k</code> closest examples, take the mode of their categories, and predict <em>that.</em> This k-nearest-neighbors (kNN) strategy is common in machine learning, and protects against outliers</p>
  </li>
  <li>
<p>When the vote among the neighbors is tied, one must have a tie-breaking strategy. A common choice is to pick the tied category belonging to the closer neighbor. Another choice might be to expand the neighborhood, considering one additional neighbor until the tie is broken - or inversely, to shrink the neighborhood, using a smaller k until the tie is broken. Yet another choice might be to randomly choose one of the tied categories.</p>
  </li>
  <li>
    <p>The authors use <code>k=2</code>, meaning that they examine the two closest neighbors, which will either be of the same category, or will be a tie. Since they will encounter many ties, their choice of tie-breaking algorithm is very important</p>
  </li>
  <li>
    <p>In the event of a tie between two neighbors, the authors report success if <em>either</em> neighbor has the correct label</p>
  </li>
</ul>
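<p>The difference between the two scoring schemes fits in a few lines (a sketch; the category labels are invented, and the kNN tie-break shown is one common convention):</p>

```python
from collections import Counter

def knn_predict(labels, k=2):
    # Standard kNN: labels are ordered nearest-first; take the mode of
    # the k nearest, breaking ties in favor of the closer neighbor
    counts = Counter(labels[:k])
    best = max(counts.values())
    for label in labels[:k]:
        if counts[label] == best:
            return label

def paper_marks_correct(labels, truth):
    # The paper's k=2 scoring: a sample counts as correct if EITHER of
    # the two nearest labels matches the ground truth
    return truth in labels[:2]

# With tied neighbors, strict kNN must commit to one answer (and here
# picks the wrong one), while the paper's metric still scores a success
nearest = ["sports", "politics"]
```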

<p>The code in question <a href="https://github.com/bazingagin/npc_gzip/blob/a46991564161023bba3b1267e0e74c69dab8f8eb/experiments.py#L112-L120">can be found here</a>. Further analysis by someone attempting to reproduce the results of the paper <a href="https://kenschutte.com/gzip-knn-paper/">can be found here</a> and is <a href="https://github.com/bazingagin/npc_gzip/issues/3">discussed in this GitHub issue</a>. In <a href="https://twitter.com/ZhiyingJ/status/1679988463431458818">conversations with the author</a> this appears to be an intentional choice - but unfortunately it’s one that makes the gzip classifier appear to outperform BERT, when in reality any method that offers two candidate classes and counts either match as a success will naturally outscore a classifier that must commit to a single answer.</p>

<p>I’ve heard the first author is defending their thesis tomorrow - Congratulations, good luck, hope it goes great!</p>

]]></description>
</item>
<item>
<title> Bloom Filters
</title>
<link>https://backdrifting.net/post/067_bloom_filters</link>
<description><![CDATA[<h2 id="bloom-filters">Bloom Filters</h2>

<p><strong>Posted 4/8/2023</strong></p>

<p>Sticking with a theme from my last post on <a href="/post/066_hyperloglog">HyperLogLog</a> I’m writing about more probabilistic data structures! Today: <em>Bloom Filters.</em></p>

<h3 id="what-are-they">What are they?</h3>

<p>Bloom filters track <em>sets:</em> you can add elements to the set, ask “is this element in the set?”, and estimate the size of the set. That’s it. So what’s the big deal? Most languages have a <code>set</code> in their standard library. You can build them with hash tables or trees pretty easily.</p>

<p>The magic is that Bloom filters can store a set in <em>constant space,</em> while traditional <code>set</code> data structures scale linearly with the elements they contain. You allocate some space when you create the Bloom filter - say, 8 kilobytes of cache space - and the Bloom filter will use exactly 8 kilobytes no matter how many elements you add to it or how large those elements are.</p>

<p>There are two glaring limitations:</p>

<ol>
  <li>
    <p>You cannot enumerate a Bloom filter and ask <em>what</em> elements are in the set, you can only ask <em>whether</em> a specific element may be in the set</p>
  </li>
  <li>
    <p>Bloom filters are probabilistic: they can tell you that an element is <em>not</em> in the set with certainty (no false negatives), but they can only tell you that an element <em>may</em> be in the set, with uncertainty</p>
  </li>
</ol>

<p>When creating a Bloom filter, you tune two knobs that adjust their computational complexity and storage requirements, which in turn control their accuracy and the maximum number of unique elements they can track.</p>

<h3 id="applications">Applications</h3>

<p>Why would we want a non-deterministic set that can’t tell us definitively what elements it includes? Even if constant-space storage is impressive, what use is a probabilistic set?</p>

<h4 id="pre-cache-for-web-browsers">Pre-Cache for Web Browsers</h4>

<p>Your web browser stores images, videos, CSS, and other web elements as you browse, so that if you navigate to multiple pages on a website that re-use elements, or you browse from one website to another and back again, it doesn’t need to re-download all those resources. However, spinning hard drives are slow, so checking an on-disk cache for every element of a website will add a significant delay, especially if we learn that we don’t have the element cached and then need to fetch it over the Internet anyway. One solution here is using a Bloom filter as a pre-cache: check whether the URL of a resource is in the Bloom filter, and if we get a “maybe” then we check the disk cache, but if we get a “no” then we definitely don’t have the asset cached and need to make a web request. Because the Bloom filter takes a small and fixed amount of space we can cache it in RAM, even if a webpage contains many thousands of assets.</p>
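<p>The control flow of such a pre-cache might look like the following Python sketch (the <code>may_contain</code> interface and the stub class are hypothetical stand-ins, not a real browser API):</p>

```python
class StubBloom:
    """Stand-in for a real Bloom filter: wraps a set, no false positives."""
    def __init__(self, items):
        self._items = set(items)

    def may_contain(self, url):
        return url in self._items

def fetch_resource(url, bloom, disk_cache, download):
    # Consult the in-memory filter before touching the slow disk cache.
    if bloom.may_contain(url):
        cached = disk_cache.get(url)    # disk read; may still be a miss
        if cached is not None:
            return cached
    return download(url)                # definite miss: fetch over the network

bloom = StubBloom(["/style.css"])
disk = {"/style.css": b"cached bytes"}
assert fetch_resource("/style.css", bloom, disk, lambda u: b"net") == b"cached bytes"
assert fetch_resource("/logo.png", bloom, disk, lambda u: b"net") == b"net"
```

The point of the pattern is that a “no” answer skips the disk entirely; only “maybe” answers pay the cost of the disk read.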

<h4 id="pre-cache-for-databases">Pre-Cache for Databases</h4>

<p>Databases can use Bloom filters in a similar way. SQL databases are typically stored as binary trees (if indexed well) to facilitate fast lookup times in queries. However, if a table is large, and lots of data must be read from a spinning hard drive, then even a well-structured table can be slow to read through. If queries often return zero rows, then this is an expensive search for no data! We can use Bloom filters as a kind of lossy-compressed version of rows or columns in a table. Does a row containing the value the user is asking for exist in the table? If the Bloom filter returns “maybe” then evaluate the query. If the Bloom filter returns “no” then return an empty set immediately, without loading the table at all.</p>

<h4 id="tracking-novel-content">Tracking Novel Content</h4>

<p>Social media sites may want to avoid recommending the same posts to users repeatedly in their timeline - but maintaining a list of every tweet that every user has ever seen would require an unreasonable amount of overhead. One possible solution is maintaining a Bloom filter for each user, which would use only a small and fixed amount of space and can identify posts that are definitely new to the user. False positives will lead to skipping some posts, but in an extremely high-volume setting this may be an acceptable tradeoff for guaranteeing novelty.</p>

<h3 id="how-do-bloom-filters-work">How do Bloom filters work?</h3>

<h4 id="adding-elements">Adding elements</h4>

<p>Bloom filters consist of an array of <code>m</code> bits, initially all set to 0, and <code>k</code> hash functions (or a single function with <code>k</code> salts). To add an element to the set, you hash it with each hash function. You use each hash to choose a bucket from <code>0</code> to <code>m-1</code>, and set that bucket to 1. In pseudocode:</p>

<div class="language-ruby highlighter-coderay"><div class="CodeRay">
  <div class="code"><pre><span style="color:#080;font-weight:bold">def</span> <span style="color:#06B;font-weight:bold">add</span>(element)
    <span style="color:#080;font-weight:bold">for</span> i <span style="color:#080;font-weight:bold">in</span> <span style="color:#00D">0</span>...k
        bin = hash(element, i) % m
        <span style="color:#036;font-weight:bold">Bloomfilter</span>[bin] = <span style="color:#00D">1</span>
</pre></div>
</div>
</div>

<p>As a visual example, consider a ten-bit Bloom filter with three hash functions. Here we add two elements:</p>

<object data="/postImages/bloom_filter_adding.svg" alt="Adding two elements to a Bloom filter, with one overlapping bit" type="image/svg+xml"></object>

<h4 id="querying-the-bloom-filter">Querying the Bloom filter</h4>

<p>Querying the Bloom filter is similar to adding elements. We hash our element <code>k</code> times, check the corresponding bits of the filter, and if <em>any</em> of the bits are zero then the element does not exist in the set.</p>

<div class="language-ruby highlighter-coderay"><div class="CodeRay">
  <div class="code"><pre><span style="color:#080;font-weight:bold">def</span> <span style="color:#06B;font-weight:bold">isMaybePresent</span>(element)
    <span style="color:#080;font-weight:bold">for</span> i <span style="color:#080;font-weight:bold">in</span> <span style="color:#00D">0</span>...k
        bin = hash(element, i) % m
        <span style="color:#080;font-weight:bold">if</span>( <span style="color:#036;font-weight:bold">Bloomfilter</span>[bin] == <span style="color:#00D">0</span> )
            <span style="color:#080;font-weight:bold">return</span> <span style="color:#069">false</span>
    <span style="color:#080;font-weight:bold">return</span> <span style="color:#069">true</span>
</pre></div>
</div>
</div>

<p>For example, if we query ‘salmon’, we find that one of the corresponding bits is set, but the other two are not. Therefore, we are certain that ‘salmon’ has not been added to the Bloom filter:</p>

<object data="/postImages/bloom_filter_searching.svg" alt="Searching for an element in a Bloom filter. While one bit is set, the other two are not, so the element has not been added to the set." type="image/svg+xml"></object>

<p>If <em>all</em> of the corresponding bits are one then the element <em>might</em> exist in the set, or those bits could be the result of a full- or several partial-collisions with the hashes of other elements. For example, here’s the same search for ‘bowfin’:</p>

<object data="/postImages/bloom_filter_searching2.svg" alt="Searching for an element in a Bloom filter. All bits are set from collisions with two other added words." type="image/svg+xml"></object>

<p>While ‘bowfin’ hasn’t been added to the Bloom filter, and neither of the added fish has a complete hash collision, the partial hash collisions with ‘swordfish’ and ‘sunfish’ cover the same bits as ‘bowfin’. Therefore, we cannot be certain whether ‘bowfin’ has been added to the filter.</p>
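<p>Putting the two pseudocode fragments together, a minimal runnable version might look like this Python sketch (salted SHA-256 stands in for the <code>k</code> independent hash functions, an implementation choice the post leaves open):</p>

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter mirroring the pseudocode above."""
    def __init__(self, m=80, k=3):
        self.m, self.k = m, k
        self.bits = [0] * m

    def _bins(self, element):
        # Salted, truncated SHA-256 as a stand-in for k hash functions.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{element}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, element):
        for b in self._bins(element):
            self.bits[b] = 1

    def is_maybe_present(self, element):
        # Any zero bit proves the element was never added.
        return all(self.bits[b] for b in self._bins(element))
```

Note that <code>is_maybe_present</code> can return a false positive, but a <code>False</code> result is always certain, just as described above.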

<h4 id="estimating-the-set-length">Estimating the Set Length</h4>

<p>There are two ways to estimate the number of elements in the set. One is to maintain a counter: every time we add a new element to the set, if those bits were not all already set, then we’ve definitely added a new item. If all the bits were set, then we can’t distinguish between adding a duplicate element and an element with hash collisions.</p>

<p>Alternatively, we can retroactively estimate the number of elements based on the density of 1-bits, the number of total bits, and the number of hash functions used, as follows:</p>

<object data="/postImages/bloom_filter_length.svg" alt="Estimating the number of elements in a Bloom filter based on its configuration and the number of one-bits" type="image/svg+xml"></object>

<p>In other words, the density of 1-bits should correlate with the number of elements added, where each element sets <code>k</code> or fewer (in the case of collision) bits.</p>
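<p>Assuming the figure above shows the standard density-based estimator <code>n ≈ -(m/k)·ln(1 - X/m)</code>, where <code>X</code> is the number of one-bits, it can be implemented directly:</p>

```python
import math

def estimate_count(bits, k):
    # n ~ -(m/k) * ln(1 - X/m): m bits total, X bits set, k hash functions.
    m = len(bits)
    x = sum(bits)
    if x == m:
        return float("inf")  # saturated filter: the estimate diverges
    return -(m / k) * math.log(1 - x / m)
```

An empty filter estimates zero elements, and the estimate grows with the density of one-bits until the filter saturates.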

<p>Both estimates will begin to undercount the number of elements as the Bloom filter “fills.” Once many bits are set to one, hash collisions will be increasingly common, and adding more elements will have little to no effect on the number of one-bits in the filter.</p>

<h4 id="configurability">Configurability</h4>

<p>Increasing the number of hash functions lowers the chance of a complete collision. For example, switching from two hash functions to four means you need twice as many bits to be incidentally set by other elements of the set before a query returns a false positive. While I won’t include the full derivation, the optimal number of hash functions is mathematically determined by the desired false-positive collision rate (one in a hundred, one in a thousand, etc):</p>

<object data="/postImages/bloom_filter_hashes.svg" alt="Optimal number of hash functions based on desired error rate" type="image/svg+xml"></object>

<p>However, increasing the number of hash functions also fills the bits of the Bloom filter more quickly, decreasing the total number of elements that can be stored. We can compensate by storing more bits in the Bloom filter, but this increases memory usage. Therefore, the optimal number of bits in a Bloom filter will <em>also</em> be based on the false-positive rate, and on the number of unique elements we expect to store, which will determine how “full” the filter bits will be.</p>

<object data="/postImages/bloom_filter_bins.svg" alt="Optimal number of Bloom filter bits based on input size and desired error rate" type="image/svg+xml"></object>

<p>If we want to store more elements without increasing the error rate, then we need more bits to avoid further collisions. If we want to insert the same number of elements and a <em>lower</em> error-rate, then we need more bits to lower the number of collisions. If we deviate from this math by using too few bits or too many hash functions then we’ll quickly fill the filter and our error rate will skyrocket. If we use <em>fewer</em> hash functions then we’ll increase the error-rate through sensitivity to collisions, unless we also increase the number of bits, which can lower the error-rate at the cost of using more memory than necessary.</p>

<p>Note that this math isn’t <em>quite</em> right - we need an integer number of hash functions, and an integer number of bits, so we’ll round both to land close to the optimal configuration.</p>
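<p>The standard closed-form optima (which I believe are the formulas plotted above) are easy to compute, rounding at the end as described:</p>

```python
import math

def optimal_parameters(n_expected, error_rate):
    # m = -n * ln(p) / (ln 2)**2 bits; k = (m / n) * ln 2 hash functions.
    # Standard closed-form optima, rounded to integers as noted above.
    m = -n_expected * math.log(error_rate) / (math.log(2) ** 2)
    k = (m / n_expected) * math.log(2)
    return round(m), round(k)

# 5000 elements at a 1% error rate: roughly 48 kilobits and 7 hashes.
m, k = optimal_parameters(5000, 0.01)
```

For 5000 elements this yields well over 40 kilobits at a 1% error rate, consistent with the simulation results discussed below.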

<h3 id="how-well-does-this-work-in-practice">How well does this work in practice?</h3>

<p>Let’s double-check the theoretical math with some simulations. I’ve inserted between one and five thousand elements, and used the above equations to solve for optimal Bloom filter parameters for desired error rates of 1%, 5%, and 10%.</p>

<p>Here’s the observed error rate, and the number of recommended hash functions, plotted using the mean result and a 95% confidence interval:</p>

<object data="/postImages/bloom_filter_observed_error_k.svg" alt="Observed false positive rate when using optimal filter parameters" type="image/svg+xml"></object>

<p>As we can see, our results are almost spot-on, and become more reliable as the Bloom filter increases in size! Here are the same simulation results, where the hue represents the number of bits used rather than the number of hash functions:</p>

<p><img src="/postImages/bloom_filter_observed_error_m.png" alt="Observed false positive rate when using optimal filter parameters" width="60%" /></p>

<p>Since the number of recommended bits changes with the number of inserted elements, I had to plot this as a scatter plot rather than a line plot. We can see that the number of bits needed steadily increases with the number of inserted elements, but especially with the error rate. While storing 5000 elements with a 5% error rate requires around 24 kilobits, maintaining a 1% error rate requires over 40 kilobits (5 kilobytes).</p>

<p>Put shortly, the math checks out.</p>

<h3 id="closing-thoughts">Closing Thoughts</h3>

<p>I think I’m drawn to these probabilistic data structures because they loosen a constraint that I didn’t realize existed to do the “impossible.”</p>

<p>Computer scientists often discuss a trade-off between time and space. Some algorithms and data structures use a large workspace to speed computation, while others can fit in a small amount of space at the expense of more computation.</p>

<object data="/postImages/bloom_filter_complexity_axis.svg" alt="Traditional tradeoff between time and space in computer science" type="image/svg+xml"></object>

<p>For example, inserting elements into a sorted array runs in <code>O(n)</code> - it’s quick to find the right spot for the new element, but it takes a long time to scoot all the other elements over to make room. By contrast, a hash table can insert new elements in (amortized) <code>O(1)</code>, meaning its performance scales much better. However, the array uses exactly as much memory as necessary to fit all its constituent elements, while the hash table must use several times more memory - and keep most of it empty - to avoid hash collisions. Similarly, compression algorithms pack data into more compact formats, but require additional computation to get useful results back out.</p>

<p>However, if we loosen accuracy and determinism, creating data structures like Bloom filters that can only answer set membership with a known degree of confidence, or algorithms like HyperLogLog that can count elements with some error, then we can create solutions that are both time <em>and</em> space efficient. Not just space efficient, but preposterously so: constant-space solutions to set membership and size seem fundamentally impossible. This trade-off in accuracy challenges my preconceptions about what kind of computation is possible, and that’s mind-blowingly cool.</p>
]]></description>
</item>
<item>
<title> HyperLogLog: Counting Without Counters
</title>
<link>https://backdrifting.net/post/066_hyperloglog</link>
<description><![CDATA[<h2 id="hyperloglog-counting-without-counters">HyperLogLog: Counting Without Counters</h2>

<p><strong>Posted 3/20/2023</strong></p>

<p>I recently learned about <a href="https://en.wikipedia.org/wiki/HyperLogLog">HyperLogLog</a>, which feels like cursed counter-intuitive magic, so I am eager to share.</p>

<h3 id="the-task">The Task</h3>

<p>We want to count unique items, like “how many unique words appear across all books at your local library?” or “how many unique Facebook users logged in over the past month?” For a small set of unique tokens, like counting the unique words in this blog post, you might store each word in a set or hash table as you read them, then count the length of your set when you’re done. This is simple, but means the amount of memory used will scale linearly with the number of unique tokens, making such an approach impractical when counting <em>enormous</em> sets of tokens. But what if I told you we could accurately estimate the number of unique words while storing only a single integer?</p>

<h3 id="probabilistic-counting-algorithm">Probabilistic Counting Algorithm</h3>

<p>To start with, we want to <em>hash</em> each of our words. A hash function takes arbitrary data and translates it to a ‘random’ but consistent number. For example, we’ll use a hash function that takes any word and turns it into a number from zero up to <code>2**64</code>, with a uniform probability across all possible numbers. A good hash function will be unpredictable, so changing a single letter in the word or swapping the order of letters will yield a completely different number.</p>

<p>Next, we take the resulting hash, treat it as binary, and count how many leading bits are zero. An example is shown below:</p>

<object data="/postImages/hyperloglog_hashing.svg" alt="Word -&gt; Hash function -&gt; Hash in hex -&gt; Hash in binary with leading zero-bits highlighted" type="image/svg+xml"></object>

<p>We repeat this process for every word, tracking only the highest number of leading zero-bits we’ve observed, which we’ll call <code>n</code>. When we reach the end of our data, we return <code>2**n</code> as our estimate of how many unique words we’ve seen.</p>
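<p>Here is the whole algorithm as a short Python sketch; the post doesn’t name a hash function, so truncated SHA-256 stands in for a 64-bit hash:</p>

```python
import hashlib

def leading_zero_bits(word, bits=64):
    # Truncated SHA-256 as a stand-in 64-bit hash.
    h = int.from_bytes(hashlib.sha256(word.encode()).digest()[:8], "big")
    return bits - h.bit_length()  # zeros before the first 1-bit

def probabilistic_count(words):
    # Track only the largest leading-zero run seen; estimate 2**n.
    n = 0
    for word in words:
        n = max(n, leading_zero_bits(word))
    return 2 ** n
```

Since hashing is deterministic, feeding in duplicate words cannot change the estimate, which is exactly why this counts <em>unique</em> words.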

<h3 id="probabilistic-counting-theory">Probabilistic Counting Theory</h3>

<p>So how in the world does this work? The key is that a good hash function returns hashes uniformly across its range, so we have turned each unique word into a random number. Since hash functions are deterministic, duplicate words will return the same hash.</p>

<p>A uniformly random number of fixed bit-length (for example, a random 64-bit integer) will start with a zero-bit with a probability of <code>1/2</code>, and will start with a 1-bit with a probability of <code>1/2</code>. It will start with two zero-bits with a probability of <code>1/4</code>, three zero-bits with a probability of <code>1/8</code>, and so on. A probability tree for this might look like:</p>

<object data="/postImages/hyperloglog_probability.svg" alt="Probability of a bit string starting with 0, then 00, then 000..." type="image/svg+xml"></object>

<p>We can run this explanation in reverse: if you have observed a hash that starts with three zero-bits, then <em>on average</em> you will have observed about 8 unique hashes, because around 1 in 8 hashes start with three zero-bits.</p>

<p>This sounds great, but there are two problems. First, the words “on average” are pretty important here: if you only examine one word, and it happens to have a hash starting with four leading zeros, then our probabilistic counting algorithm will guess that you’ve examined sixteen words, rather than one. Over 6% of hashes will start with four leading zeros, so this is easily possible. We need some way to overcome these ‘outliers’ and get a more statistically representative count of leading zeros.</p>

<p>Second, our probabilistic counting function can only return integer powers of two as estimates. It can guess that you’ve observed 8, 256, or 1024 words, but it can never estimate that you’ve observed 800 words. We want an estimator with a higher precision.</p>

<h3 id="outlier-compensation-and-precision-boosting-multiple-hashes">Outlier Compensation and Precision Boosting: Multiple Hashes</h3>

<p>One strategy for addressing both limitations of probabilistic counting is to use multiple hashes. If we hash each observed word using ten different hash functions (or one hash function with ten different salts, but that’s a technical tangent), then we can maintain ten different counts of the highest number of leading zeros observed. Then at the end, we return the average of the ten estimates.</p>

<p>The more hash functions we use, the less sensitive our algorithm will be to outliers. Additionally, averaging over multiple counts lets us produce non-integer estimates. For example, if half our hash functions yield a maximum of four leading zeros, and half yield a maximum of five leading zeros, then we could estimate <code>2**4.5</code> unique tokens, or around 23.</p>
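<p>A sketch of this multi-hash variant, again using salted SHA-256 as a stand-in for independent hash functions (the salting scheme is my own assumption):</p>

```python
import hashlib
import statistics

def leading_zeros_salted(word, salt, bits=64):
    # Salted, truncated SHA-256 as a stand-in for independent 64-bit hashes.
    digest = hashlib.sha256(f"{salt}:{word}".encode()).digest()[:8]
    h = int.from_bytes(digest, "big")
    return bits - h.bit_length()

def multi_hash_count(words, num_hashes=10):
    # Track the max leading-zero run separately under each salted hash.
    maxima = [0] * num_hashes
    for word in words:
        for salt in range(num_hashes):
            maxima[salt] = max(maxima[salt], leading_zeros_salted(word, salt))
    # Averaging the exponents allows non-integer powers of two,
    # matching the 2**4.5 example above.
    return 2 ** statistics.mean(maxima)
```

Note the inner loop: every word is hashed <code>num_hashes</code> times, which is exactly the cost problem described next.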

<p>This approach solves both our problems, but at a severe cost: now we need to calculate ten times as many hashes! If we’re counting upwards of billions of words, then this approach requires calculating nine billion additional hashes. Clearly, this won’t scale well.</p>

<h3 id="outlier-compensation-and-precision-boosting-hyperloglog">Outlier Compensation and Precision Boosting: HyperLogLog</h3>

<p>Fortunately, there is an alternative solution that requires no additional hashing, known as <em>HyperLogLog.</em> Instead of using multiple hash functions and averaging across the results, we can instead pre-divide our words into buckets, and average across <em>those.</em></p>

<p>For example, we could make 16 buckets, assign incoming hashes to each bucket uniformly, and maintain a “most leading zero-bits observed” counter for each bucket. Then we calculate an estimated number of unique elements from each bucket, and average across all buckets to get a global estimate.</p>

<p>For an easy approach to assigning hashes to each bucket, we can use the first four bits of each hash as a bucket ID, then count the number of leading zeros after this ID.</p>

<object data="/postImages/hyperloglog_bucket_assignment.svg" alt="First four bits used as a bucket ID, following two bits read as 'two leading zero-bits'" type="image/svg+xml"></object>

<p>Once again, averaging across several sets of “most leading zeros” will minimize the impact of outliers, and afford us greater precision, by allowing non-integer exponents for our powers of two. Unlike the multiple hash solution, however, this approach will scale nicely.</p>

<p>One downside to HyperLogLog is that the bucket-averaging process is a little complicated. Dividing hashes across multiple buckets diminishes the impact of outliers, as desired, but it also diminishes the impact of <em>all our hashes.</em> For example, say we have 64 hashes, spread across 16 buckets, so 4 hashes per bucket. With 64 hashes, we can expect, on average, one hash with six leading zeros. However, each bucket has only four hashes, and therefore an expected maximum of two leading zeros. So while one bucket probably has six, most have closer to two, and taking the arithmetic mean of the buckets would severely underestimate the number of unique hashes we’ve observed. Therefore, HyperLogLog has a more convoluted estimation algorithm, consisting of creating estimates from each bucket, taking their <a href="https://en.wikipedia.org/wiki/Harmonic_mean">harmonic mean</a>, multiplying by the number of buckets, and multiplying by a magic number derived from the number of buckets<sup id="fnref:1" role="doc-noteref"><a href="#fn:1" class="footnote" rel="footnote">1</a></sup>. This results in dampening outliers while boosting the estimate back into the appropriate range.</p>
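<p>A minimal HyperLogLog sketch in Python, using the standard estimate of alpha times the bucket count times the harmonic mean of the per-bucket estimates (truncated SHA-256 is my stand-in hash, and the constant <code>alpha = 0.673</code> is the magic number for 16 buckets):</p>

```python
import hashlib
from statistics import harmonic_mean

def hyperloglog_estimate(words, bucket_bits=4):
    # 2**bucket_bits buckets; alpha below only matches bucket_bits=4
    # (real implementations look alpha up by bucket count and apply
    # small- and large-range corrections).
    num_buckets = 2 ** bucket_bits
    maxima = [0] * num_buckets
    for word in words:
        h = int.from_bytes(hashlib.sha256(word.encode()).digest()[:8], "big")
        bucket = h >> (64 - bucket_bits)             # first bits: bucket ID
        rest = h & ((1 << (64 - bucket_bits)) - 1)   # remaining bits
        rank = (64 - bucket_bits) - rest.bit_length() + 1  # leading zeros + 1
        maxima[bucket] = max(maxima[bucket], rank)
    alpha = 0.673
    # alpha * m * harmonic_mean(per-bucket estimates 2**M_j)
    return alpha * num_buckets * harmonic_mean([2.0 ** m for m in maxima])
```

As with the earlier estimators, duplicates cannot change the result, so this counts unique elements while storing only sixteen small counters.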

<h3 id="how-well-does-it-work-in-practice">How well does it work in practice?</h3>

<p>Here’s a plot comparing the accuracy of Probabilistic counting (count leading zeros, no compensation for outliers), Probabilistic-Med counting (run Probabilistic using ten hash functions, return median of results), and HyperLogLog (our fancy bucket solution):</p>

<object data="/postImages/hyperloglog.svg" alt="Comparison between probabilistic counting, probabilistic counting using multiple hash functions and averaging, and hyperloglog." type="image/svg+xml"></object>

<p>I’ve generated random strings as input, and evaluate at 50 points on the x-axis, with 100 draws of random strings per x-axis point to create a distribution and error bars. The y-axis represents each estimation function’s guess as to the number of unique elements, with a 95% confidence interval.</p>

<p>Unsurprisingly, plain probabilistic counting does not fare well. When we generate thousands of strings, the likelihood that at least one will have many leading zeros is enormous, and since our algorithm relies on counting the <em>maximum</em> observed leading zeros, it’s extremely outlier sensitive.</p>

<p>Taking the mean across ten hash algorithms is <em>also</em> outlier-sensitive when the outliers are large enough, which is why I’ve opted for the median in this plot. Probabilistic-Med performs much better, but it suffers the same problems over a larger time-scale: as we read more and more unique tokens, the likelihood goes up that all ten hash functions will see at least one hash with many leading zeros. Therefore, as the number of unique tokens increases, Probabilistic-Med steadily begins to over-estimate the number of unique tokens, with increasing error bars.</p>

<p>HyperLogLog reigns supreme. While error increases with the number of unique hashes, it remains more accurate, with tighter error bars, than the multi-hash strategy, while remaining computationally cheap. We can increase HyperLogLog’s error tolerance and accuracy in high-unique-token scenarios by increasing the number of buckets, although this lowers accuracy when the number of unique tokens is small.</p>

<h3 id="closing-thoughts">Closing Thoughts</h3>

<p>This is so darn cool! Tracking the total number of unique elements without keeping a list of those elements seems impossible - and it is if you need absolute precision - but with some clever statistics we can get a shockingly close estimate.</p>

<p>If you’d like to see a working example, <a href="https://github.com/milo-trujillo/HyperLogLog">here’s the code I wrote for generating the accuracy plot</a>, which includes implementations of Probabilistic counting, Probabilistic-Med, and HyperLogLog. This is toy code in Python that converts all the hashes to strings of one and zero characters for easy manipulation, so it is <em>not</em> efficient and shouldn’t be treated as anything like an ideal reference.</p>

<p>If you enjoyed this post, you may enjoy my other writing on <a href="/post/054_dimensional_analysis">dimensional analysis</a>, <a href="/post/039_netsci">network science for social modeling</a>, or <a href="/post/065_algorithmic_complexity">algorithmic complexity</a>.</p>

<h4 id="footnotes">Footnotes</h4>
<div class="footnotes" role="doc-endnotes">
  <ol>
    <li id="fn:1" role="doc-endnote">
      <p>The derivation of this number is quite complex, so in practice it’s drawn from a lookup table or estimated <a href="#fnref:1" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
  </ol>
</div>
]]></description>
</item>
<item>
<title> Algorithmic Complexity
</title>
<link>https://backdrifting.net/post/065_algorithmic_complexity</link>
<description><![CDATA[<h2 id="algorithmic-complexity">Algorithmic Complexity</h2>

<p><strong>Posted 3/6/2023</strong></p>

<p><em>This is a post about Big-O notation and measuring algorithmic complexity; topics usually taught to computer science undergraduates in their second to fourth semester. It’s intended for curious people outside the field, or new students. There are many posts on this subject, but this one is mine.</em></p>

<p>In computer science we often care about whether an algorithm is an efficient solution to a problem, or whether one algorithm is more efficient than another approach. One might be tempted to measure efficiency in terms of microseconds it takes a process to run, or perhaps number of assembly instructions needed. However, these metrics will vary widely depending on what language an algorithm is implemented in, what hardware it’s run on, what other software is running on the system competing for resources, and a host of other factors. We’d prefer to think more abstractly, and compare one strategy to another rather than their implementations. In particular, computer scientists often examine how an algorithm <em>scales,</em> or how quickly it slows down as inputs grow very large.</p>

<h3 id="the-basics">The Basics</h3>

<p>Let’s start with a trivial example: given a list of numbers, return their sum. Looks something like:</p>

<object data="/postImages/big_o_sum_list.svg" alt="Walking a list left to right" type="image/svg+xml"></object>

<div class="language-ruby highlighter-coderay"><div class="CodeRay">
  <div class="code"><pre><span style="color:#080;font-weight:bold">def</span> <span style="color:#06B;font-weight:bold">sum</span>(list)
    total = <span style="color:#00D">0</span>
    <span style="color:#080;font-weight:bold">for</span> item <span style="color:#080;font-weight:bold">in</span> list
        total += item
    <span style="color:#080;font-weight:bold">end</span>
    <span style="color:#080;font-weight:bold">return</span> total
<span style="color:#080;font-weight:bold">end</span>
</pre></div>
</div>
</div>

<p>Since we need to read the entire list, this algorithm scales linearly with the length of the list - make the list a hundred times longer, and it will take roughly a hundred times longer to get a sum. We write this formally as <code>O(n)</code>, meaning “scales linearly with <code>n</code>, the size of the input.” We call this formal syntax “Big O notation,” where the ‘O’ stands for “order of approximation” (or in the original German, “Ordnung”).</p>

<p>Not all algorithms scale. If we were asked “return the third element in the list” then it wouldn’t matter whether the list is three elements long or three million elements long, we can get to the third element in a constant amount of time. This is written as <code>O(1)</code>, indicating no reliance on the input size.</p>

<p>Search algorithms give us our first example problem with divergent solutions. Given a stack of papers with names on them, tell me whether “Rohan” is in the stack. A trivial solution might look like:</p>

<object data="/postImages/big_o_linear_search.svg" alt="Walking a list left to right" type="image/svg+xml"></object>

<div class="language-ruby highlighter-coderay"><div class="CodeRay">
  <div class="code"><pre><span style="color:#080;font-weight:bold">def</span> <span style="color:#06B;font-weight:bold">hasName</span>(list)
    <span style="color:#080;font-weight:bold">for</span> name <span style="color:#080;font-weight:bold">in</span> list
        <span style="color:#080;font-weight:bold">if</span> name == <span style="background-color:hsla(0,100%,50%,0.05)"><span style="color:#710">&quot;</span><span style="color:#D20">Rohan</span><span style="color:#710">&quot;</span></span>
            <span style="color:#080;font-weight:bold">return</span> <span style="color:#069">true</span>
        <span style="color:#080;font-weight:bold">end</span>
    <span style="color:#080;font-weight:bold">end</span>
    <span style="color:#080;font-weight:bold">return</span> <span style="color:#069">false</span>
<span style="color:#080;font-weight:bold">end</span>
</pre></div>
</div>
</div>

<p>This scales linearly with the length of the list, just like summing the elements. If the list is in an unknown order then we have no choice but to examine every element. However, if we know the list is in alphabetical order then we can do better. Start in the middle of the list - if the name is Rohan, we’re done. If we’re after Rohan alphabetically, then discard the second half of the list, and repeat on the first half. If we’re before Rohan alphabetically, then discard the first half of the list and repeat on the second. If we exhaust the list, then Rohan’s not in it. This approach is called a <em>binary search,</em> and visually looks like:</p>

<object data="/postImages/big_o_binary_search.svg" alt="Binary search diagram" type="image/svg+xml"></object>

<p>In code, a binary search looks something like:</p>

<div class="language-ruby highlighter-coderay"><div class="CodeRay">
  <div class="code"><pre><span style="color:#080;font-weight:bold">def</span> <span style="color:#06B;font-weight:bold">hasName</span>(list)
    <span style="color:#080;font-weight:bold">if</span>( list.length == <span style="color:#00D">0</span> )
        <span style="color:#080;font-weight:bold">return</span> <span style="color:#069">false</span>
    <span style="color:#080;font-weight:bold">end</span>
    middle = list.length / <span style="color:#00D">2</span>
    <span style="color:#080;font-weight:bold">if</span>( list[middle] == <span style="background-color:hsla(0,100%,50%,0.05)"><span style="color:#710">&quot;</span><span style="color:#D20">Rohan</span><span style="color:#710">&quot;</span></span> )
        <span style="color:#080;font-weight:bold">return</span> <span style="color:#069">true</span>
    <span style="color:#080;font-weight:bold">elsif</span>( list[middle] &gt; <span style="background-color:hsla(0,100%,50%,0.05)"><span style="color:#710">&quot;</span><span style="color:#D20">Rohan</span><span style="color:#710">&quot;</span></span> )
        <span style="color:#777"># Search left half</span>
        <span style="color:#080;font-weight:bold">return</span> hasName(list.first(middle))
    <span style="color:#080;font-weight:bold">else</span>
        <span style="color:#777"># Search right half</span>
        <span style="color:#080;font-weight:bold">return</span> hasName(list[middle+<span style="color:#00D">1</span> .. list.length])
    <span style="color:#080;font-weight:bold">end</span>
<span style="color:#080;font-weight:bold">end</span>
</pre></div>
</div>
</div>

<p>With every step in the algorithm we discard half the list, so we look at far fewer than all the elements. Our binary search still gets slower as the input list grows longer - if we double the length of the list we need one extra search step - so the algorithm scales logarithmically rather than linearly, denoted <code>O(log n)</code>.</p>
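<p>To put numbers on that, here’s a quick sketch (in Python, for brevity) of the worst-case number of halving steps a binary search needs for various list lengths:</p>

```python
import math

# Worst-case halving steps for a binary search: doubling the
# list length adds only one step.
for n in [1_000, 2_000, 1_000_000, 2_000_000]:
    print(n, math.ceil(math.log2(n)))
```

<p>Even a list of a million names is resolved in about twenty steps.</p>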

<p>We’ll end this section by looking at two sorting algorithms: insertion sort, and merge sort.</p>

<h4 id="insertion-sort">Insertion Sort</h4>

<p>We want to sort a list, provided to us in random order. One simple approach is to grow a sorted portion at the front of the list: one at a time, we take elements from the unsorted remainder and find their correct position in the sorted portion we’ve built so far. To find that position, we compare the new element with the value to its left, and swap the pair if they’re out of order. Keep swapping left until the new element settles into its correct position. This visually looks like:</p>

<object data="/postImages/big_o_insertion_sort.svg" alt="Insertion sort step diagram" type="image/svg+xml"></object>

<p>One implementation might look like:</p>

<div class="language-ruby highlighter-coderay"><div class="CodeRay">
  <div class="code"><pre><span style="color:#080;font-weight:bold">def</span> <span style="color:#06B;font-weight:bold">insertionSort</span>(list)
    <span style="color:#080;font-weight:bold">for</span> i <span style="color:#080;font-weight:bold">in</span> <span style="color:#00D">0</span>.upto(list.length-<span style="color:#00D">1</span>)
        <span style="color:#080;font-weight:bold">for</span> j <span style="color:#080;font-weight:bold">in</span> (i-<span style="color:#00D">1</span>).downto(<span style="color:#00D">0</span>)
            <span style="color:#080;font-weight:bold">if</span>( list[j] &gt; list[j+<span style="color:#00D">1</span>] )
                list[j], list[j+<span style="color:#00D">1</span>] = list[j+<span style="color:#00D">1</span>], list[j]
            <span style="color:#080;font-weight:bold">else</span>
                <span style="color:#080;font-weight:bold">break</span> <span style="color:#777"># Done swapping, found the right spot!</span>
            <span style="color:#080;font-weight:bold">end</span>
        <span style="color:#080;font-weight:bold">end</span>
    <span style="color:#080;font-weight:bold">end</span>
    <span style="color:#080;font-weight:bold">return</span> list
<span style="color:#080;font-weight:bold">end</span>
</pre></div>
</div>
</div>

<p>Insertion sort is simple and easy to implement. If you were coming up with a sorting algorithm on the spot for something like sorting a deck of cards, you might invent something similar. So what’s the runtime?</p>

<p>In insertion sort, we walk the list from start to end, which is <code>O(n)</code>. For every new element we examine, however, we walk the list backwards from our current position to the start. This operation <em>also</em> scales linearly with the length of the list, and so is also <code>O(n)</code>. If we perform a backwards <code>O(n)</code> walk for every step of the forwards <code>O(n)</code> walk, that’s <code>O(n) * O(n)</code> for a total of <code>O(n^2)</code>. Can we do better?</p>

<h4 id="merge-sort">Merge Sort</h4>

<p>An alternative approach to sorting is to think of it as a divide-and-conquer problem. Split the list in half, and hand the first half to one underling and the second half to another underling, and instruct them each to sort their lists. Each underling does the same, splitting their lists in half and handing them to two further underlings. Eventually, an underling receives a list of length one, which is by definition already sorted. This splitting stage looks something like:</p>

<object data="/postImages/big_o_merge_sort1.svg" alt="Merge sort tree diagram splitting inputs" type="image/svg+xml"></object>

<p>Now we want to merge our results upwards. Each underling hands their sorted list back up to their superior, who now has two sorted sub-lists. The superior combines the two by first creating a new empty “merged” list with room for the contents of both. For every position in the merged list, the superior compares the top element of each sorted sub-list, and moves the smaller of the two to the merged list. This process looks like:</p>

<object data="/postImages/big_o_merge_lists.svg" alt="Merging two sorted sub-lists into one" type="image/svg+xml"></object>

<p>Once all elements from the two sub-lists have been combined into a merged list, the superior hands their newly sorted list upwards to <em>their</em> superior. We continue this process until we reach the top of the tree, at which point our work is done. This merge step looks like:</p>

<object data="/postImages/big_o_merge_sort2.svg" alt="Merge sort tree diagram merging results" type="image/svg+xml"></object>

<p>In code, the full algorithm might look something like:</p>

<div class="language-ruby highlighter-coderay"><div class="CodeRay">
  <div class="code"><pre><span style="color:#777"># Combine two sorted lists</span>
<span style="color:#080;font-weight:bold">def</span> <span style="color:#06B;font-weight:bold">merge</span>(left, right)
    merged = []
    <span style="color:#080;font-weight:bold">while</span>( left.length + right.length &gt; <span style="color:#00D">0</span> )
        <span style="color:#080;font-weight:bold">if</span>( left.length == <span style="color:#00D">0</span> )       <span style="color:#777"># Left empty, take from right</span>
            merged += right.shift(<span style="color:#00D">1</span>)
        <span style="color:#080;font-weight:bold">elsif</span>( right.length == <span style="color:#00D">0</span> )   <span style="color:#777"># Right empty, take from left</span>
            merged += left.shift(<span style="color:#00D">1</span>)
        <span style="color:#080;font-weight:bold">elsif</span>( left[<span style="color:#00D">0</span>] &lt; right[<span style="color:#00D">0</span>] )  <span style="color:#777"># Top of left stack is less, take it</span>
            merged += left.shift(<span style="color:#00D">1</span>)
        <span style="color:#080;font-weight:bold">else</span>                         <span style="color:#777"># Top of right stack is less, take it</span>
            merged += right.shift(<span style="color:#00D">1</span>)
        <span style="color:#080;font-weight:bold">end</span>
    <span style="color:#080;font-weight:bold">end</span>
    <span style="color:#080;font-weight:bold">return</span> merged
<span style="color:#080;font-weight:bold">end</span>

<span style="color:#777"># Takes a single list, sub-divides it, sorts results</span>
<span style="color:#080;font-weight:bold">def</span> <span style="color:#06B;font-weight:bold">mergeSort</span>(list)
    <span style="color:#080;font-weight:bold">if</span>( list.length &lt;= <span style="color:#00D">1</span> )
        <span style="color:#080;font-weight:bold">return</span> list <span style="color:#777"># Sorted already :)</span>
    <span style="color:#080;font-weight:bold">end</span>
    middle = list.length / <span style="color:#00D">2</span>
    left = list[<span style="color:#00D">0</span> .. middle-<span style="color:#00D">1</span>]
    right = list[middle .. list.length-<span style="color:#00D">1</span>]
    leftSorted = mergeSort(left)
    rightSorted = mergeSort(right)
    <span style="color:#080;font-weight:bold">return</span> merge(leftSorted, rightSorted)
<span style="color:#080;font-weight:bold">end</span>
</pre></div>
</div>
</div>

<p>So what’s the runtime of merge sort? Well, it takes <code>log n</code> halving steps to divide the list down to single elements, and every element in the list passes through each of those division levels. That gives us a runtime of <code>n * log n</code> to break the list apart and create the full tree diagram.</p>

<p>Merging two sorted lists together scales linearly with the size of the lists, so the merge step is <code>O(n)</code>. We need to perform a merge each time we move up a “level” of the tree, and there are <code>log n</code> levels to this tree. Therefore, the full merge process <em>also</em> scales with <code>O(n log n)</code>.</p>

<p>This gives us a total runtime of <code>O(n log n + n log n)</code> or <code>O(2n log n)</code> to create the tree and merge it back together. However, because we are concerned with how algorithms scale as the inputs become very large, we drop constants and all expressions but the dominant term - multiplying by 2 doesn’t mean much as <code>n</code> approaches infinity - and simplify the run time to <code>O(n log n)</code>. That’s a lot better than insertion sort’s <code>O(n^2)</code>!</p>
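<p>To make the gap concrete, here’s a rough comparison (a Python sketch) of the step counts each scaling law predicts - not exact runtimes, just orders of magnitude:</p>

```python
import math

# Predicted step counts for n^2 versus n*log2(n), to show how
# quickly the two scaling laws diverge as n grows.
for n in [10, 1_000, 1_000_000]:
    print(n, n ** 2, round(n * math.log2(n)))
```

<p>At a million elements, <code>n^2</code> predicts a trillion steps while <code>n log n</code> predicts about twenty million.</p>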

<h3 id="limitations-of-big-o-notation">Limitations of Big-O notation</h3>

<p>Big O notation typically describes an “average” or “expected” performance and not a “best case” or “worst-case”. For example, if a list is in a thoroughly random order, then insertion sort will have a performance of <code>O(n^2)</code>. However, if the list is already sorted, or only one or two elements are out of place, then insertion sort’s best-case performance is <code>O(n)</code>. That is, insertion sort will walk the list forwards, and if no elements are out of place, there will be no need to walk the list backwards to find a new position for any elements. By contrast, merge sort will <em>always</em> split the list into a tree and merge the branches back together, so even when handed a completely sorted list, merge sort’s best-case performance is still <code>O(n log n)</code>.</p>
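<p>We can see the best-case/worst-case gap empirically by instrumenting insertion sort to count its comparisons - a Python sketch mirroring the Ruby implementation earlier in the post:</p>

```python
def insertion_sort_comparisons(items):
    """Insertion sort that also counts how many comparisons it makes."""
    items = list(items)
    comparisons = 0
    for i in range(1, len(items)):
        for j in range(i - 1, -1, -1):
            comparisons += 1
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
            else:
                break  # Done swapping, found the right spot
    return items, comparisons

already_sorted = list(range(100))
print(insertion_sort_comparisons(already_sorted)[1])        # 99 - about n
print(insertion_sort_comparisons(already_sorted[::-1])[1])  # 4950 - about n^2/2
```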

<p>Big O notation also does not describe <em>memory</em> complexity. The implementation of merge sort above creates a temporary <code>merged</code> list during the merge step, so however long the input list is, merge sort needs roughly twice that much memory as working space. By contrast, insertion sort works “in place,” sorting the input list without creating a second list as a workspace. Many algorithms make a trade-off between time and space in this way.</p>

<p>Finally, Big O notation describes how an algorithm scales as <code>n</code> gets very large. For small values of <code>n</code>, insertion sort may outperform merge sort, because merge sort has extra bookkeeping: allocating temporary space for merging, and coordinating which underlings are sorting which parts of the list.</p>

<p>In summary, Big O notation is a valuable tool for quickly comparing two algorithms, and can provide programmers with easy estimates as to which parts of a problem will be the most time-consuming. However, Big O notation is not the only metric that matters, and should not be treated as such.</p>

<h3 id="problem-complexity-a-birds-eye-view">Problem Complexity: A Birds-eye View</h3>

<p>All of the algorithms described above can be run in <em>polynomial time.</em> This means their scaling rate, or Big O value, can be upper-bounded by a polynomial of the form <code>O(n^k)</code>. For example, while merge sort scales with <code>O(n log n)</code>, and logarithms are not polynomials, <code>n log n</code> is strictly less than <code>n^2</code>, so merge sort is considered to run in polynomial time. By contrast, algorithms with runtimes like <code>O(2^n)</code> or <code>O(n!)</code> are <em>not</em> bounded by a polynomial, and perform abysmally slowly as <code>n</code> grows large.</p>

<p>These definitions allow us to describe categories of problems. Problems solvable in polynomial time make up the set <strong>P</strong>, which is a subset of <strong>NP</strong> - the problems where we can <em>verify</em> whether a proposed solution is correct in polynomial time.</p>

<p>To illustrate the difference between solving a problem and verifying a solution, consider the <a href="https://en.wikipedia.org/wiki/Graph_coloring">graph coloring problem</a>: given a particular map, and a set of three or more colors, can you color all the countries so that no two bordering countries share a color? The known algorithms for this problem are slow. Brute-forcing all possible colorings scales with <code>O(k^n)</code> for k colors and n countries, and the fastest known general algorithms run in <code>O(n * 2^n)</code>. However, given a colored-in map, it’s easy to look at each country and its neighbors and verify that none violate the coloring rules. At worst, verifying takes <code>O(n^2)</code> time if most countries border most others, but more realistically <code>O(n)</code> if each country borders only a small number of neighbors rather than a significant fraction of all countries.</p>
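<p>A sketch of that verification step in Python - the three-country map here is made up for illustration:</p>

```python
# Verify a proposed map coloring in time linear in the number of borders.
def valid_coloring(borders, coloring):
    return all(coloring[a] != coloring[b] for a, b in borders)

# Hypothetical map: three countries that all border each other
borders = [("A", "B"), ("B", "C"), ("A", "C")]
print(valid_coloring(borders, {"A": "red", "B": "green", "C": "blue"}))  # True
print(valid_coloring(borders, {"A": "red", "B": "red", "C": "blue"}))    # False
```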

<p>Next, we have <strong>NP-Hard</strong>: the set of problems at least as hard as the hardest problems in <strong>NP</strong>, and possibly harder - some <strong>NP-Hard</strong> problems cannot even have their solutions verified in polynomial time. When we describe a problem as <strong>NP-Hard</strong> we are often referring to this last property, even though the most challenging <strong>NP</strong> problems are also <strong>NP-Hard</strong>.</p>

<p>One example of an <strong>NP-Hard</strong> problem without polynomial verification is <a href="https://en.wikipedia.org/wiki/Travelling_salesman_problem">the Traveling Salesman</a>: given a list of cities and distances between cities, find the shortest path that travels through every city exactly once, ending with a return to the original city. Trying all paths through cities scales with <code>O(n!)</code>. More clever dynamic programming solutions improve this to <code>O(n^2 2^n)</code>. But if someone claims to have run a traveling salesman algorithm, and hands you a path, how do you know it’s the <em>shortest possible</em> path? The only way to be certain is to solve the traveling salesman problem yourself, and determine whether your solution has the same length as the provided answer.</p>
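<p>For a sense of why <code>O(n!)</code> hurts, here’s a brute-force sketch in Python - the four-city distance table is invented for illustration:</p>

```python
from itertools import permutations

# Brute-force traveling salesman: try every ordering of cities,
# which is O(n!) tours to check.
def shortest_tour(cities, dist):
    start, *rest = cities
    best_length, best_tour = None, None
    for order in permutations(rest):
        tour = (start, *order, start)
        length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if best_length is None or length < best_length:
            best_length, best_tour = length, tour
    return best_length, best_tour

# Hypothetical symmetric distances between four cities
dist = {
    "A": {"B": 1, "C": 2, "D": 3},
    "B": {"A": 1, "C": 4, "D": 2},
    "C": {"A": 2, "B": 4, "D": 1},
    "D": {"A": 3, "B": 2, "C": 1},
}
print(shortest_tour(["A", "B", "C", "D"], dist)[0])  # 6, the shortest loop
```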

<p>Finally, we have <strong>NP-Complete</strong>. These are the most challenging problems in <strong>NP</strong>, meaning:</p>

<ol>
  <li>
    <p>Solutions to these algorithms can be verified in polynomial time</p>
  </li>
  <li>
    <p>There is no known polynomial-time solution to these algorithms</p>
  </li>
  <li>
    <p>Any problem in <strong>NP</strong> can be translated into an input to an <strong>NP-Complete</strong> problem in polynomial time, and the result of the <strong>NP-Complete</strong> algorithm can be translated back, again in polynomial time</p>
  </li>
</ol>

<p>Here’s a visualization of these problem classes:</p>

<object data="/postImages/big_o_p_not_np.svg" alt="Problem complexity classes if P!=NP" type="image/svg+xml"></object>

<h3 id="does-p--np">Does P = NP?</h3>

<p>Broad consensus in computer science is that the <strong>NP</strong> problem space is larger than the <strong>P</strong> problem space. That is, there are some problems that cannot be <em>solved</em> in polynomial time, but can be <em>verified</em> in polynomial time. However, no one has been able to definitively prove this, in large part because making formal arguments about such abstract questions is exceedingly difficult. There are many problems we do not know how to solve in polynomial time, but how do we prove there isn’t a faster, more clever solution that we haven’t thought of?</p>

<p>Therefore, a minority of computer scientists hold that <strong>P = NP</strong>, or in other words, all problems that can be verified in polynomial time can also be solved in polynomial time. This would make our set of problem classes look more like:</p>

<object data="/postImages/big_o_p_equals_np.svg" alt="Problem complexity classes if P=NP" type="image/svg+xml"></object>

<p>To prove that <strong>P</strong> equals <strong>NP</strong>, all someone would need to do is find a polynomial-time solution to any <strong>NP-Complete</strong> problem. Since we know all <strong>NP</strong> problems can be translated back and forth to <strong>NP-Complete</strong> problems in polynomial time, a fast solution to any of these most challenging problems would be a fast solution to <em>every</em> poly-verifiable algorithm. No such solution has been found.</p>
]]></description>
</item>
<item>
<title> Hex Grids and Cube Coordinates
</title>
<link>https://backdrifting.net/post/064_hex_grids</link>
<description><![CDATA[<h2 id="hex-grids-and-cube-coordinates">Hex Grids and Cube Coordinates</h2>

<p><strong>Posted 2/10/2023</strong></p>

<p>I recently needed to make a graph with a hex lattice shape, like this:</p>

<p><img src="/postImages/hex_matplotlib_plain.png" alt="Hex grid tiles" width="60%" /></p>

<p>I needed to find distances and paths between different hexagonal tiles, which proved challenging in a cartesian coordinate system. I tried a few solutions, and it was a fun process, so let’s examine each option.</p>

<h3 id="row-and-column-offset-coordinates">Row and Column (Offset) Coordinates</h3>

<p>The most “obvious” way to index hexagonal tiles is to label each according to their row and column, like:</p>

<p><img src="/postImages/hex_matplotlib_offset.png" alt="Hex grid tiles with row and column labels" width="60%" /></p>

<p>This feels familiar if we’re used to a rectangular grid and cartesian coordinate system. It also allows us to use integer coordinates. However, it has a few severe disadvantages:</p>

<ol>
  <li>
    <p>Moving in the y-axis implies moving in the x-axis. For example, moving from (0,0) to (0,1) sounds like we’re only moving vertically, but additionally shifts us to the right!</p>
  </li>
  <li>
    <p>Coordinates are not mirrored. Northwest of (0,0) is (-1,1), so we might expect that Southeast of (0,0) would be flipped across the vertical and horizontal, yielding (1,-1). But this is not the case! Southeast of (0,0) is (0,-1) instead, because by dropping two rows we’ve implicitly moved twice to the right already (see point one)</p>
  </li>
</ol>

<p>These issues make navigation challenging, because the offsets of neighboring tiles depend on the row. Southeast of (0,0) is (0,-1), but Southeast of (0,1) is (1,0), so the same relative direction sometimes requires changing the column, and sometimes does not.</p>
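<p>A small Python sketch of this headache, matching the (column, row) examples above - the helper name is mine, not a standard:</p>

```python
# Southeast neighbor in offset coordinates: the column shift
# depends on the parity of the current row.
def southeast(col, row):
    if row % 2 == 0:
        return (col, row - 1)      # even row: column stays put
    return (col + 1, row - 1)      # odd row: column shifts right

print(southeast(0, 0))  # (0, -1), as in the text
print(southeast(0, 1))  # (1, 0), as in the text
```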

<h3 id="cartesian-coordinates">Cartesian Coordinates</h3>

<p>Rather than using row and column coordinates we could re-index each tile by its “true” cartesian coordinates:</p>

<p><img src="/postImages/hex_matplotlib_cartesian.png" alt="Hex grid tiles with cartesian coordinates" width="60%" /></p>

<p>This makes the unintuitive aspects of offset coordinates intuitive:</p>

<ol>
  <li>
    <p>It is now obvious that moving from (0,0) to (0.5,1) implies both a vertical and horizontal change</p>
  </li>
  <li>
    <p>Coordinates now mirror nicely: Northwest of (0,0) is (-0.5,1), and Southeast of (0,0) is (0.5,-1).</p>
  </li>
  <li>
    <p>Following from point 1, it’s now clear why the distance between (0,0) and (3,0) isn’t equal to the distance between (0,0) and (0.5,3).</p>
  </li>
</ol>

<p>But while cartesian coordinates are more “intuitive” than offset coordinates, they have a range of downsides:</p>

<ol>
  <li>
    <p>We no longer have integer coordinates. We could compensate by doubling all the coordinates, but then (0,0) is adjacent to (2,0), and keeping a distance of one between adjacent tiles would be ideal.</p>
  </li>
  <li>
    <p>While euclidean-distances are easy to calculate in cartesian space, it’s still difficult to calculate tile-distances using these indices. For example, if we want to find all tiles within two “steps” of (0,0) we need to use a maximum range of about 2.237, or the distance to (1,2).</p>
  </li>
</ol>

<h3 id="cube-coordinates">Cube Coordinates</h3>

<p>Fortunately there is a third indexing scheme, with integer coordinates, coordinate mirroring, and easy distance calculations in terms of steps! It just requires thinking in three dimensions!</p>

<p>In a cartesian coordinate system we use two axes, since we can move up/down and left/right. On a hexagonal grid, however, there are <em>three</em> natural axes of movement: we can move West/East, Northwest/Southeast, and Northeast/Southwest. We can define the coordinate of each tile in terms of the distance along each of these three directions, like so:</p>

<p><img src="/postImages/hex_matplotlib_cube.png" alt="Hex grid tiles with cube coordinates" width="60%" /></p>

<h4 id="why-arent-the-cube-coordinates-simpler">Why aren’t the cube coordinates simpler?</h4>

<p>These “cube coordinates” have one special constraint: the sum of the coordinates is always zero. This allows us to maintain a canonical coordinate for each tile.</p>

<p>To understand why this is necessary, imagine a system where the three coordinates (typically referred to as (q,r,s) to distinguish between systems when we are converting to or from an (x,y) system) correspond directly with the three axes: q refers to distance West/East, r to Northwest/Southeast, and s to Northeast/Southwest. Here’s a visualization of such a scheme:</p>

<p><img src="/postImages/hex_matplotlib_bad.png" alt="Hex grid tiles with broken cube coordinates" width="60%" /></p>

<p>We could take several paths, such as (0,1,1) or (1,2,0) or (-1,0,2), and all get to the same tile! That would be a mess for comparing coordinates, and would make distance calculations almost impossible. With the addition of this “sum to zero” constraint, all paths to the tile yield the same coordinate of (-1,2,-1).</p>
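<p>A quick Python sketch of the invariant - the six unit moves below are the standard cube-coordinate directions, listed without compass names since those depend on orientation:</p>

```python
# Every legal cube coordinate satisfies q + r + s == 0, and each of
# the six unit moves preserves that sum, so coordinates stay canonical.
def is_canonical(coord):
    return sum(coord) == 0

MOVES = [(1, 0, -1), (-1, 0, 1), (0, 1, -1), (0, -1, 1), (1, -1, 0), (-1, 1, 0)]

pos = (0, 0, 0)
for move in MOVES:
    pos = tuple(p + m for p, m in zip(pos, move))
    assert is_canonical(pos)
print(pos)  # (0, 0, 0) - the six moves cancel out into a loop
```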

<h4 id="what-about-distances-and-coordinate-conversion">What about distances and coordinate conversion?</h4>

<p>Distances in cube coordinates are also easy to calculate - just half the “Manhattan distance” between the two points:</p>

<div class="language-python highlighter-coderay"><div class="CodeRay">
  <div class="code"><pre><span style="color:#080;font-weight:bold">def</span> <span style="color:#06B;font-weight:bold">distance</span>(q1, r1, s1, q2, r2, s2):
        <span style="color:#080;font-weight:bold">return</span> (<span style="color:#369;font-weight:bold">abs</span>(q1-q2) + <span style="color:#369;font-weight:bold">abs</span>(r1-r2) + <span style="color:#369;font-weight:bold">abs</span>(s1-s2)) // <span style="color:#00D">2</span>
</pre></div>
</div>
</div>

<p>We can add coordinates, multiply coordinates, calculate distances, and everything is simple so long as we remain in cube coordinates.</p>

<p>However, we will unavoidably sometimes need to convert from cube to cartesian coordinates. For example, while I built the above hex grids using cube coordinates, I plotted them in matplotlib, which wants cartesian coordinates to place each hex. Converting to cartesian coordinates will also allow us to find the distance between hex tiles “as the crow flies,” rather than in path-length, which may be desirable. So how do we convert back to xy coordinates?</p>

<p>First, we can disregard the <code>s</code> coordinate. Since all coordinates sum to zero, <code>s = -1 * (q + r)</code>, so it represents redundant information, and we can describe the positions of each tile solely using the first two coordinates.</p>
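<p>In code, recovering the dropped coordinate is a one-liner:</p>

```python
# Because q + r + s == 0, s can always be reconstructed from q and r.
def restore_s(q, r):
    return -(q + r)

print(restore_s(-1, 2))  # -1, completing the coordinate (-1, 2, -1)
```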

<p><img src="/postImages/hex_matplotlib_distance.png" alt="Hex grid tiles with distance arrows" width="40%" /></p>

<p>We can also tell from the example above that changing the <code>q</code> coordinate contributes only to the x-axis, while changing the <code>r</code> coordinate shifts both the x- and y-axes. Let’s set aside the <code>q</code> coordinate for the moment and focus on how much <code>r</code> contributes to each cartesian dimension.</p>

<p>Let’s visualize the arrow from (0,0,0) to (0,1,-1) as the hypotenuse of a triangle:</p>

<object data="/postImages/hex_triangle.svg" alt="Right triangle connecting (0,0,0) to (0,1,-1)" type="image/svg+xml"></object>

<p>We want to break down the vector of length <code>r=1</code> into <code>x</code> and <code>y</code> components. You may recognize this as a 30-60-90 triangle, or you could use some geometric identities: the internal angles of a hexagon are 120-degrees, and this triangle will bisect one, so theta must be 60-degrees. Regardless of how you get there, we land at our triangle identities:</p>

<object data="/postImages/hex_triangle2.svg" alt="Edge length identities for a 30-60-90 triangle" type="image/svg+xml"></object>

<p>From here we can easily solve for the <code>x</code> and <code>y</code> components of <code>r</code>, using <code>2a = r</code>:</p>

<object data="/postImages/hex_triangle_math1.svg" alt="Derivation of x and y components of r" type="image/svg+xml"></object>

<p>We know that (0,1,-1) is halfway between (0,0) and (1,0,-1) on the x-axis, so <code>q</code> must contribute twice as much to the x-axis as <code>r</code> does. Therefore, we can solve for the full cartesian coordinates of a hex using the cube coordinates as follows:</p>

<object data="/postImages/hex_triangle_math2.svg" alt="Translation from cube to cartesian coordinates" type="image/svg+xml"></object>

<p>This works great! But it leaves the hexagons with a radius of <code>sqrt(3) / 3</code>, which may be inconvenient for some applications. For example, if you were physically manufacturing these hexagons, like making tiles for a board-game, they’d be much easier to cut to size if they had a radius of one. Therefore, you will often see the conversion math from cube to cartesian coordinates written with a constant multiple of <code>sqrt(3)</code>, like:</p>

<object data="/postImages/hex_triangle_math3.svg" alt="Translation from cube to cartesian coordinates, multiplied by sqrt(3)" type="image/svg+xml"></object>

<p>Since this is a constant multiple, it just re-scales the graph, so all the distance measurements and convenient properties of the system remain the same, but the hexagons now have a radius of one.</p>
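<p>Putting the scaled conversion into Python - a sketch assuming the convention derived above, where <code>q</code> contributes only to x, and twice as much to x as <code>r</code> does:</p>

```python
import math

# Scaled cube-to-cartesian conversion: adjacent hex centers land
# sqrt(3) apart, giving each hexagon a radius of one.
def cube_to_cartesian(q, r):
    # s is redundant (s = -(q + r)), so only q and r are needed
    x = math.sqrt(3) * (q + r / 2)
    y = 1.5 * r
    return (x, y)

print(cube_to_cartesian(1, 0))  # one step along q: (sqrt(3), 0.0)
print(cube_to_cartesian(0, 1))  # one step along r: (sqrt(3)/2, 1.5)
```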

<h3 id="this-is-the-most-interesting-thing-in-the-world-where-do-i-learn-more">This is the most interesting thing in the world, where do I learn more?</h3>

<p>If you are also excited by these coordinate systems, and want to read more about the logic behind cube coordinates, path-finding, line-drawing, wrapping around the borders of a map, and so on, then I highly recommend the <a href="https://www.redblobgames.com/grids/hexagons/">Red Blob Games Hexagon article</a>, which goes into much more detail.</p>
]]></description>
</item>
<item>
<title> Image Dithering in Color!
</title>
<link>https://backdrifting.net/post/063_dithering_color</link>
<description><![CDATA[<h2 id="image-dithering-in-color">Image Dithering in Color!</h2>

<p><strong>Posted 1/17/2023</strong></p>

<p>In <a href="/post/062_dithering">my last post</a> I demonstrated how to perform image dithering to convert colored images to black and white. This consists of converting each pixel to either black or white (whichever is closer), recording the amount of “error,” or the difference between the original luminosity and the new black/white value, and propagating this error to adjoining pixels to brighten or darken them in compensation. This introduces local error (some pixels will be converted to white when their original value is closer to black, and vice versa), but globally lowers error, producing an image that appears much closer to the original.</p>

<p>I’m still playing with dithering, so in this post I will extend the idea to color images. Reducing the number of colors in an image used to be a common task: while digital cameras may record photos with millions of unique colors, computers throughout the 90s often ran in “256 color” mode, where they could only display a small range of colors at once. This reduces the memory footprint of images significantly, since each pixel needs only 8 bits for a palette index rather than 24 bits of color. Some image compression algorithms still use palette compression today, declaring a palette of colors for a region of the image, then listing an 8- or 16-bit palette index for each pixel in the region rather than a full 24-bit color value.</p>
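<p>A back-of-envelope calculation of those savings, for a hypothetical 1920x1080 image:</p>

```python
# Memory footprint: 24-bit truecolor versus 8-bit palette indices
# plus a 256-entry color table (3 bytes per entry).
width, height = 1920, 1080
truecolor_bytes = width * height * 3
paletted_bytes = width * height * 1 + 256 * 3
print(truecolor_bytes)  # 6220800
print(paletted_bytes)   # 2074368 - about a third the size
```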

<p>Reducing a full color image to a limited palette presents a similar challenge to black-and-white image dithering: how do we choose what palette color to use for each pixel, and how do we avoid harsh color banding?</p>

<p>We’ll start with a photo of a hiking trail featuring a range of greens, browns, and whites:</p>

<p><img src="/postImages/dither_bridge.png" alt="Photo of a snowy hiking trail" /></p>

<p>Let’s reduce this to a harsh palette of 32 colors. First, we need to generate such a palette:</p>

<div class="language-python highlighter-coderay"><div class="CodeRay">
  <div class="code"><pre><span style="color:#777">#!/usr/bin/env python3</span>
<span style="color:#080;font-weight:bold">import</span> <span style="color:#B44;font-weight:bold">numpy</span> <span style="color:#080;font-weight:bold">as</span> np

<span style="color:#080;font-weight:bold">def</span> <span style="color:#06B;font-weight:bold">getPalette</span>(palette_size=<span style="color:#00D">32</span>):
    colors = []
    values = np.linspace(<span style="color:#00D">0</span>, <span style="color:#02b">0xFFFFFF</span>, palette_size, dtype=<span style="color:#369;font-weight:bold">int</span>)
    <span style="color:#080;font-weight:bold">for</span> val <span style="color:#080;font-weight:bold">in</span> values:
        r = val &gt;&gt; <span style="color:#00D">16</span>
        g = (val &amp; <span style="color:#02b">0x00FF00</span>) &gt;&gt; <span style="color:#00D">8</span>
        b = val &amp; <span style="color:#02b">0x0000FF</span>
        colors.append((r,g,b))
    <span style="color:#080;font-weight:bold">return</span> colors
</pre></div>
</div>
</div>

<p>I don’t know much color theory, so this is far from an “ideal” spread of colors. However, it is 32 equally spaced values on the numeric range 0x000000 to 0xFFFFFF, which we can convert to RGB values. We can think of color as a three dimensional space, where the X, Y, and Z axes represent red, green, and blue. This lets us visualize our color palette as follows:</p>

<div class="language-python highlighter-coderay"><div class="CodeRay">
  <div class="code"><pre><span style="color:#080;font-weight:bold">import</span> <span style="color:#B44;font-weight:bold">matplotlib.pyplot</span> <span style="color:#080;font-weight:bold">as</span> plt

<span style="color:#080;font-weight:bold">def</span> <span style="color:#06B;font-weight:bold">plotPalette</span>(palette):
    fig = plt.figure(figsize=(<span style="color:#00D">6</span>,<span style="color:#00D">6</span>))
    ax = fig.add_subplot(<span style="color:#00D">111</span>, projection=<span style="background-color:hsla(0,100%,50%,0.05)"><span style="color:#710">'</span><span style="color:#D20">3d</span><span style="color:#710">'</span></span>)
    r = []
    g = []
    b = []
    c = []
    <span style="color:#080;font-weight:bold">for</span> color <span style="color:#080;font-weight:bold">in</span> palette:
        r.append(color[<span style="color:#00D">0</span>])
        g.append(color[<span style="color:#00D">1</span>])
        b.append(color[<span style="color:#00D">2</span>])
        c.append(<span style="background-color:hsla(0,100%,50%,0.05)"><span style="color:#710">&quot;</span><span style="color:#D20">#%02x%02x%02x</span><span style="color:#710">&quot;</span></span> % color)
    ax.scatter(r, g, b, c=c, marker=<span style="background-color:hsla(0,100%,50%,0.05)"><span style="color:#710">'</span><span style="color:#D20">o</span><span style="color:#710">'</span></span>, depthshade=<span style="color:#069">False</span>)
    ax.invert_xaxis()
    ax.set_xlabel(<span style="background-color:hsla(0,100%,50%,0.05)"><span style="color:#710">'</span><span style="color:#D20">Red</span><span style="color:#710">'</span></span>)
    ax.set_ylabel(<span style="background-color:hsla(0,100%,50%,0.05)"><span style="color:#710">'</span><span style="color:#D20">Green</span><span style="color:#710">'</span></span>)
    ax.set_zlabel(<span style="background-color:hsla(0,100%,50%,0.05)"><span style="color:#710">'</span><span style="color:#D20">Blue</span><span style="color:#710">'</span></span>)
    plt.show()
</pre></div>
</div>
</div>

<p>Which looks something like:</p>

<p><img src="/postImages/dither_palette_32.png" alt="32 colors represented in 3-space on a scatterplot" width="60%" /></p>

<p>Just as in black-and-white image conversion, we can take each pixel and round it to the closest available color - but instead of two colors in our palette, we now have 32. Here’s a simple (and highly inefficient) conversion:</p>

<div class="language-python highlighter-coderay"><div class="CodeRay">
  <div class="code"><pre><span style="color:#777"># Returns the closest rgb value on the palette, as (red,green,blue)</span>
<span style="color:#080;font-weight:bold">def</span> <span style="color:#06B;font-weight:bold">getClosest</span>(color, palette):
    (r,g,b) = color
    closest = <span style="color:#069">None</span> <span style="color:#777">#(color, distance)</span>
    <span style="color:#080;font-weight:bold">for</span> p <span style="color:#080;font-weight:bold">in</span> palette:
        <span style="color:#777"># A real distance should be sqrt(x^2 + y^2 + z^2), but</span>
        <span style="color:#777"># we only care about relative distance, so faster to leave it off</span>
        distance = (r-p[<span style="color:#00D">0</span>])**<span style="color:#00D">2</span> + (g-p[<span style="color:#00D">1</span>])**<span style="color:#00D">2</span> + (b-p[<span style="color:#00D">2</span>])**<span style="color:#00D">2</span>
        <span style="color:#080;font-weight:bold">if</span>( closest <span style="color:#080;font-weight:bold">is</span> <span style="color:#069">None</span> <span style="color:#080;font-weight:bold">or</span> distance &lt; closest[<span style="color:#00D">1</span>] ):
            closest = (p,distance)
    <span style="color:#080;font-weight:bold">return</span> closest[<span style="color:#00D">0</span>]

<span style="color:#080;font-weight:bold">def</span> <span style="color:#06B;font-weight:bold">reduceNoDither</span>(img, palette, filename):
    pixels = np.array(img)
    <span style="color:#080;font-weight:bold">for</span> y,row <span style="color:#080;font-weight:bold">in</span> <span style="color:#369;font-weight:bold">enumerate</span>(pixels):
        <span style="color:#080;font-weight:bold">for</span> x,col <span style="color:#080;font-weight:bold">in</span> <span style="color:#369;font-weight:bold">enumerate</span>(row):
            pixels[y,x] = getClosest(pixels[y,x], palette)
    reduced = Image.fromarray(pixels)
    reduced.save(filename)

img = Image.open(<span style="background-color:hsla(0,100%,50%,0.05)"><span style="color:#710">&quot;</span><span style="color:#D20">bridge.png</span><span style="color:#710">&quot;</span></span>)
palette = getPalette()
reduceNoDither(img, palette, <span style="background-color:hsla(0,100%,50%,0.05)"><span style="color:#710">&quot;</span><span style="color:#D20">bridge_32.png</span><span style="color:#710">&quot;</span></span>)
</pre></div>
</div>
</div>
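<p>As an aside, the doubly nested Python loop above is what makes this conversion slow; the same nearest-color search can be vectorized with numpy broadcasting. A sketch under the same palette format, with the hypothetical name <code>getClosestVectorized</code>:</p>

```python
import numpy as np

# Vectorized nearest-color lookup: computes the squared distance from every
# pixel to every palette entry at once, instead of looping in Python.
# getClosestVectorized is a hypothetical name, not from the post.
def getClosestVectorized(pixels, palette):
    pal = np.array(palette, dtype=np.int32)   # (P, 3)
    px = pixels.astype(np.int32)              # widen from uint8 so differences can go negative
    # (H, W, 1, 3) minus (1, 1, P, 3) broadcasts to (H, W, P, 3)
    dist = ((px[:, :, None, :] - pal[None, None, :, :]) ** 2).sum(axis=-1)
    idx = dist.argmin(axis=-1)                # (H, W) index of nearest palette entry
    return pal[idx].astype(np.uint8)          # (H, W, 3) reduced image
```

Replacing the loop body of <code>reduceNoDither</code> with one call to this function should produce the same image far faster, since no error is diffused between pixels in that version.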

<p>The results are predictably messy:</p>

<p><img src="/postImages/dither_bridge_32.png" alt="Hiking trail rendered in 32 colors by closest color conversion" /></p>

<p>Our palette only contains four colors close to brown, and most are far too red. If we convert each pixel to the closest color on the palette, we massively over-emphasize red, drowning out our greens and yellows.</p>

<p>Dithering to the rescue! Where before we had an integer error for each pixel (representing how much we’d over- or under-brightened the pixel when we rounded it to black or white), we now have an error <em>vector,</em> representing how much we’ve over- or under-emphasized red, green, and blue in our rounding.</p>

<p>As before, we can apply Atkinson dithering, with the twist of applying a vector error to three dimensional color points:</p>

<div class="language-python highlighter-coderay"><div class="CodeRay">
  <div class="code"><pre><span style="color:#777"># Returns an error vector (delta red, delta green, delta blue)</span>
<span style="color:#080;font-weight:bold">def</span> <span style="color:#06B;font-weight:bold">getError</span>(oldcolor, newcolor):
    dr = <span style="color:#369;font-weight:bold">int</span>(oldcolor[<span style="color:#00D">0</span>]) - newcolor[<span style="color:#00D">0</span>] <span style="color:#777"># Cast from uint8 so the difference can go negative</span>
    dg = <span style="color:#369;font-weight:bold">int</span>(oldcolor[<span style="color:#00D">1</span>]) - newcolor[<span style="color:#00D">1</span>]
    db = <span style="color:#369;font-weight:bold">int</span>(oldcolor[<span style="color:#00D">2</span>]) - newcolor[<span style="color:#00D">2</span>]
    <span style="color:#080;font-weight:bold">return</span> (dr, dg, db)

<span style="color:#080;font-weight:bold">def</span> <span style="color:#06B;font-weight:bold">applyError</span>(pixels, y, x, error, factor):
    <span style="color:#080;font-weight:bold">if</span>( y &gt;= pixels.shape[<span style="color:#00D">0</span>] <span style="color:#080;font-weight:bold">or</span> x &gt;= pixels.shape[<span style="color:#00D">1</span>] ):
        <span style="color:#080;font-weight:bold">return</span> <span style="color:#777"># Don't run off edge of image</span>
    er = error[<span style="color:#00D">0</span>] * factor
    eg = error[<span style="color:#00D">1</span>] * factor
    eb = error[<span style="color:#00D">2</span>] * factor
    pixels[y,x,RED] += er
    pixels[y,x,GREEN] += eg
    pixels[y,x,BLUE] += eb

<span style="color:#080;font-weight:bold">def</span> <span style="color:#06B;font-weight:bold">ditherAtkinson</span>(img, palette, filename):
    pixels = np.array(img)
    total_pixels = pixels.shape[<span style="color:#00D">0</span>] * pixels.shape[<span style="color:#00D">1</span>]
    <span style="color:#080;font-weight:bold">for</span> y,row <span style="color:#080;font-weight:bold">in</span> <span style="color:#369;font-weight:bold">enumerate</span>(pixels):
        <span style="color:#080;font-weight:bold">for</span> x,col <span style="color:#080;font-weight:bold">in</span> <span style="color:#369;font-weight:bold">enumerate</span>(row):
            old = pixels[y,x] <span style="color:#777"># Returns reference</span>
            new = getClosest(old, palette)
            quant_error = getError(old, new)
            pixels[y,x] = new
            applyError(pixels, y,   x+<span style="color:#00D">1</span>, quant_error, <span style="color:#00D">1</span>/<span style="color:#00D">8</span>)
            applyError(pixels, y,   x+<span style="color:#00D">2</span>, quant_error, <span style="color:#00D">1</span>/<span style="color:#00D">8</span>)
            applyError(pixels, y+<span style="color:#00D">1</span>, x+<span style="color:#00D">1</span>, quant_error, <span style="color:#00D">1</span>/<span style="color:#00D">8</span>)
            applyError(pixels, y+<span style="color:#00D">1</span>, x,   quant_error, <span style="color:#00D">1</span>/<span style="color:#00D">8</span>)
            applyError(pixels, y+<span style="color:#00D">1</span>, x-<span style="color:#00D">1</span>, quant_error, <span style="color:#00D">1</span>/<span style="color:#00D">8</span>)
            applyError(pixels, y+<span style="color:#00D">2</span>, x,   quant_error, <span style="color:#00D">1</span>/<span style="color:#00D">8</span>)
    dithered = Image.fromarray(pixels)
    dithered.save(filename)
</pre></div>
</div>
</div>

<p>Aaaaaand presto!</p>

<p><img src="/postImages/dither_bridge_32_at.png" alt="Forest trail put through colored Atkinson dithering, looks closer to a correct shade of brown, but has blue flecks of snow on close inspection" /></p>

<p>It’s far from perfect, but our dithered black and white images were facsimiles of their greyscale counterparts, too. Pretty good for only 32 colors! The image no longer appears too red, and the green pine needles stand out better. Interestingly, the dithered image now appears flecked with blue, with a blue glow in the shadows. This is especially striking on my old Linux laptop, but is more subtle on a newer screen with a better color profile, so your mileage may vary.</p>

<p>We might expect the image to be slightly blue-tinged, both because reducing red values will make green and blue stand out, and because we are using an extremely limited color palette. However, the human eye is also better at picking up some colors than others, so perhaps these blue changes stand out disproportionately. We can try compensating by reducing the blue error to one third:</p>
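<p>One way to sketch this compensation, assuming the <code>(dr, dg, db)</code> error tuples produced by <code>getError</code> above (the names <code>CHANNEL_WEIGHTS</code> and <code>weightError</code> are mine, not from the post):</p>

```python
# Hypothetical per-channel error weights: diffuse red and green error
# in full, but only a third of the blue error.
CHANNEL_WEIGHTS = (1.0, 1.0, 1.0 / 3.0)

def weightError(error, weights=CHANNEL_WEIGHTS):
    # error is a (dr, dg, db) tuple, as produced by getError
    return tuple(e * w for e, w in zip(error, weights))
```

Each <code>applyError(pixels, y, x+1, quant_error, 1/8)</code> call would then receive <code>weightError(quant_error)</code> in place of <code>quant_error</code>.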

<p><img src="/postImages/dither_bridge_32_at_eye.png" alt="Forest trail put through colored Atkinson dithering, now with far fewer blue flecks" /></p>

<p>That’s an arbitrary and unscientific compensation factor, but it’s removed the blue tint from the shadows in the image, and reduced the number of blue “snow” effects, suggesting there’s some merit to per-channel tuning. Here’s a side-by-side comparison of the original, palette reduction, and each dithering approach:</p>

<p><img src="/postImages/dither_bridge_montage.png" alt="Side by side of four images from earlier in the post" /></p>

<p>Especially at a smaller resolution, we can do a pretty good approximation with a color selection no wider than a big box of crayons. Cool!</p>
]]></description>
</item>
<item>
<title> Image Dithering
</title>
<link>https://backdrifting.net/post/062_dithering</link>
<description><![CDATA[<h2 id="image-dithering">Image Dithering</h2>

<p><strong>Posted 1/16/2023</strong></p>

<p>Dithering means intentionally adding noise to a signal to reduce large artifacts like color banding. A classic example is reducing a color image to black and white. Take this magnificent photo of my neighbor’s cat:</p>

<p><img src="/postImages/kacie_color.png" alt="Kacie asking for a bellyrub, in color" /></p>

<p>To trivially convert this image to black and white we can take each pixel, decide which color it’s closest to, and set it to that:</p>

<div class="language-python highlighter-coderay"><div class="CodeRay">
  <div class="code"><pre><span style="color:#777">#!/usr/bin/env python3</span>
<span style="color:#080;font-weight:bold">from</span> <span style="color:#B44;font-weight:bold">PIL</span> <span style="color:#080;font-weight:bold">import</span> <span style="color:#B44;font-weight:bold">Image</span>
<span style="color:#080;font-weight:bold">import</span> <span style="color:#B44;font-weight:bold">numpy</span> <span style="color:#080;font-weight:bold">as</span> np

<span style="color:#777"># Load image as grayscale</span>
img = Image.open(<span style="background-color:hsla(0,100%,50%,0.05)"><span style="color:#710">&quot;</span><span style="color:#D20">kacie_color.png</span><span style="color:#710">&quot;</span></span>).convert(<span style="background-color:hsla(0,100%,50%,0.05)"><span style="color:#710">&quot;</span><span style="color:#D20">L</span><span style="color:#710">&quot;</span></span>)
pixels = np.array(img)
<span style="color:#080;font-weight:bold">for</span> y, row <span style="color:#080;font-weight:bold">in</span> <span style="color:#369;font-weight:bold">enumerate</span>(pixels):
    <span style="color:#080;font-weight:bold">for</span> x,col <span style="color:#080;font-weight:bold">in</span> <span style="color:#369;font-weight:bold">enumerate</span>(row):
        <span style="color:#080;font-weight:bold">if</span>( pixels[y,x] &gt;= <span style="color:#00D">127</span> ):
            pixels[y,x] = <span style="color:#00D">255</span>
        <span style="color:#080;font-weight:bold">else</span>:
            pixels[y,x] = <span style="color:#00D">0</span>
bw = Image.fromarray(pixels)
bw.save(<span style="background-color:hsla(0,100%,50%,0.05)"><span style="color:#710">&quot;</span><span style="color:#D20">kacie_bw.png</span><span style="color:#710">&quot;</span></span>)
</pre></div>
</div>
</div>

<p>But the result is not very satisfying:</p>

<p><img src="/postImages/kacie_bw.png" alt="Kacie in black and white, looks like a white cloud" /></p>

<p>The cat is white. Nearly every pixel is closer to white than black, so we lose the whole cat except the eyes and nose, along with most of the background detail. But we can do better! What if we set the density of black pixels based on the brightness of a region? That is, black regions will receive all black pixels, white regions all white, but something that should be a mid-gray will get closer to a checkerboard of black and white pixels to approximate the correct brightness.</p>

<p>One particularly satisfying way to approach this regional checkerboarding is called <em>error diffusion.</em> For every pixel, when we set it to black or white, we record how far off the original color is from the new one. Then we adjust the color of the adjacent pixels based on this error. For example, if we set a gray pixel to black, then we record that we’ve made an error by making this pixel darker than it should be, and we’ll brighten the surrounding pixels we haven’t evaluated yet to make them more likely to be set to white. Similarly, if we round a gray pixel up to white, then we darken the nearby pixels to make them more likely to be rounded down to black.</p>

<p>In <a href="https://en.wikipedia.org/wiki/Floyd%E2%80%93Steinberg_dithering">Floyd-Steinberg dithering</a> we process pixels left to right, top to bottom, and propagate the error of each pixel to its neighbors with the following distribution:</p>

<object data="/postImages/fs_dithering.svg" alt="Floyd-Steinberg dithering diffusion matrix" type="image/svg+xml"></object>

<p>That is, pass on 7/16 of the error to the pixel to the right of the one we’re examining, 5/16 to the pixel below, and a little to the two diagonals we haven’t examined yet. We can implement Floyd-Steinberg dithering as follows:</p>

<div class="language-python highlighter-coderay"><div class="CodeRay">
  <div class="code"><pre><span style="color:#080;font-weight:bold">def</span> <span style="color:#06B;font-weight:bold">getClosest</span>(color):
    <span style="color:#080;font-weight:bold">if</span>( color &gt;= <span style="color:#00D">127</span> ):
        <span style="color:#080;font-weight:bold">return</span> <span style="color:#00D">255</span> <span style="color:#777"># White</span>
    <span style="color:#080;font-weight:bold">return</span> <span style="color:#00D">0</span> <span style="color:#777"># Black</span>

<span style="color:#080;font-weight:bold">def</span> <span style="color:#06B;font-weight:bold">setAdjacent</span>(pixels, y, x, error):
    (rows,cols) = pixels.shape[<span style="color:#00D">0</span>:<span style="color:#00D">2</span>]
    <span style="color:#080;font-weight:bold">if</span>( y &gt;= rows <span style="color:#080;font-weight:bold">or</span> x &gt;= cols ):
        <span style="color:#080;font-weight:bold">return</span> <span style="color:#777"># Don't run past edge of image</span>
    pixels[y,x] += error

<span style="color:#777"># Load image as grayscale</span>
img = Image.open(<span style="background-color:hsla(0,100%,50%,0.05)"><span style="color:#710">&quot;</span><span style="color:#D20">kacie_color.png</span><span style="color:#710">&quot;</span></span>).convert(<span style="background-color:hsla(0,100%,50%,0.05)"><span style="color:#710">&quot;</span><span style="color:#D20">L</span><span style="color:#710">&quot;</span></span>)
pixels = np.array(img)
<span style="color:#080;font-weight:bold">for</span> y,row <span style="color:#080;font-weight:bold">in</span> <span style="color:#369;font-weight:bold">enumerate</span>(pixels):
    <span style="color:#080;font-weight:bold">for</span> x,col <span style="color:#080;font-weight:bold">in</span> <span style="color:#369;font-weight:bold">enumerate</span>(row):
        old = pixels[y,x]
        new = getClosest(old)
        pixels[y,x] = new
        quant_error = <span style="color:#369;font-weight:bold">int</span>(old) - new <span style="color:#777"># Cast from uint8 so the error can go negative</span>
        setAdjacent(pixels, y,   x+<span style="color:#00D">1</span>, quant_error*(<span style="color:#00D">7</span>/<span style="color:#00D">16</span>))
        setAdjacent(pixels, y+<span style="color:#00D">1</span>, x-<span style="color:#00D">1</span>, quant_error*(<span style="color:#00D">3</span>/<span style="color:#00D">16</span>))
        setAdjacent(pixels, y+<span style="color:#00D">1</span>, x,   quant_error*(<span style="color:#00D">5</span>/<span style="color:#00D">16</span>))
        setAdjacent(pixels, y+<span style="color:#00D">1</span>, x+<span style="color:#00D">1</span>, quant_error*(<span style="color:#00D">1</span>/<span style="color:#00D">16</span>))
dithered = Image.fromarray(pixels)
dithered.save(<span style="background-color:hsla(0,100%,50%,0.05)"><span style="color:#710">&quot;</span><span style="color:#D20">kacie_dithered_fs.png</span><span style="color:#710">&quot;</span></span>)
</pre></div>
</div>
</div>

<p>The results are a stunning improvement:</p>

<p><img src="/postImages/kacie_dithered_fs.png" alt="Kacie in black and white, dithered to maintain maximum detail, but with snow artifacts" /></p>

<p>We’ve got the whole cat, ruffles on her fur, the asphalt and wood chips, details on rocks, gradients within shadows, the works! But what are those big black flecks across the cat’s fur? These flecks of “snow” impact the whole image, but they don’t stand out much on the background where we alternate between black and white pixels frequently. On the cat, even small errors setting near-white fur to white pixels build up, and we periodically set a clump of pixels to black.</p>

<p>We can try to reduce this snow by fiddling with the error propagation matrix. Rather than passing <em>all</em> of the error on to adjacent pixels, and mostly to the pixel to the right and below, what if we ‘discount’ the error, only passing on 75% of it? This is the diffusion matrix used in <a href="https://en.wikipedia.org/wiki/Atkinson_dithering">Atkinson dithering</a>:</p>

<object data="/postImages/at_dithering.svg" alt="Atkinson dithering diffusion matrix" type="image/svg+xml"></object>
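<p>A quick arithmetic check of the two matrices confirms the discount: Floyd-Steinberg’s weights sum to 1, passing on all of the error, while Atkinson’s six 1/8 weights sum to 0.75:</p>

```python
# Floyd-Steinberg diffuses the full error; Atkinson only 75% of it.
fs_weights = [7/16, 5/16, 3/16, 1/16]
at_weights = [1/8] * 6
print(sum(fs_weights))  # 1.0
print(sum(at_weights))  # 0.75
```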

<p>The code hardly needs a change:</p>

<div class="language-python highlighter-coderay"><div class="CodeRay">
  <div class="code"><pre>img = Image.open(<span style="background-color:hsla(0,100%,50%,0.05)"><span style="color:#710">&quot;</span><span style="color:#D20">kacie_color.png</span><span style="color:#710">&quot;</span></span>).convert(<span style="background-color:hsla(0,100%,50%,0.05)"><span style="color:#710">&quot;</span><span style="color:#D20">L</span><span style="color:#710">&quot;</span></span>)
pixels = np.array(img)
<span style="color:#080;font-weight:bold">for</span> y,row <span style="color:#080;font-weight:bold">in</span> <span style="color:#369;font-weight:bold">enumerate</span>(pixels):
    <span style="color:#080;font-weight:bold">for</span> x,col <span style="color:#080;font-weight:bold">in</span> <span style="color:#369;font-weight:bold">enumerate</span>(row):
        old = pixels[y,x]
        new = getClosest(old)
        pixels[y,x] = new
        quant_error = <span style="color:#369;font-weight:bold">int</span>(old) - new <span style="color:#777"># Cast from uint8 so the error can go negative</span>
        setAdjacent(pixels, y,   x+<span style="color:#00D">1</span>, quant_error*(<span style="color:#00D">1</span>/<span style="color:#00D">8</span>))
        setAdjacent(pixels, y,   x+<span style="color:#00D">2</span>, quant_error*(<span style="color:#00D">1</span>/<span style="color:#00D">8</span>))
        setAdjacent(pixels, y+<span style="color:#00D">1</span>, x+<span style="color:#00D">1</span>, quant_error*(<span style="color:#00D">1</span>/<span style="color:#00D">8</span>))
        setAdjacent(pixels, y+<span style="color:#00D">1</span>, x,   quant_error*(<span style="color:#00D">1</span>/<span style="color:#00D">8</span>))
        setAdjacent(pixels, y+<span style="color:#00D">1</span>, x-<span style="color:#00D">1</span>, quant_error*(<span style="color:#00D">1</span>/<span style="color:#00D">8</span>))
        setAdjacent(pixels, y+<span style="color:#00D">2</span>, x,   quant_error*(<span style="color:#00D">1</span>/<span style="color:#00D">8</span>))
dithered = Image.fromarray(pixels)
dithered.save(<span style="background-color:hsla(0,100%,50%,0.05)"><span style="color:#710">&quot;</span><span style="color:#D20">kacie_dithered_at.png</span><span style="color:#710">&quot;</span></span>)
</pre></div>
</div>
</div>

<p>And the snow vanishes:</p>

<p><img src="/postImages/kacie_dithered_at.png" alt="Kacie in black and white, dithered to minimize snow, with some loss of detail in bright and dark regions" /></p>

<p>This is a lot more pleasing to the eye, but it’s important to note that the change isn’t free: if you look closely, we’ve lost some detail on the cat’s fur, particularly where the edges of her legs and tail have been ‘washed out.’ After all, we’re now ignoring some of the error caused by our black and white conversion, so we’re no longer compensating for all our mistakes in nearby pixels. This is most noticeable in bright and dark areas where the errors are small.</p>

<h3 id="closing-thoughts">Closing Thoughts</h3>

<p>I really like this idea of adding noise and propagating errors to reduce overall error. It’s a little counter-intuitive; by artificially brightening or darkening a pixel, we’re making an objectively worse local choice when converting a pixel to black or white. Globally, however, this preserves much more of the original structure and detail. This type of error diffusion is most often used in digital signal processing of images, video, and audio, but I am curious whether it has good applications in more distant domains.</p>

<p>If you enjoyed this post and want to read more about mucking with images and color, you may enjoy reading my post on <a href="/post/043_camera_forensics">color filter array forensics</a>.</p>
]]></description>
</item>
</channel>
</rss>