Compare commits


10 Commits

Author SHA1 Message Date
Nick Zana d8a4c6a403 Partial revert "remove outdated projects"
Temporarily restores ciphey project
12 months ago
Nick Zana 3912abdc21 Add qubes tailscale article 12 months ago
Nick Zana c7f351017f Disable compiling sass
No sass is used, so zola serve does not work correctly without being
able to watch the "sass" directory if this remains enabled.
12 months ago
Nick Zana 8713cb4e86
remove outdated projects 12 months ago
Nick Zana 05a34637a6
add password compartmentalization blog post 12 months ago
Nick Zana c93b5b238e
Merge branch 'master' of https://git.nickzana.dev/nick/blog 12 months ago
Nick Zana 84571db438
Add backup project 12 months ago
Nick Zana 8cf8df20f5 rename gieta to Git Forge 12 months ago
Nick Zana 857e2ea60b
fix(build): Sync contents of public directory
Changes the rsync command to sync the contents of the built "public"
directory rather than the directory itself.
2 years ago
Nick Zana 97d9c37b30
fix(build): Fix git branch name conditional 2 years ago

@@ -14,9 +14,9 @@ tasks:
 - deploy: |
     cd blog
     # only deploy when on master branch
-    if [ "$(git rev-parse --abbrev-ref HEAD)" == "master" ]; then
+    if [ "$(git rev-parse master)" == "$(git rev-parse HEAD)" ]; then
     sshopts="ssh -p 221 -o StrictHostKeyChecking=no"
-    openrsync --rsh="$sshopts" --delete -r public $deploy
+    openrsync --rsh="$sshopts" --delete -r public/ $deploy
     else
     echo "Skipping: not on master branch"
     complete-build;

@@ -3,7 +3,6 @@ base_url = "/"
 theme = "zola-private-dev-blog"
 taxonomies = [ { name = "tags", feed = true } ]
 generate_feed = true
-compile_sass = true
 build_search_index = false
 highlight_code = true
@@ -14,7 +13,7 @@ external_links_target_blank = true
 [extra]
 nav = [
     { name = "Home", path = "/" },
-    { name = "Gitea", path = "https://git.nickzana.dev" },
+    { name = "Git Forge", path = "https://git.nickzana.dev" },
     { name = "Feed", path = "/atom.xml" },
     { name = "About", path = "/about/" },
 ]

@@ -2,6 +2,6 @@
 template = "page.html"
 +++
+[Git Forge](https://git.nickzana.dev)
 [GitHub](https://github.com/nickzana)
-[Gitea](https://git.nickzana.dev)
 [Email](mailto:me@nickzana.dev)

@@ -0,0 +1,127 @@
+++
title = "Password Security through Compartmentalization"
[taxonomies]
tags = [ "privacy", "security", "passwords" ]
+++
Not all passwords are created equal. You can find evidence of this in the
countless people who use "layered" password schemes. While some people simply
re-use the same password on every website, others have an intuition for the
different trust levels of the websites they use.
Nobody needs absolute protection for every single account that they sign up for.
If you make an account to sign up for a newsletter, the impact of that account
getting breached is small. On the other hand, compromised "critical" ones like
email, Google, and bank accounts can cause catastrophic (and often
unrecoverable) damage to your financial, social, and professional identity.
There aren't many proxies for this in the offline world. Abysmal "security"
measures like Social Security numbers (in the U.S.; those who live in countries
with government-issued public/private keys can smugly skip this part) are
incredibly sensitive, given that they're basically master keys to our financial
lives. Each person generally only has a single SSN, meaning that every
institution that mandates it gets the same one.
People using the "layered" strategy online recognize different sensitivity
levels of accounts. Generally, they'll have one "high security" password used
for critical accounts, a "regular" password for things like social media and
other relatively trustworthy apps, and then a "throwaway" password for junk
accounts like retailers, newsletters, and the like.
My generation distrusts online companies and has shown through the above scheme
that it can recognize these different security levels. Unfortunately, today's
biggest password managers create single, all-or-nothing security vaults. I've
yet to see any password manager explore encouraging users to compartmentalize
their vaults cryptographically.
Compartmentalization enables a wide variety of additional use-cases and
potential features. When we stop making "vaults" and start distinguishing
"passwords" and "devices," features like two-factor encryption, password
sharing, and compartmentalization become a natural extension of both the
cryptography and the user's mental model.
Now, to be fair, 1Password has [built in support for sharing vault
entries](https://support.1password.com/share-items/) (sadly, I can't find too
much of the feature's clever "Psst!" branding anymore). Even if I think it's a
bit clunky, it covers a lot of compartmentalization's use cases.
In the [Access control
enforcement](https://1passwordstatic.com/files/security/1password-white-paper.pdf)
section of their security whitepaper, 1Password explains that they have several
mechanisms for protecting shared passwords: cryptographic, server-side, and
client-side protections.
*1Password warns that this section of their whitepaper is potentially
incomplete. I've done my best to verify that all of the below is accurate, but
if the information is incorrect or outdated, please [contact
me](mailto:me@nickzana.dev).*
Say Alice, a 1Password user, wants to share a password with Bob, who doesn't use
1Password. Alice can generate a link to an encrypted *copy* of the password on
1Password's servers. The password is encrypted with a newly-generated key that
is then encoded into the link Alice is sharing. Alice needs to send this link to
Bob somehow, but once he has it, he can enter it into his web browser and view
the password.
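The core of that flow can be sketched in a few lines. To be clear, this is not
1Password's actual implementation: the domain, item ID, and the toy XOR
keystream below are all illustrative stand-ins (a real client would use a
proper authenticated cipher). The point is that the key lives only in the URL
fragment, which browsers never transmit to the server:
```python
import base64
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    # Toy SHA-256 counter-mode keystream -- illustrative only; a real
    # implementation would use an authenticated cipher.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_crypt(key: bytes, data: bytes) -> bytes:
    # XOR with the keystream; applying it twice decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# Alice: encrypt a *copy* of the item under a newly generated key.
item_key = secrets.token_bytes(32)
ciphertext = xor_crypt(item_key, b"correct horse battery staple")

# Only `ciphertext` is uploaded. The key rides in the URL fragment,
# which the browser does not send to the server. (Domain and ID are made up.)
link = "https://share.example/i/abc123#" + base64.urlsafe_b64encode(item_key).decode()

# Bob: recover the key from the fragment and decrypt locally.
fragment = link.split("#", 1)[1]
recovered_key = base64.urlsafe_b64decode(fragment)
print(xor_crypt(recovered_key, ciphertext).decode())  # the original secret
```
Note that anyone holding the link holds the key, which is why the channel the
link travels over matters so much.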
If Alice would like, she can also use email-based access control that's enforced
by the 1Password server. When Bob opens the link, 1Password will send him an
email with a one-time code that he has to enter before he can access the
password.
This is a pretty good low-friction alternative to sharing passwords over text or
email. It provides some marginal benefit in terms of server-side controls. In
the end, though, it just shifts the sharing of the cryptographic material from
the password itself to the link. Unless Alice shares the link over an end-to-end
encrypted channel with Bob, the largest benefits provided by this method are the
server-side access controls for things like email verification, time expiration,
and one-time access limits.
It also comes with the downside of being inherently web-based. Yes, asking the
person you want to share a password with to download a specific piece of
software is unreasonable. But this weakens the security of this particular
feature if you want to use it for another purpose, such as using a password on a
public or otherwise untrusted computer.
Don't get me wrong, this is a good improvement. But for the case of sharing
a limited set of passwords with someone else - yourself - it doesn't quite stack
up.
On the other end of the spectrum, 1Password has many features that focus on
enterprise users, where compartmentalization, access control, and centralized
management are major focuses. [From their
whitepaper](https://1passwordstatic.com/files/security/1password-white-paper.pdf#chapter*.16):
> Alice is running a small company and would like to share the office Wi-Fi
> password and the keypad code for the front door with the entire team. Alice
> will create the items in the Everyone vault, using the 1Password client on her
> device. These items will be encrypted with the vault key. When Alice adds Bob
> to the Everyone vault, Alice's 1Password client will encrypt a copy of the
> vault key with Bob's public key.
> Bob will be notified that he has access to a new vault, and his client will
> download the encrypted vault key, along with the encrypted vault items. Bob's
> client can decrypt the vault key using Bob's private key, giving him access to
> the items within the vault and allowing Bob to see the Wi-Fi password and the
> front door keypad code.
If a password manager could enforce per-device access cryptographically (likely
by giving each device its own keypair, to which entry and/or vault keys are
encrypted), losing your phone wouldn't mean a potential compromise of, say, your
bank password or SSN, so long as only more "trusted" devices have access to
these entries.
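A minimal sketch of that model, with hypothetical device names and a two-tier
trust scheme (a real client would encrypt each entry key to the device's public
key rather than hand over raw keys):
```python
import secrets

TRUST_ORDER = {"low": 0, "high": 1}  # hypothetical trust tiers

class Device:
    def __init__(self, name: str, trust: str):
        self.name = name
        self.trust = trust
        self.entry_keys = {}  # entry name -> key this device can decrypt

class Vault:
    def __init__(self):
        self.entry_sensitivity = {}  # entry name -> "low" / "high"
        self.entry_keys = {}

    def add_entry(self, name: str, sensitivity: str):
        # Each entry gets its own key, so access can be granted per entry.
        self.entry_sensitivity[name] = sensitivity
        self.entry_keys[name] = secrets.token_bytes(32)

    def enroll(self, device: Device):
        # A device only ever receives keys for entries at or below its
        # trust tier; in a real design each key would be wrapped with
        # the device's public key.
        for name, sensitivity in self.entry_sensitivity.items():
            if TRUST_ORDER[device.trust] >= TRUST_ORDER[sensitivity]:
                device.entry_keys[name] = self.entry_keys[name]

vault = Vault()
vault.add_entry("bank password", "high")
vault.add_entry("newsletter login", "low")

laptop = Device("laptop", "high")
phone = Device("phone", "low")
vault.enroll(laptop)
vault.enroll(phone)

# Losing the phone exposes only the low-sensitivity key material.
print(sorted(phone.entry_keys))   # ['newsletter login']
print(sorted(laptop.entry_keys))  # ['bank password', 'newsletter login']
```
Under this model, the blast radius of a lost phone is bounded by the set of
keys it was ever given.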
I could even imagine a service that allowed you to temporarily access passwords
from an untrusted device. Imagine if you had a relatively unimportant account.
Maybe it's a web-based game, or just information that's convenient to have in a
lot of places like your bookmarks. Downloading a client (or, more likely,
visiting a website, as terrible an idea as that is) and entering just a vault
password on any public computer could grant access to just these "low security"
entries in your vault. This would be a vast improvement over the typical
workarounds for this problem: re-using passwords on low-sensitivity sites or
logging into a full password vault via the web.
Compartmentalization is one of the strongest defenses against exposing private
files to a compromised environment: the damage is limited when that environment
only has access to the minimum information that it needs. Given the potentially vast
differences in trust levels between our devices, it seems necessary that we
begin to reassess the threats we're designing our password managers around.

Binary image file added (33 KiB); not shown.

Binary image file added (25 KiB); not shown.

@@ -0,0 +1,241 @@
+++
title = "Setting up a Tailscale ProxyVM on QubesOS"
[taxonomies]
tags = [ "qubesos","tailscale", "howto" ]
+++
QubesOS and Tailscale are both useful tools to protect your privacy and
security. However, QubesOS's unique network structure, with many VMs running on
a single host, requires additional configuration before it can be used with
Tailscale.
Once this setup is complete, any AppVM configured to use `sys-tailnet` as its
network qube will have access to your Tailscale network ("tailnet"). All AppVMs
will be able to utilize Tailscale features such as MagicDNS or custom DNS
records to route traffic within your tailnet. The Qubes firewall rules can also
be used to restrict traffic to and from a particular AppVM.
> *Warning*: Tailscale is a unique mesh VPN that doesn't have the same privacy
> and security properties as a typical VPN provider. Ensure you understand how
> Tailscale works before relying on it to protect your QubesOS network traffic.
# Potential Strategies
## A Tailscale AppVM
If you only need to access your tailnet for a particular purpose, it may be
sufficient to simply install Tailscale in the AppVM that needs to access it. For
example, if you need to access your job's git server from your `work` AppVM,
just install Tailscale in `work` and you're good to go -- it will only affect
the network traffic from your `work` VM.
Conveniently, this works around one of the most common Tailscale problems: it
allows you to connect to multiple tailnets at once. This is useful if you need
your work AppVM to be connected to your employer's tailnet, but you also want to
access a personal tailnet.
However, if you need to connect several AppVMs, or regularly connect DispVMs, to
your tailnet, a more complex configuration is needed.
## One Node per AppVM
Perhaps the easiest way to configure Tailscale on QubesOS would be to install
the Tailscale daemon (`tailscaled`) in your Template VMs, then register each
AppVM or DispVM based off of those templates as its own node.
The one benefit that this method provides is that you have much greater control
over what each VM can access on your tailnet. For instance, you could log into
different groups of AppVMs with different users, then use Tailscale's Access
Control Lists to limit access to network resources.
However, this method is inconvenient in several ways. For one, Tailscale's
pricing model limits you by the number of devices on your tailnet:
![Screenshot of Tailscale's pricing model.](./tailscale-pricing.png)
While it would be pretty difficult to use 100 devices, using 100 VMs (especially
if you want to use disposable VMs frequently within your tailnet) isn't. I'm
not actually sure how deregistering a device works on Tailscale, but it's not
something I would want to think about regularly, especially if I intend to have
multiple QubesOS machines or users on my network.
## `sys-tailnet`
A better way to connect your QubesOS machine to your tailnet is to create a
dedicated network VM, which I call `sys-tailnet`. Following the idioms of
QubesOS, you can register `sys-tailnet` as a network provider for other AppVMs.
While this is certainly the most complicated way to configure Tailscale on your
Qubes machine, it has the benefit of integrating well with the rest of the
QubesOS networking stack, including enforcing per-AppVM firewalls and
controlling which AppVMs route traffic through your tailnet.
An additional benefit of this method is that, as far as Tailscale is concerned,
your QubesOS machine is only a single device, no matter how many VMs you're
running through `sys-tailnet`. This greatly reduces the frequency with which you
have to authenticate (at least compared to the previous method).
# `sys-tailnet` Guide
You can connect `sys-tailnet` to Tailscale like any other Linux system. Once
you're authenticated and connected to your tailnet, configuring `sys-tailnet` is
much like configuring any other ProxyVM in QubesOS.
## Creating `sys-tailnet`
First, create a new AppVM named `sys-tailnet`. For now, the only requirement for
the qube is that you can install `tailscaled`, so it's best to pick a template
for which Tailscale publishes a package repository you can get automatic updates from. See
Tailscale's [Setting up Tailscale on Linux](https://tailscale.com/kb/1031/install-linux/)
guide to check if they publish a repository for your distro.
![Creating sys-tailnet](./create-sys-tailnet.png)
If you want to avoid adding Tailscale's package repository to your
TemplateVM, consider making `sys-tailnet` a StandaloneVM or creating a new
TemplateVM for it. You should also select a networking qube to route your
Tailscale traffic through (you likely want this to be `sys-firewall`).
![Advanced settings for sys-tailnet](./advanced-sys-tailnet.png)
Also, in the "Advanced" tab, mark that `sys-tailnet` provides network access to
other qubes.
## Installing `tailscaled`
Tailscale [provides instructions](https://tailscale.com/kb/1031/install-linux/)
for installing `tailscaled` on a large variety of Linux distros. You'll want to
perform these instructions in `sys-tailnet` if you configured it as a
StandaloneVM, or in your `sys-tailnet`'s TemplateVM if you configured it as an
AppVM. For Fedora, for example, you just install the Tailscale repository:
```bash
sudo dnf config-manager --add-repo https://pkgs.tailscale.com/stable/fedora/tailscale.repo
```
And then use `dnf` to install `tailscaled` like any other package:
```bash
sudo dnf install tailscale
```
## Configuring `sys-tailnet`
### If `sys-tailnet` is a StandaloneVM
If `sys-tailnet` is a StandaloneVM, you can continue to configure it just like
any other Linux system. Enable and start `tailscaled`:
```bash
sudo systemctl enable --now tailscaled
```
Then log into your Tailscale account:
```bash
sudo tailscale up
```
> Tip: see `tailscale up --help` for additional configuration options.
Since Tailscale's session will persist in a StandaloneVM, this is all of the
configuration you need to log into Tailscale.
### If `sys-tailnet` is an AppVM
In order to start `tailscaled` at launch, you'll need to modify the
`/rw/config/rc.local` script within your `sys-tailnet`. This script is run in
any AppVM whenever it starts. We'll be using it to initialize Tailscale.
Run `sudoedit /rw/config/rc.local` inside `sys-tailnet` and add the following
line to the end of it:
```bash
systemctl --no-block start tailscaled
```
Before we log into Tailscale, however, we need to tell QubesOS to persist the
Tailscale login data across reboots. In general terms, system data isn't
persisted between reboots of an AppVM. The `/var/lib/tailscale` directory must be
persisted for Tailscale to remain logged in.
QubesOS provides a mechanism for this called
["bind-dirs"](https://www.qubes-os.org/doc/bind-dirs/). On AppVM boot, it bind
mounts directories in `/rw/bind-dirs`, which persists across AppVM reboots, to
a corresponding location in the filesystem.
We can persist the Tailscale directory by adding a new configuration file to
`/rw/config/qubes-bind-dirs.d/` in our `sys-tailnet`. If it doesn't exist
already, create the directory by running the following inside `sys-tailnet`:
```bash
sudo mkdir /rw/config/qubes-bind-dirs.d/
```
Then, run `sudoedit /rw/config/qubes-bind-dirs.d/50_user.conf` and add the
following contents to the file:
```conf
binds+=( '/var/lib/tailscale' )
```
Finally, `bind-dirs` won't work correctly if the `/var/lib/tailscale` directory
doesn't exist in the template, so we need to manually create it:
```bash
sudo mkdir -p /rw/bind-dirs/var/lib/tailscale
```
Once you've configured persistence for the `/var/lib/tailscale` directory,
reboot your `sys-tailnet` AppVM. From here, proceed as you normally would to
connect a Linux system to your tailnet:
```bash
sudo tailscale up
```
> Tip: see `tailscale up --help` for additional configuration options.
# DNS
Tailscale relies on DNS for [various
features](https://tailscale.com/kb/1054/dns/). On Linux, Tailscale modifies
`/etc/resolv.conf` to configure the system's DNS resolver. However, these
changes within your `sys-tailnet` do not apply to any AppVMs that use
`sys-tailnet` as their network qube.
In order to forward DNS queries to the Tailscale DNS server, we can forward any
incoming DNS requests in `sys-tailnet` coming from downstream AppVMs to the
Tailscale nameserver. For details on how Tailscale handles DNS traffic, the
article [Private DNS with
MagicDNS](https://tailscale.com/blog/2021-09-private-dns-with-magicdns/) is a
good overview.
In `sys-tailnet`, add the following `iptables` commands to
`/rw/config/qubes-firewall-user-script`:
```bash
#!/bin/sh
# Based on https://forum.qubes-os.org/t/how-do-i-setup-a-custom-dns-in-appvm/5207
# Flush the PR-QBS chain
iptables -F PR-QBS -t nat
# Redirect all DNS traffic to the Tailscale DNS server
iptables -t nat -I PR-QBS -i vif+ -p udp --dport 53 -j DNAT --to-destination 100.100.100.100
# Accept traffic destined for the Tailscale DNS server from Xen's virtual interfaces on port 53
iptables -I INPUT -i vif+ -p udp --dport 53 -d 100.100.100.100 -j ACCEPT
```
> This script will run whenever the AppVM starts, but to avoid restarting
> `sys-tailnet`, you can manually trigger it by running `sudo
> /rw/config/qubes-firewall-user-script`.
Tailscale's DNS server runs locally on your device and will forward any DNS
queries that don't belong to your tailnet to the upstream netvm of your
`sys-tailnet` qube, just like any other AppVM.
> Technically, your tailnet administrator can configure DNS settings, but these
> only apply if you include the `--accept-dns` flag in your `tailscale up`
> command.
---
That's it! Now you can start configuring your AppVMs to access your tailnet
while maintaining the security benefits of QubesOS's compartmentalization.

Binary image file added (34 KiB); not shown.

@@ -0,0 +1,106 @@
+++
title = "Backups - Time Machine over Tailscale"
weight = 2
[taxonomies]
tags = ["security", "docker", "tailscale"]
+++
[`tailscale-timemachine`](https://git.nickzana.dev/nick/tailscale-timemachine) uses Tailscale networking to integrate a modern SSO provider with Time Machine for simple and secure remote backups. It is deployable as a set of containers that connect to any Tailscale network.
# Time Machine
Time Machine is an incremental backup program shipped as part of macOS. It offers a simple, built-in method to back up and recover versions of any file included in the system backup. The user interface works seamlessly within macOS and requires little to no interaction from the user.
Unfortunately, Samba is the only network file sharing protocol officially supported by Time Machine. Like many legacy protocols, its authentication is an outdated username and password based approach that does not support modern security tools such as SSO or multi-factor authentication.
Beyond simply being less secure than an SSO provider, a username and password are an additional set of credentials for users to configure and keep track of. It adds friction and complexity to the backup onboarding and data recovery processes. Using a web-based SSO solution that protects all resources is a more familiar and streamlined experience for users, particularly if an organization already has an identity provider.
# The Solution: Tailscale
Tailscale (or its self-hosted alternative, Headscale) integrates with any SSO provider that supports OAuth2, which makes it ideal for integrating with existing infrastructure. Since authentication happens within the web browser, where most of a user's credentials already are, users have access to password managers or modern APIs such as WebAuthn, further increasing the available security measures.
As an overlay network, Tailscale uses Access Control Lists (ACLs) to define which resources in the network a node can access. By creating a new Tailscale node on the backup server for each user who uses Time Machine in the network, authentication can be effectively enforced at the network level, completely bypassing the need for Samba's built-in authentication.
The client for this project needed to back up machines remotely. While incremental backups are relatively small, initial backups and, critically, system recovery benefit from maximizing the bandwidth of the connection. Beyond simply providing authentication, Tailscale simplifies routing the Time Machine traffic over the fastest possible path.
Tailscale uses a technique called NAT-Traversal to create direct connections between individual nodes. Whether the client machine has a direct 10 Gbps connection to the server or is on a weak WiFi network thousands of miles away, Tailscale exposes the Samba share to the client at the same host address, leaving Time Machine to scale with the available bandwidth regardless of the current network conditions.
# Networking challenges
## Inter-container communication
While Tailscale handles routing traffic between nodes on the network, there are still some remaining challenges getting the Samba share connected to it.
`tailscale-timemachine` uses the reverse proxy Caddy, along with the `caddy-tailscale` module, to bind to each hostname on the Tailscale network and route traffic to the corresponding Samba shares. To route traffic directly from client nodes to the Samba share, `caddy-tailscale` currently chooses a random port to listen on. Unfortunately, there is no mechanism for specifying a particular port, meaning the correct port to forward to the container cannot be known ahead of time.
As a consequence, the container that binds to the `tailscale` network must run in "host" networking mode so that the chosen port is automatically forwarded to the container. This causes a problem: due to a limitation of the Docker networking model, containers in "host" networking mode cannot be members of Docker bridge networks, which are the typical method of inter-container networking.
To get around this, `tailscale-timemachine` requires manually specifying an IP for each Docker container so that the Caddy container can forward traffic to it.
In a `docker-compose.yml` file, each Time Machine Samba share container is defined like so:
```yaml
version: "3"
services:
  tm-1:
    image: mbentley/timemachine:smb
    container_name: tm-1
    volumes:
      - $TM_PATH/tm-1:/opt/timemachine
      - ./smb.conf:/etc/samba/smb.conf
    environment:
      - DEBUG_LEVEL=2
      - CUSTOM_SMB_CONF=true
    networks:
      expose:
        ipv4_address: 172.24.0.11
    restart: unless-stopped
```
## Raw TCP Forwarding
Due to its ubiquity, most modern software services use the HTTP protocol to communicate. As a result, it's generally the best-supported protocol for tools interacting with reverse proxies like Caddy. This complicates putting a reverse proxy like Caddy in front of a Samba server, as Samba communicates over a raw TCP socket, not the HTTP protocol.
While Caddy lacks native support for TCP socket forwarding, a third-party module called "layer4" enables TCP port forwarding. Speaking raw TCP rather than HTTP limits `tailscale-timemachine`'s ability to use many tools, including the authentication module built into `caddy-tailscale`.
In order to add a Time Machine share, a new server is defined to route traffic from the specified host to the Samba container IP address, like so:
### `config.json`
```json
{
  "apps": {
    "layer4": {
      "servers": {
        "tm-1": {
          "listen": ["tailscale/tm-1:445"],
          "routes": [
            {
              "handle": [
                {
                  "handler": "proxy",
                  "upstreams": [
                    { "dial": ["172.24.0.11:445"] }
                  ]
                }
              ]
            }
          ]
        }
      }
    }
  }
}
```
# Future Improvements
## Docker network simplification
The first and simplest thing to solve would be adding a method to specify the port `caddy-tailscale` binds to. If this was implemented, the Caddy container could forward that particular port from the host and would no longer require "host" networking mode. As a result, it could directly join the docker bridge network, eliminating the need to manually specify IPs for each container.
`tailscaled`, the software that powers most Tailscale clients, already has a method for specifying a port, so it should be possible to modify the code of `caddy-tailscale` to support this.
## Tailscale AuthKey Provisioning
Currently, the AuthKeys required to add the Time Machine Samba nodes to the network must be manually provisioned. Integrating with the Tailscale and Headscale APIs to automatically provision these keys would reduce the manual intervention required from the server administrator to add a Time Machine share.

@@ -1,45 +0,0 @@
+++
title = "Bamboo Media Server"
weight = 3
[taxonomies]
tags = []
+++
Bamboo is a self-hosted personal media server written in Rust that I've been
working on for the past two years with my friend
[Ersei](https://ersei.saggis.com). It aims to replace existing media servers
like Jellyfin and Plex by decoupling the front-end, API, and content as much as
possible, allowing for richer client applications and a wider range of content
and content types.
Most of what makes Bamboo's architecture different is its ability to generalize
over types and sources of content while providing common primitives for richer
metadata.
For example, Jellyfin provides a few distinct types of media that it can serve:
Movies, Music, Shows, Books, Photos, and Music Videos. Jellyfin's API is deeply
tied to its web front-end, so the content you can host on Jellyfin is limited to
what the Jellyfin web client is capable of displaying. This works okay for
classic media server content like Movies and TV, but even something slightly
beyond its expectations, like a YouTube video or Twitch stream, must be made to
conform to the formats that Jellyfin expects.
Bamboo solves this problem by defining generic fields that are common to all
media, such as a title, a UUID, and a list of URLs where it can be accessed, and
allowing specific types of media to extend these with additional data and fields.
This way, clients can be as specific or general as they desire based on their
required functionality. A media search application may not need to care about
anything beyond the title, while a Podcast application should only accept media
that is of the Podcast data type.
Of course, being written in Rust, Bamboo utilizes Rust's type system to define
strict API specifications. Invariants, optional or mandatory fields, and data
types are explicitly encoded using `struct`s, `enum`s, and `serde` for
serialization and deserialization, removing any ambiguity in the specification.
This is important for an application as dynamic as Bamboo, as the vast range of
content that it's capable of serving is intentionally vague and thus incorrect
assumptions by the server or its clients can make for inconsistent, incorrect,
or buggy code.
The public repository for the project can be found at
[git.nickzana.dev/bamboo/bamboo](https://git.nickzana.dev/bamboo/bamboo).

@@ -1,19 +0,0 @@
+++
title = "iso639_enum"
weight = 4
[taxonomies]
tags = []
+++
`iso639_enum` is a small Rust crate I wrote for my Bamboo media server project.
ISO639 is a standard that enumerates world languages and provides two and three
character codes to represent them. This is important in the context of a media
server because any media in a particular language (basically anything with
spoken or written words) will represent its language in its metadata in some
form, usually as a two or three character language code.
The documentation for the crate can be found at
[lib.rs/iso639_enum](https://lib.rs/iso639_enum).
The repository for the crate can be found at
[git.nickzana.dev/nick/rust-iso639](https://git.nickzana.dev/nick/rust-iso639).