
Episode 268: Netcat Demystified | BSD Now 268

1:07:20

Episode description

6 metrics for zpool performance, 2FA with ssh on OpenBSD, ZFS maintaining file type information in dirs, everything old is new again, netcat demystified, and more.

##Headlines
###Six Metrics for Measuring ZFS Pool Performance Part 1

The layout of a ZFS storage pool has a significant impact on system performance under various workloads. Given the importance of picking the right configuration for your workload and the fact that making changes to an in-use ZFS pool is far from trivial, it is important for an administrator to understand the mechanics of pool performance when designing a storage system.

To quantify pool performance, we will consider six primary metrics:

  • Read I/O operations per second (IOPS)
  • Write IOPS
  • Streaming read speed
  • Streaming write speed
  • Storage space efficiency (usable capacity after parity versus total raw capacity)
  • Fault tolerance (maximum number of drives that can fail before data loss)

For the sake of comparison, we’ll use an example system with 12 drives, each one sized at 6TB, and say that each drive does 100MB/s streaming reads and writes and can do 250 read and write IOPS. We will visualize how the data is spread across the drives by writing 12 multi-colored blocks, shown below. The blocks are written to the pool starting with the brown block on the left (number one), and working our way to the pink block on the right (number 12).

Note that when we calculate data rates and IOPS values for the example system, they are only approximations. Many other factors can impact pool access speeds for better (compression, caching) or worse (poor CPU performance, not enough memory).
There is no single configuration that maximizes all six metrics. Like so many things in life, our objective is to find an appropriate balance of the metrics to match a target workload. For example, a cold-storage backup system will likely want a pool configuration that emphasizes usable storage space and fault tolerance over the other data-rate focused metrics.
Let’s start with a quick review of ZFS storage pools before diving into specific configuration options. ZFS storage pools are comprised of one or more virtual devices, or vdevs. Each vdev is comprised of one or more storage providers, typically physical hard disks. All disk-level redundancy is configured at the vdev level. That is, the RAID layout is set on each vdev as opposed to on the storage pool. Data written to the storage pool is then striped across all the vdevs. Because pool data is striped across the vdevs, the loss of any one vdev means total pool failure. This is perhaps the single most important fact to keep in mind when designing a ZFS storage system. We will circle back to this point in the next post, but keep it in mind as we go through the vdev configuration options.
Because storage pools are made up of one or more vdevs with the pool data striped over the top, we’ll take a look at pool configuration in terms of various vdev configurations. There are three basic vdev configurations: striping, mirroring, and RAIDZ (which itself has three different varieties). The first section will cover striped and mirrored vdevs in this post; the second post will cover RAIDZ and some example scenarios.
A striped vdev is the simplest configuration. Each vdev consists of a single disk with no redundancy. When several of these single-disk, striped vdevs are combined into a single storage pool, the total usable storage space would be the sum of all the drives. When you write data to a pool made of striped vdevs, the data is broken into small chunks called “blocks” and distributed across all the disks in the pool. The blocks are written in “round-robin” sequence, meaning after all the disks receive one row of blocks, called a stripe, it loops back around and writes another stripe under the first. A striped pool has excellent performance and storage space efficiency, but absolutely zero fault tolerance. If even a single drive in the pool fails, the entire pool will fail and all data stored on that pool will be lost.
The excellent performance of a striped pool comes from the fact that all of the disks can work independently for all read and write operations. If you have a bunch of small read or write operations (IOPS), each disk can work independently to fetch the next block. For streaming reads and writes, each disk can fetch the next block in line synchronized with its neighbors. For example, if a given disk is fetching block n, its neighbor to the left can be fetching block n-1, and its neighbor to the right can be fetching block n+1. Therefore, the speed of all read and write operations as well as the quantity of read and write operations (IOPS) on a striped pool will scale with the number of vdevs. Note here that I said the speeds and IOPS scale with the number of vdevs rather than the number of drives; there’s a reason for this and we’ll cover it in the next post when we discuss RAID-Z.
Here’s a summary of the total pool performance (where N is the number of disks in the pool):

  • N-wide striped:
  • Read IOPS: N * Read IOPS of a single drive
  • Write IOPS: N * Write IOPS of a single drive
  • Streaming read speed: N * Streaming read speed of a single drive
  • Streaming write speed: N * Streaming write speed of a single drive
  • Storage space efficiency: 100%
  • Fault tolerance: None!

Let’s apply this to our example system, configured with a 12-wide striped pool:

  • 12-wide striped:
  • Read IOPS: 3000
  • Write IOPS: 3000
  • Streaming read speed: 1200 MB/s
  • Streaming write speed: 1200 MB/s
  • Storage space efficiency: 100% (72 TB)
  • Fault tolerance: None!

Below is a visual depiction of our 12 rainbow blocks written to this pool configuration:

The blocks are simply striped across the 12 disks in the pool. The LBA column on the left stands for “Logical Block Address”. If we treat each disk as a column in an array, each LBA would be a row. It’s also easy to see that if any single disk fails, we would be missing a color in the rainbow and our data would be incomplete. While this configuration has fantastic read and write speeds and can handle a ton of IOPS, the data stored on the pool is very vulnerable. This configuration is not recommended unless you’re comfortable losing all of your pool’s data whenever any single drive fails.
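Incidentally, creating a pool like this is just a matter of listing the disks with no redundancy keyword; zpool then treats each disk as its own single-disk vdev and stripes across them. A sketch with hypothetical device names:

# zpool create tank da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10 da11
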
A mirrored vdev consists of two or more disks. A mirrored vdev stores an exact copy of all the data written to it on each one of its drives. Traditional RAID-1 mirrors usually only support two-drive mirrors, but ZFS allows for more drives per mirror to increase redundancy and fault tolerance. All disks in a mirrored vdev have to fail for the vdev, and thus the whole pool, to fail. Total storage space will be equal to the size of a single drive in the vdev. If you’re using mismatched drive sizes in your mirrors, the total size will be that of the smallest drive in the mirror.
Streaming read speeds and read IOPS on a mirrored vdev will be faster than write speeds and IOPS. When reading from a mirrored vdev, the drives can “divide and conquer” the operations, similar to what we saw above in the striped pool. This is because each drive in the mirror has an identical copy of the data. For write operations, all of the drives need to write a copy of the data, so the mirrored vdev will be limited to the streaming write speed and IOPS of a single disk.

Here’s a summary:

  • N-way mirror:
  • Read IOPS: N * Read IOPS of a single drive
  • Write IOPS: Write IOPS of a single drive
  • Streaming read speed: N * Streaming read speed of a single drive
  • Streaming write speed: Streaming write speed of a single drive
  • Storage space efficiency: 50% for 2-way, 33% for 3-way, 25% for 4-way, etc. [1/N]
  • Fault tolerance: 1 disk per vdev for 2-way, 2 for 3-way, 3 for 4-way, etc. [N-1]

For our first example configuration, let’s do something ridiculous and create a 12-way mirror. ZFS supports this kind of thing, but your management probably will not.

  • 1x 12-way mirror:
  • Read IOPS: 3000
  • Write IOPS: 250
  • Streaming read speed: 1200 MB/s
  • Streaming write speed: 100 MB/s
  • Storage space efficiency: 8.3% (6 TB)
  • Fault tolerance: 11

As we can clearly see from the diagram, every single disk in the vdev gets a full copy of our rainbow data. The chainlink icons between the disk labels in the column headers indicate the disks are part of a single vdev. We can lose up to 11 disks in this vdev and still have a complete rainbow. Of course, the data takes up far too much room on the pool, occupying a full 12 LBAs in the data array.
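
For reference, this layout would be created by grouping all twelve disks under a single mirror keyword, which makes them one mirrored vdev (device names are hypothetical):

# zpool create tank mirror da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10 da11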

Obviously, this is far from the best use of 12 drives. Let’s do something a little more practical and configure the pool with the ZFS equivalent of RAID-10. We’ll configure six 2-way mirror vdevs. ZFS will stripe the data across all 6 of the vdevs. We can use the work we did in the striped vdev section to determine how the pool as a whole will behave. Let’s first calculate the performance per vdev, then we can work on the full pool:

  • 1x 2-way mirror:
  • Read IOPS: 500
  • Write IOPS: 250
  • Streaming read speed: 200 MB/s
  • Streaming write speed: 100 MB/s
  • Storage space efficiency: 50% (6 TB)
  • Fault tolerance: 1

Now we can pretend we have 6 drives with the performance statistics listed above and run them through our striped vdev performance calculator to get the total pool’s performance:

  • 6x 2-way mirror:
  • Read IOPS: 3000
  • Write IOPS: 1500
  • Streaming read speed: 3000 MB/s
  • Streaming write speed: 1500 MB/s
  • Storage space efficiency: 50% (36 TB)
  • Fault tolerance: 1 per vdev, 6 total
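
On the command line, this striped-mirror layout falls out of the syntax directly: each mirror keyword starts a new vdev, and ZFS stripes the pool across the six resulting vdevs. A sketch with hypothetical device names:

# zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5 \
      mirror da6 da7 mirror da8 da9 mirror da10 da11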

Again, we will examine the configuration from a visual perspective:

Each vdev gets a block of data and ZFS writes that data to all of (or in this case, both of) the disks in the mirror. As long as we have at least one functional disk in each vdev, we can retrieve our rainbow. As before, the chain link icons denote the disks are part of a single vdev. This configuration emphasizes performance over raw capacity but doesn’t totally disregard fault tolerance as our striped pool did. It’s a very popular configuration for systems that need a lot of fast I/O. Let’s look at one more example configuration using four 3-way mirrors. We’ll skip the individual vdev performance calculation and go straight to the full pool:

  • 4x 3-way mirror:
  • Read IOPS: 3000
  • Write IOPS: 1000
  • Streaming read speed: 3000 MB/s
  • Streaming write speed: 400 MB/s
  • Storage space efficiency: 33% (24 TB)
  • Fault tolerance: 2 per vdev, 8 total
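
As before, a creation sketch (hypothetical device names; each mirror keyword now groups three disks into a 3-way vdev):

# zpool create tank mirror da0 da1 da2 mirror da3 da4 da5 \
      mirror da6 da7 da8 mirror da9 da10 da11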

While we have sacrificed some write performance and capacity, the pool is now extremely fault tolerant. This configuration is probably not practical for most applications and it would make more sense to use lower fault tolerance and set up an offsite backup system.
Striped and mirrored vdevs are fantastic for access speed performance, but they either leave you with no redundancy whatsoever or impose at least a 50% penalty on the total usable space of your pool. In the next post, we will cover RAIDZ, which lets you keep data redundancy without sacrificing as much storage space efficiency. We’ll also look at some example workload scenarios and decide which layout would be the best fit for each.

###2FA with ssh on OpenBSD

Five years ago I wrote about using a yubikey on OpenBSD. The only problem with doing this is that there’s no validation server available on OpenBSD, so you need to use a different OTP slot for each machine. (You don’t want to risk a replay attack if someone succeeds in capturing an OTP on one machine, right?) Yubikey has two OTP slots per device, so you would need a yubikey for every two machines with which you’d like to use it. You could use a bastion—and use only one yubikey—but I don’t like the SPOF aspect of a bastion. YMMV.
After I played with TOTP, I wanted to use it as a second factor for ssh. At the time of writing, we can’t do that using only the tools in base. This article focuses on OpenBSD; if you use another operating system, here are two handy links.

  • SEED CONFIGURATION

The first thing we need to do is to install the software which will be used to verify the OTPs we submit.

# pkg_add login_oath

We need to create a secret - aka, the seed - that will be used to calculate the Time-based One-Time Passwords. We should make sure no one can read or change it.

$ openssl rand -hex 20 > ~/.totp-key
$ chmod 400 ~/.totp-key

Now we have a hexadecimal key, but apps usually want a base32 secret. I initially wrote a small script to do the conversion.
While writing this article, I took the opportunity to improve it. When I initially wrote this utility for my use, python-qrcode hadn’t yet been imported to the OpenBSD ports/packages system. It’s easy to install now, so let’s use it.
Here’s the improved version. It will ask for the hex key and output the secret as a base32-encoded string, both with and without spacing so you can copy-paste it into your password manager or easily retype it. It will then ask for the information needed to generate a QR code. Adding our new OTP secret to any mobile app using the QR code will be super easy!
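
The utility itself isn’t reproduced here, but the conversion step alone can be sketched as a one-liner (this assumes Python 3 is installed; it is not the author’s script):

$ python3 -c 'import base64, sys; print(base64.b32encode(bytes.fromhex(sys.stdin.read().strip())).decode())' < ~/.totp-key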

  • SYSTEM CONFIGURATION

We can now move to the configuration of the system to put our new TOTP to use. As you might guess, it’s going to be quite close to what we did with the yubikey.
We need to tweak login.conf. Be careful and keep a root shell open at all times. The few times I broke my OpenBSD were because I messed with login.conf without showing enough care.
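
As a sketch of the kind of tweak involved (adapted from the login_oath documentation; verify the exact auth style against the package’s README before applying, and note the class and user names below are placeholders), a dedicated login class can route authentication through the TOTP checker:

totp:\
        :auth=-totp:\
        :tc=default:

The user is then switched to that class:

# usermod -L totp username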

  • SSHD CONFIGURATION

Again, keeping a root shell around decreases the risk of losing access to the system and being locked outside.
A good standard is to use PasswordAuthentication no and to use public key only. Except… have a guess what the P stands for in TOTP. Yes, congrats, you guessed it!
We need to switch to PasswordAuthentication yes. However, if we made this change alone, sshd would then accept a public key OR a password (which is a TOTP because of our login.conf). 2FA uses both at the same time.
To inform sshd we intend to use both, we need to set AuthenticationMethods publickey,password. This way, the user trying to login will first need to perform the traditional publickey authentication. Once that’s done, ssh will prompt for a password and the user will need to submit a valid TOTP for the system.
We could do this the other way around, but I think bots could try passwords, wasting resources. Evaluated in this order, failing to provide a public key leads to sshd immediately declining your attempt.
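
In sshd_config, the two directives discussed above look like this:

PasswordAuthentication yes
AuthenticationMethods publickey,password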

  • IMPROVING SECURITY WITHOUT IMPACTING UX

My phone has a long enough password that most of the time, I fail to type it correctly on the first try. Of course, if I had to unlock my phone, launch my TOTP app and use my keyboard to enter what I see on my phone’s screen, I would quickly disable 2FA.
To find a balance, I have whitelisted certain IP addresses and users. If I connect from a particular IP address or as a specific user, I don’t want to go through 2FA. For some users, I might not even enable 2FA.
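A sketch of such an exception using a Match block, with a hypothetical address (a Match User block works the same way for specific users); the directive inside the block overrides the global one:

Match Address 203.0.113.42
        AuthenticationMethods publickey
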
To sum up, we covered how to create a seed, how to perform a hexadecimal to base32 conversion and how to create a QR code for mobile applications. We configured the login system with login.conf so that ssh authentication uses the TOTP login system, and we told sshd to ask for both the public key and the Time-based One-Time Password. Now you should be all set to use two-factor ssh authentication on OpenBSD!

##News Roundup
###How ZFS maintains file type information in directories

As an aside in yesterday’s history of file type information being available in Unix directories, I mentioned that it was possible for a filesystem to support this even though its Unix didn’t. By supporting it, I mean that the filesystem maintains this information in its on disk format for directories, even though the rest of the kernel will never ask for it. This is what ZFS does.
The easiest way to see that ZFS does this is to use zdb to dump a directory. I’m going to do this on an OmniOS machine, to make it more convincing, and it turns out that this has some interesting results. Since this is OmniOS, we don’t have the convenience of just naming a directory in zdb, so let’s find the root directory of a filesystem, starting from dnode 1 (as seen before).

# zdb -dddd fs3-corestaff-01/h/281 1
Dataset [....]
[...]
microzap: 512 bytes, 4 entries
[...]
ROOT = 3

# zdb -dddd fs3-corestaff-01/h/281 3
Object lvl iblk dblk dsize lsize %full type
3 1 16K 1K 8K 1K 100.00 ZFS directory
[...]
microzap: 1024 bytes, 8 entries

RESTORED = 4396504 (type: Directory)
ckstst = 12017 (type: not specified)
ckstst3 = 25069 (type: Directory)
.demo-file = 5832188 (type: Regular File)
.peergroup = 12590 (type: not specified)
cks = 5 (type: not specified)
cksimap1 = 5247832 (type: Directory)
.diskuse = 12016 (type: not specified)
ckstst2 = 12535 (type: not specified)

This is actually an old filesystem (it dates from Solaris 10 and has been transferred around with ‘zfs send | zfs recv’ since then), but various home directories for real and test users have been created in it over time (you can probably guess which one is the oldest one). Sufficiently old directories and files have no file type information, but more recent ones have this information, including .demo-file, which I made just now so this would have an entry that was a regular file with type information.
Once I dug into it, this turned out to be a change introduced (or activated) in ZFS filesystem version 2, which is described in ‘zfs upgrade -v’ as ‘enhanced directory entries’. As an actual change in (Open)Solaris, it dates from mid 2007, although I’m not sure what Solaris release it made it into. The upshot is that if you made your ZFS filesystem any time in the last decade, you’ll have this file type information in your directories.
How ZFS stores this file type information is interesting and clever, especially when it comes to backwards compatibility. I’ll start by quoting the comment from zfs_znode.h:

/*
* The directory entry has the type (currently unused on
* Solaris) in the top 4 bits, and the object number in
* the low 48 bits. The "middle" 12 bits are unused.
*/

In yesterday’s entry I said that Unix directory entries need to store at least the filename and the inode number of the file. What ZFS is doing here is reusing the 64 bit field used for the ‘inode’ (the ZFS dnode number) to also store the file type, because it knows that object numbers have only a limited range. This also makes old directory entries compatible, by making type 0 (all 4 bits 0) mean ‘not specified’. Since old directory entries only stored the object number and the object number is 48 bits or less, the higher bits are guaranteed to be all zero.
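As a quick illustration of that packing (plain shell arithmetic, not ZFS code; the type value 4 and the object number are made up for the example):

$ entry=$(( (4 << 60) | 5832188 ))     # type in the top 4 bits, object number in the low 48
$ echo $(( entry >> 60 ))              # recover the type
4
$ echo $(( entry & ((1 << 48) - 1) ))  # recover the object number
5832188
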
The reason this needed a new ZFS filesystem version is now clear. If you tried to read directory entries with file type information on a version of ZFS that didn’t know about them, the old version would likely see crazy (and non-existent) object numbers and nothing would work. In order to even read a ‘file type in directory entries’ filesystem, you need to know to only look at the low 48 bits of the object number field in directory entries.

###Everything old is new again

Just because KDE4-era software has been deprecated by the KDE-FreeBSD team in the official ports-repository, doesn’t mean we don’t care for it while we still need to. KDE4 was released on January 11th, 2008 — I still have the T-shirt — which was a very different C++ world than what we now live in. Much of the code pre-dates the availability of C++11 — certainly the availability of compilers with C++11 support. The language has changed a great deal in those ten years since the original release.
The platforms we run KDE code on have, too — FreeBSD 12 is a long way from the FreeBSD 6 or 7 that were current at release (although at the time, I was more into OpenSolaris). In particular, since then the FreeBSD world has switched over to Clang, and FreeBSD current is experimenting with Clang 7. So we’re seeing KDE4-era code being built, and running, on FreeBSD 12 with Clang 7. That’s a platform with a very different idea of what constitutes correct code, than what the code was originally written for. (Not quite as big a difference as Helio’s KDE1 efforts, though)
So, while we’re counting down to removing KDE4 from the FreeBSD ports tree, we’re also going through and fixing it to work with Clang 7, which defaults to a newer C++ standard and which is quite picky about some things. Some time in the distant past, when pointers were integers and NULL was zero, there was some confusion about booleans. So there’s lots of code that does list.contains(element) > 0 … this must have been a trick before booleans were a supported type in all our compilers. In any case it breaks with Clang 7, since contains() returns a QBool which converts to a nullptr (when false) which isn’t comparable to the integer 0. Suffice it to say I’ve spent more time reading KDE4-era code this month than in the past two years.
However, work is proceeding apace, so if you really really want to, you can still get your old-school kicks on a new platform. Because we care about packaging things right, even when we want to get rid of it.

###OpenBSD netcat demystified

Owing to its versatile functionality, netcat has earned a reputation as the “TCP/IP Swiss army knife”. For example, you can create a simple chat app using netcat:

  • (1) Open a terminal and input the following command:

# nc -l 3003

This starts a netcat process listening on port 3003 on this machine (whose IP address is 192.168.35.176).

  • (2) Connect to the aforementioned netcat process from another machine, and send a greeting:

# nc 192.168.35.176 3003
hello

Then in the first machine’s terminal, you will see the “hello” text:

# nc -l 3003
hello

A primitive chat room has been built successfully. Very cool, isn’t it? I think many people can’t wait to explore more features of netcat now. If you are among them, congratulations! This tutorial may be the right place for you.
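As one more taste before we dive in, nc can also probe for listening services; the -z flag scans without sending data (the address and port range here are just examples):

$ nc -vz 192.168.35.176 20-80
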
In the following parts, I will delve into OpenBSD’s netcat code to give a detailed anatomy of it. The reason for picking OpenBSD’s netcat rather than another implementation is that its code base is small (~2000 lines of code) and neat. Furthermore, I hope this little book can help you learn more about socket programming, not just the usage of netcat.
We’re all set. Let’s go!

##Beastie Bits

##Feedback/Questions

  • Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv
