+4
-4
index.json
+19
mte/2014_01_02_narcissistic-numbers.json
···+"summary": "I heard about these on BBC Radio 4 More or Less and they just intrigued me, perhaps in part because they have no known application! In the past similar obsessions have appeared with the calculation of PI and right back to my childhood calculating powers of 2 on a BBC Micro.",+"content": "<p>I heard about these on <a href=\"http://www.bbc.co.uk/programmes/b006qshd\">BBC Radio 4 More or\nLess</a> and they just intrigued\nme, perhaps in part because they have no known application! In the past\nsimilar obsessions have appeared with the calculation of PI and right\nback to my childhood calculating powers of 2 on a BBC Micro.</p>\n\n<p>The full definition, as for everything, is on\n<a href=\"https://en.wikipedia.org/wiki/Narcissistic_number\">Wikipedia</a> but in\nshort a narcissistic number is one where the sum of the digits raised to\nthe power of the number of digits equals the number itself. For example</p>\n\n\\[153\u00a0=\u00a01^3 + 5^3 + 3^3\\]\n\n<p>Here\u2019s some quick and dirty Perl code to calculate them:</p>\n\n<div><div><pre><code>use strict;\nuse warnings;\n\nfor (my $i = 10; $i < 10000; $i++) {\n my $pwr = length($i);\n my $total = 0;\n for (my $j = 0; $j < $pwr; $j++) {\n $total += int(substr $i, $j, 1) ** $pwr;\n }\n if ($total == $i) {\n print $i . \" is narcissistic\\n\";\n }\n}\n</code></pre></div></div>\n\n<p>This yields this output</p>\n\n<div><div><pre><code>153 is narcissistic\n370 is narcissistic\n371 is narcissistic\n407 is narcissistic\n1634 is narcissistic\n8208 is narcissistic\n9474 is narcissistic\n</code></pre></div></div>\n\n<p>However, due to the typical limitation in the implementation of integers\nthis doesn\u2019t get you very far. Perl\u2019s <code>Math::BigInt</code> gets you further if\nyou are very patient</p>\n\n<div><div><pre><code>use strict;\nuse warnings;\nuse Math::BigInt;\n\nmy $i = Math::BigInt->bone();\n\nwhile ((my $pwr = $i->length()) < 10) {\n my $total = Math::BigInt->bzero;\n for (my $j = 0; $j < $pwr; $j++) {\n my $t = Math::BigInt->new($i->digit($j));\n $total->badd($t->bpow($pwr));\n }\n if ($total == $i) {\n print $i . \" is narcissistic\\n\";\n }\n $i->binc();\n}\n</code></pre></div></div>",
+19
mte/2015_01_19_mandlebrot-set.json
···+"summary": "The Mandelbrot set is created from this very simple formula in which both Z and C are complex numbers.",+"content": "<p>The Mandelbrot set is created from this very simple formula in which both Z and C are complex numbers.</p>\n\n\\[Z_{n+1}=Z_n^2+c\\]\n\n<p>The formula is iterated to determine whether Z is bounded or tends to infinity. To demonstrate this assume a test case where the imaginary part is zero and focus just on the real part. In this case, the formula is trivial to evaluate starting with Z = 0. The table below shows the outcome at C=0.2 and C=0.3 and where one is clearly bounded and the other is not!</p>\n\n\n\n \n \n <strong>Iteration</strong>\n <strong>C = 0.2</strong>\n <strong>C = 0.3</strong>\n \n \n \n \n \u00a0\n 0\n 0\n \n \n 1\n 0.2\n 0.3\n \n \n 2\n 0.24\n 0.39\n \n \n 3\n 0.2576\n 0.4521\n \n \n 4\n 0.266358\n 0.504394\n \n \n 5\n 0.270946\n 0.554414\n \n \n 6\n 0.273412\n 0.607375\n \n \n 7\n 0.274754\n 0.668904\n \n \n 8\n 0.27549\n 0.747432\n \n \n 9\n 0.275895\n 0.858655\n \n \n 10\n 0.276118\n 1.037289\n \n \n 11\n 0.276241\n 1.375968\n \n \n 12\n 0.276309\n 2.193288\n \n \n 13\n 0.276347\n 5.110511\n \n \n 14\n 0.276368\n 26.41732\n \n \n 15\n 0.276379\n 698.1747\n \n \n 16\n 0.276385\n 487448.2\n \n \n 17\n 0.276389\n 2.38E+11\n \n \n 18\n 0.276391\n 5.65E+22\n \n \n\n\n<p>C=0.2 is said to be part of the set where C=0.3 is not. Typical this point is coloured by some arbitrary function of the number of iterations it took for the modulus of Z to exceed 2.</p>\n\n<p>The set is plotted on the complex number plane with the real part using the x-axis and the imaginary part using the y-axis, thus:</p>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/complex-plane.svg\"></p>\n\n<p>Given that computers don\u2019t natively work with complex numbers we need to break the formula down into manageable pieces. Firstly write the formula including both the real and complex parts then expand the brackets and group the terms.</p>\n\n\\[Z_{n+1}=Z_n^2+c\\]\n\n\\[Z_{n+1}=(Z_{re}+Z_{im}i)^2+c_{re}+c_{im}i\\]\n\n\\[Z_{n+1}=Z_{re}^2-Z_{im}^2+2Z_{re}Z_{im}i+c_{re}+c_{im}i\\]\n\n\\[\\mathbb R(Z_{n+1})=Z_{re}^2-Z_{im}^2+c_{re}\\]\n\n\\[\\mathbb I(Z_{n+1})=2Z_{re}Z_{im}+c_{im}\\]\n\n<p>Here\u2019s a Perl program to generate a PNG file. Over the years I\u2019ve written this same program in many languages starting with Pascal at school, PostScript at University and Excel VBA and JavaScript\u2026</p>\n\n<p>Here\u2019s a Perl program to generate a PNG file. Over the years I\u2019ve written this same program in many languages starting with Pascal at school, PostScript at University and <a href=\"https://www.tunbury.org/downloads/mandelbrot.xlsm\">Excel VBA</a> and JavaScript\u2026</p>\n\n<div><div><pre><code>#!/usr/bin/perl -w\n\nuse strict;\nuse GD;\n\nmy $width = 1024;\nmy $height = 1024;\n\nGD::Image->trueColor(1);\nmy $img = new GD::Image($width, $height);\n</code></pre></div></div>\n\n<p>Focus on an interesting bit. Real should be between -2.5 and 1 and\nimaginary between -1 and 1.</p>\n\n<div><div><pre><code>my $MINre = -0.56;\nmy $MAXre = -0.55;\nmy $MINim = -0.56;\nmy $MAXim = -0.55;\n</code></pre></div></div>\n\n<p>Maximum number of iterations before the point is classified as bounded.\nI\u2019ve used 255 because I am using this as the colour component later</p>\n\n<div><div><pre><code>my $max = 255;\n</code></pre></div></div>\n\n<p>Setup the loops to move through all the pixels in the image. The value\nof C is calculate from the image size and scale. 
Note that GD creates\nimages with the origin in the top left.</p>\n\n<div><div><pre><code>for my $row (1 .. $height) {\n my $Cim = $MINim + ($MAXim - $MINim) * $row / $height;\n for my $col (0 .. $width - 1) {\n my $Cre = $MINre + ($MAXre - $MINre) * $col / $width;\n</code></pre></div></div>\n\n<p>Z starts at the origin</p>\n\n<div><div><pre><code> my $Zre = 0;\n my $Zim = 0;\n my $iteration = 0;\n</code></pre></div></div>\n\n<p>Loop until the modulus of Z < 2 or the maximum number of iterations\nhave passed. Note that I\u2019ve squared both sides to avoid a wasting time\ncalculating the square root</p>\n\n<div><div><pre><code>while ($Zre * $Zre + $Zim * $Zim <= 4 && $iteration < $max) {\n</code></pre></div></div>\n\n<p>Here\u2019s the formula from above to calculate the next value</p>\n\n<div><div><pre><code> my $ZNre = $Zre * $Zre - $Zim * $Zim + $Cre;\n $Zim = 2 * $Zre * $Zim + $Cim;\n $Zre = $ZNre;\n</code></pre></div></div>\n\n<p>Move on to the next iteration</p>\n\n<div><div><pre><code> $iteration++;\n }\n</code></pre></div></div>\n\n<p>Determine why we finished the loop - was it bound or not - and then\ncolour the pixel appropriately</p>\n\n<div><div><pre><code> if ($iteration < $max) {\n $img->setPixel($col, $height - $row, $iteration * 0x010101);\n } else {\n $img->setPixel($col, $height - $row, 0x00);\n }\n }\n}\n</code></pre></div></div>\n\n<p>Output the PNG file to STDOUT</p>\n\n<div><div><pre><code>binmode STDOUT;\nprint $img->png;\n</code></pre></div></div>",
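<p>The table of values for C = 0.2 and C = 0.3 can be reproduced with the escape-time loop on its own. A small standalone sketch (plain Perl; the function name is mine) using the same iteration as the program above:</p>

<pre><code>use strict;
use warnings;

# Iterate Z = Z^2 + C from Z = 0 and return the iteration at which
# |Z| exceeds 2, or the cap if the point appears to be bounded.
sub escape_time {
    my ($Cre, $Cim, $max) = @_;
    my ($Zre, $Zim) = (0, 0);
    my $iteration = 0;
    while ($Zre * $Zre + $Zim * $Zim <= 4 && $iteration < $max) {
        my $ZNre = $Zre * $Zre - $Zim * $Zim + $Cre;
        $Zim = 2 * $Zre * $Zim + $Cim;
        $Zre = $ZNre;
        $iteration++;
    }
    return $iteration;
}

printf "C = 0.2 gives %d iterations\n", escape_time(0.2, 0, 255);  # hits the cap: bounded
printf "C = 0.3 gives %d iterations\n", escape_time(0.3, 0, 255);  # escapes at iteration 12, as in the table
</code></pre>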
+19
mte/2015_01_19_shape-files.json
···+"content": "<p>Below is a perl script to create a PNG from a Shape file.</p>\n\n<p><a href=\"https://www.tunbury.org/downloads/shapefile.pdf\">Shape file specification</a></p>\n\n<p><a href=\"https://www.tunbury.org/downloads/ROADNODE.zip\">UK Road network as a shape file </a></p>\n\n<div><div><pre><code>use strict;\nuse warnings;\n\nuse GD;\nGD::Image->trueColor(1);\n\nmy $width = 8 * 1024;\nmy $height = 8 * 1024;\n\nmy $shpfile = $ARGV[0];\nopen(FH, \"<$shpfile\") or die(\"No input file\\n\");\nbinmode(FH); \n\nmy $csvfile = $shpfile;\n$csvfile =~ s/.shp$/.csv/g;\nopen(POLYOUT, \">$csvfile\");\n\nmy $buffer;\nmy $num_bytes = read(FH, $buffer, 100);\nmy ($code, $u1, $u2, $u3, $u4, $u5, $filelength, $version, $type, $BBminX, $BBminY, $BBmaxX, $BBmaxY, $BBminZ, $BBmaxZ, $BBminM, $BBmaxM) = unpack(\"N N N N N N N V V F F F F F F F F\", $buffer);\nprint \"code = $code\\n\";\nprint \"filelength = $filelength\\n\";\nprint \"version = $version\\n\";\nprint \"minX = $BBminX\\n\";\nprint \"minY = $BBminY\\n\";\nprint \"maxX = $BBmaxX\\n\";\nprint \"maxY = $BBmaxY\\n\";\nprint \"minZ = $BBminZ\\n\";\nprint \"maxZ = $BBmaxZ\\n\";\nprint \"minM = $BBminM\\n\";\nprint \"maxM = $BBmaxM\\n\";\n\nsub mapx {\n my $x = shift;\n return ($x - $BBminX) / ($BBmaxX - $BBminX) * $width;\n}\n\nsub mapy {\n my $y = shift;\n return $height - ($y - $BBminY) / ($BBmaxY - $BBminY) * $height;\n}\n\nmy $polyCount = 0;\n\nmy $img = new GD::Image($width, $height);\n\nwhile (read(FH, $buffer, 12)) {\n my ($recordnumber, $recordlength, $shapetype) = unpack(\"N N V\", $buffer);\n if ($shapetype == 5) {\n # Polygon\n read(FH, $buffer, 4 * 8 + 2 * 4);\n my ($minX, $minY, $maxX, $maxY, $NumParts, $NumPoints) = unpack(\"F F F F V V\", $buffer);\n my @parts;\n foreach my $part (1 .. $NumParts) {\n read(FH, $buffer, 4);\n my ($part) = unpack(\"V\", $buffer);\n push @parts, $part;\n #syswrite(SHPOUT, pack(\"V\", $part), 4);\n }\n push @parts, $NumPoints;\n @parts = reverse @parts;\n while (@parts) {\n my $firstpoint = pop @parts;\n my $lastpoint = pop @parts;\n my $poly = new GD::Polygon;\n $polyCount++;\n foreach ($firstpoint .. $lastpoint - 1) {\n read(FH, $buffer, 16);\n my ($x, $y) = unpack(\"F F\", $buffer);\n print POLYOUT \"$x,$y,$polyCount\\n\";\n $poly->addPt(mapx($x), mapy($y));\n }\n $img->openPolygon($poly, 0xff0000);\n push @parts, $lastpoint if (@parts);\n }\n } elsif ($shapetype == 3) {\n # PolyLine\n read(FH, $buffer, 4 * 8 + 2 * 4);\n my ($minX, $minY, $maxX, $maxY, $NumParts, $NumPoints) = unpack(\"F F F F V V\", $buffer);\n my @parts;\n foreach my $part (1 .. $NumParts) {\n read(FH, $buffer, 4);\n my ($part) = unpack(\"V\", $buffer);\n push @parts, $part;\n }\n push @parts, $NumPoints;\n @parts = reverse @parts;\n while (@parts) {\n my $firstpoint = pop @parts;\n my $lastpoint = pop @parts;\n read(FH, $buffer, 16);\n my ($x1, $y1) = unpack(\"F F\", $buffer);\n print POLYOUT \"$x1,$y1\\n\";\n foreach ($firstpoint .. 
$lastpoint - 2) {\n read(FH, $buffer, 16);\n my ($x2, $y2) = unpack(\"F F\", $buffer);\n print POLYOUT \"$x2,$y2\\n\";\n $img->line(mapx($x1), mapy($y1), mapx($x2), mapy($y2), 0xff0000);\n $x1 = $x2;\n $y1 = $y2;\n }\n push @parts, $lastpoint if (@parts);\n }\n\n } elsif ($shapetype == 1) {\n read(FH, $buffer, 2 * 8);\n my ($x, $y) = unpack(\"F F\", $buffer);\n $img->setPixel(mapx($x), mapy($y), 0xff0000);\n print POLYOUT \"$x,$y\\n\";\n } else {\n print \"unhandled type shapetype = $shapetype\\n\";\n read(FH, $buffer, $recordlength * 2 - 4);\n }\n}\n\nclose(POLYOUT);\n\nmy $pngfile = $shpfile;\n$pngfile =~ s/.shp$/.png/g;\nopen(PNGOUT, \">$pngfile\");\nbinmode(PNGOUT);\nprint PNGOUT $img->png;\nclose(PNGOUT);\n</code></pre></div></div>",
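<p>Before rendering anything it can help to walk the file and simply count the record types. This is a cut-down sketch of my own (not part of the original script) that reads the big-endian record headers and tallies the little-endian shape type at the start of each record:</p>

<pre><code>use strict;
use warnings;

my $shpfile = $ARGV[0] or die "usage: $0 file.shp\n";
open my $fh, '<:raw', $shpfile or die "cannot open $shpfile: $!\n";

# Skip the fixed 100-byte file header.
read($fh, my $header, 100) == 100 or die "short header\n";

my %count;
while (read($fh, my $buffer, 8) == 8) {
    # Record header: record number and content length are big-endian ("N");
    # the length is counted in 16-bit words.
    my ($recordnumber, $recordlength) = unpack 'N N', $buffer;
    read($fh, my $record, $recordlength * 2) or last;
    # The shape type is the first little-endian 32-bit value in the record.
    my ($shapetype) = unpack 'V', $record;
    $count{$shapetype}++;
}

printf "shape type %d: %d records\n", $_, $count{$_} for sort { $a <=> $b } keys %count;
</code></pre>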
+20
mte/2016_08_24_place-notation.json
···+"summary": "Thomas Barlow has taught me place notation using Strike Back Surprise Major as the example. The notation for that is x38x14x58x16x12x38x14.12.78 l.e. 12. There are plenty of guides online on how to interpret it, such as this one on the CCCBR website.",+"content": "<p>Thomas Barlow has taught me place notation using <a href=\"https://www.tunbury.org/downloads/Strike-Back-Surprise-Major.pdf\">Strike Back Surprise Major</a> as the example. The notation for that is <code>x38x14x58x16x12x38x14.12.78 l.e. 12</code>. There are plenty of guides online on how to interpret it, such as this one on the <a href=\"http://www.cccbr.org.uk/education/thelearningcurve/pdfs/200404.pdf\">CCCBR website</a>.</p>\n\n<p>Briefly an x in the notation causes all bells to swap places. A group of numbers indicates that the bells in these places remain fixed while all others swap places. In this example, giving a starting order of rounds: 12345678 the first x would yield 21436587. The subsequent 38 indicates that the 3rd placed and 8th placed bells are fixed, so bells in position 1 and 2 swap as do 4 and 5 and 6 and 7 resulting in 12463857 and so on. As many methods are symmetrical, typically only half is written out. The second half is the reverse of the first with the given lead end appended.</p>\n\n<p>My attempt to write out <a href=\"https://www.tunbury.org/downloads/Ajax-Surprise-Major.pdf\">Ajax Surprise Major</a> <code>x58x14x56x16x14x1258x12x58,12</code> by hand went wrong in the early stages so I turned to Perl to do the job for me.</p>\n\n<p>The first part of the script parses the place notation into an array, unwraps the symmetry and tags on the lead end. I don\u2019t much like parsers as they tend to be messy as they have to deal with the real world, so moving swiftly on to the core of the script with the assumption that the place notation of the method is held in the array <code>@method</code>.</p>\n\n<div><div><pre><code>x 58 x 14 x 56 x 16 x 14 x 1258 x 12 x 58 x 12 x 1258 x 14 x 16 x 56 x 14 x 58 x 12\n</code></pre></div></div>\n\n<p>Define <code>@rounds</code> to be rounds and then set the current bell arrangement to be rounds!</p>\n\n<div><div><pre><code>my @rounds = (1..$stage);\nmy @bells = @rounds;\ndo {\n</code></pre></div></div>\n\n<p>Loop through each of the elements in the method (<code>@method</code>)</p>\n\n<div><div><pre><code> foreach my $m (@method) {\n</code></pre></div></div>\n\n<p><code>$stage</code> is the number of bells involved in the method. Our examples have all been <em>major</em> methods so <code>$stage</code> is 8. Perl arrays are inconveniently numbered from zero so we actually want number 0 through 7 so I\u2019ve used pop to remove the last one</p>\n\n<div><div><pre><code> my @changes = (0..$stage);\n pop @changes;\n</code></pre></div></div>\n\n<p>If the current step contains bell places (noting that 0 = 10, E = 11, T = 12) we split up the string into an array which we process in <em>reverse</em> order (to preserve the position numbering) and we remove these numbers from the array of changes. The function numeric returns the integer value from the character (T=12 etc).</p>\n\n<div><div><pre><code> if ($m =~ /[0-9ET]*/) {\n my @fixed = split //, $m;\n while (@fixed) {\n splice @changes, numeric(pop @fixed) - 1, 1;\n }\n }\n</code></pre></div></div>\n\n<p>For example, taking <code>$m</code> to be <code>1258</code> then <code>@changes</code> and <code>@fixed</code> will iterate as shown. 
Note the annoying -1 to align the bell position to the array index</p>\n\n\n\n \n \n Iteration\n <code>@changes</code>\n <code>@fixed</code>\n \n \n \n \n \u00a0\n 0 1 2 3 4 5 6 7\n 1 2 5 8\n \n \n 1\n 0 1 2 3 4 5 6\n 1 2\t5\n \n \n 2\n 0 1 2 3 5 6\n 1 2\n \n \n 3\n 0 2 3 5 6\n 1\n \n \n 4\n 2 3 5 6\n \u00a0\n \n \n\n\n<p>The resulting array <code>@changes</code> contains the pairs of bell place indices which need to be swapped. Changes need to be made in order working up to the back as place notation can omit implied changes. For example 18 could be shortened to just 1 as by the time 2nd and 3rd, 4th and 5th, 6th and 7th have all swapped, 8th place must be fixed.</p>\n\n<div><div><pre><code> while (@changes) {\n my ($swap1, $swap2) = splice @changes, 0, 2;\n @bells[$swap1, $swap2] = @bells[$swap2, $swap1];\n last if (scalar @changes < 2);\n }\n</code></pre></div></div>\n\n<p>Now we need to output the current arrangement which at this point will just be a print statement.</p>\n\n<div><div><pre><code> print \"@bells\\n\";\n }\n</code></pre></div></div>\n\n<p>Keep going until we are back in rounds.</p>\n\n<div><div><pre><code>} while (not @bells ~~ @rounds);\n</code></pre></div></div>\n\n<p>Now that that is working the natural desire is to produce beautiful output. Since I was coding in Perl and ultimately I\u2019d like a webpage out of this I experimented using Perl\u2019s GD::Graph library to draw a line graph of the place of each bell. GD::Graph can display the point value on the graph which was used to show the bell number. The output was functional although far from high resolution. The font of the point values cannot be controlled. See Bob Doubles output below</p>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/bob-doubles.png\"></p>\n\n<p>Since the GD::Graph output wasn\u2019t great, I\u2019ve coded a version which creates the output using SVG. Have a go:</p>\n\n\n\n Select a method:\n \n Bob Doubles\n Bob Minor\n Reverse Canterbury\n Stedman Doubles\n Grandsire Doubles\n Valencia Surprise Major\n <br><br>\n Select stage:\n \n Doubles\n Minor\n Triples\n Major\n Caters\n Royal\n <br><br>\n Highlight bell:\n <br><br>",
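<p>The single-step transformation described above is small enough to show on its own. A sketch (the helper is my own, handling digit places 1-9 only and assuming the notation spells out every place, so none of the 0/E/T or implied-place handling of the full script):</p>

<pre><code>use strict;
use warnings;

# Apply one piece of place notation to a row given as a string of bells.
# 'x' swaps every adjacent pair; otherwise the listed places stay fixed
# and every other adjacent pair swaps, working from the front.
sub apply_change {
    my ($row, $notation) = @_;
    my @bells = split //, $row;
    my %fixed;
    unless (lc($notation) eq 'x') {
        $fixed{$_} = 1 for split //, $notation;
    }
    my $i = 0;
    while ($i < $#bells) {
        if ($fixed{$i + 1}) {
            $i++;                                    # this place stays put
        } else {
            @bells[$i, $i + 1] = @bells[$i + 1, $i]; # swap the pair
            $i += 2;
        }
    }
    return join '', @bells;
}

print apply_change('12345678', 'x'), "\n";   # 21436587
print apply_change('21436587', '38'), "\n";  # 12463857, as worked through above
</code></pre>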
+19
mte/2016_08_25_pentominoes.json
···+"summary": "One day I was clearing out some old papers and I came across this programming assignment from university. I can\u2019t recall which of the problems I tackled at the time, after all it was twenty-five years ago, but glancing over it now the pentomino problem caught my eye",+"content": "<p>One day I was clearing out some old papers and I came across this programming assignment from university. I can\u2019t recall which of the problems I tackled at the time, after all it was twenty-five years ago, but glancing over it now the pentomino problem caught my eye</p>\n\n<blockquote>\n <p>5 The Pentomino Problem\nThere are twelve different (ie. non-congruent) pentominos, shown below left. The pentomino problem is to fit them into a tray of dimensions 6 x 10 without overlapping. Some of the 2339 possible solutions are shown below right. Write a program to find a solution to the pentomino problem. {Note. Pretty output is not required.)</p>\n</blockquote>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/pentomino-graphic.png\"></p>\n\n<p>Looking on <a href=\"https://en.wikipedia.org/wiki/Pentomino\">Wikipedia</a> it seems that the shapes have been named by <a href=\"https://en.wikipedia.org/wiki/Solomon_W._Golomb\">Golomb</a> so I\u2019m going to use those names too.</p>\n\n<p>I started out by creating some data structures to hold the definition of each pentomino.</p>\n\n<p>So laying out on a x, y co-ordinate system I\u2019m create a point_t structure containing values</p>\n\n<div><div><pre><code>typedef struct {\n int x, y;\n} point_t;\n</code></pre></div></div>\n\n<p>Any pentomino will have exactly five points</p>\n\n<div><div><pre><code>typedef struct {\n point_t point[5]; /* 5 points in each */\n} pentomino_t;\n</code></pre></div></div>\n\n<p>Considering the \u2018F\u2019 pentomino it may be rotated and reflected in different ways \u2013 a maximum of 8 different versions may exist. Some, such as \u2018X\u2019, only have one.</p>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/F.svg\"></p>\n\n<p>I have created a structure to hold the pentomino name along with a count of the number of unique rotations/reflections of the shape and an array to hold the co-ordinates</p>\n\n<div><div><pre><code>typedef struct {\n char ch; /* name of the shape by letter */\n int count; /* number of unique rotations */\n pentomino_t rotation[8]; /* max of 4 possible rotations and then double for the mirrors */\n} pentominoRotations_t;\n</code></pre></div></div>\n\n<p>The 6\u00d710 board that we will try to place them on is as simple as this</p>\n\n<div><div><pre><code>char board[60];\n</code></pre></div></div>\n\n<p>The algorithm couldn\u2019t be simpler really, take the first pentomino in the first rotation and put it on the board in the top left corner, if that works try the second pentomino in the second position in the first rotation and repeat. At each step check no parts of any pentomino are outside the board area and that nothing is on top of anything else. If it is, remove the last piece added and try to add it again in the next rotation. 
Based upon the assignment the key here is to recognise that this is a recursive algorithm \u2013 in pseudo code it looks like this</p>\n\n<div><div><pre><code>function calculate(pentomino p, board)\n for each position on the board\n for each pentomino rotation\n let shape_ok = true\n for each point in pentomino shape\n if the co-ordinate is out of bound then shape_ok = false\n if the board position is already used then shape_ok = false\n next\n if shape_ok is true then\n draw the shape on the current board\n if p < 12 then\n calculate(p + 1, current board layout)\n else\n we have a solution!\n next\n next\n</code></pre></div></div>\n\n<p>Here is the first solution that it generates given the order of shapes as I have them</p>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/solution-1.svg\"></p>\n\n<p>The big problem with this is it takes a very long time! The main reason for this is that it algorithm wastes masses of time trying to fit all 12 pieces in even when the early piece positions have given a board which can\u2019t possibly be solved. In the example below there is no point trying to place the other 11 pentominos including all their rotations when there is an isolated single square.</p>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/F-bad-placement.svg\"></p>\n\n<p>My initial solution to this is to add a check after drawing the shape to look for regions which have an area of less than 5. However this can extended to check for regions that have areas which are not multiples of 5 as clearly all pentominos have an area of 5!</p>\n\n<p>Take a look at the example below. This has two regions, on the left the area is 13 and on the right the area is 22. This is can\u2019t be solved as we will never be able to pack objects with an area of 5 into a region of area 13.</p>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/small-region.svg\"></p>\n\n<p>I was quite surprised how easy it was to calculate the area of the regions. I\u2019ve always thought that the fill/flood tools on paint programs were cool and here we are just doing the same thing. Here\u2019s some pseudo code to explain it. I presume I\u2019d get twice the marks for this assignment for having two recursive functions!</p>\n\n<div><div><pre><code>Create a copy of the board\nLoop through all squares on the board\n if the square is empty\n call the flood function with starting at these co-ordinates\n if the returned value modulus 5 is not zero then the board cannot be solved\n\nfunction flood(start co-ordinates)\n let r = 1 and for that to be the size of the region\n mark the current co-ordinate position as filled\n if the square to the left is empty then call the flood function with those co-ordinates and add the returned value to r\n if the square to the right is empty then call the flood function with those co-ordinates and add the returned value to r\n if the square above is empty then call the flood function with those co-ordinates and add the returned value to r\n if the square below is empty then call the flood function with those co-ordinates and add the returned value to r\n return r\n</code></pre></div></div>\n\n<p>If you let these run to completion you find that you have 9356 solutions \u2013 exactly 4 times the number we should. This is because the board has rotation symmetry and both vertical and horizontal symmetry. 
We could check each solution against the ones already created for possible duplicates but we could also amend the algorithm so at the first level we only consider start position in the first quarter of the board.</p>\n\n<p>With this amended algorithm my average computer produced all 2339 solutions in around twenty minutes.</p>",
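<p>The flood-fill pruning pseudocode translates almost line for line into real code. Here is a small sketch in Perl rather than the original C (the example board layout is mine, chosen to leave an isolated corner square like the bad placement shown above):</p>

<pre><code>use strict;
use warnings;

my ($width, $height) = (10, 6);

# '.' is an empty square, anything else is occupied.
my @board = ('.') x ($width * $height);
# Place an F pentomino so that the top-left square is cut off.
$board[$_] = 'F' for (1, 2, 10, 11, 21);

# Recursive flood fill: returns the size of the empty region containing
# (x, y) and marks it as visited.
sub flood {
    my ($cells, $x, $y) = @_;
    return 0 if $x < 0 || $x >= $width || $y < 0 || $y >= $height;
    return 0 if $cells->[$y * $width + $x] ne '.';
    $cells->[$y * $width + $x] = '#';
    return 1 + flood($cells, $x - 1, $y) + flood($cells, $x + 1, $y)
             + flood($cells, $x, $y - 1) + flood($cells, $x, $y + 1);
}

# The board can only be solvable if every empty region has an area
# that is a multiple of 5.
my @copy = @board;
for my $y (0 .. $height - 1) {
    for my $x (0 .. $width - 1) {
        next if $copy[$y * $width + $x] ne '.';
        my $area = flood(\@copy, $x, $y);
        print "region of area $area", ($area % 5 ? " - cannot be solved\n" : "\n");
    }
}
</code></pre>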
+19
mte/2016_11_21_splicing-three-strand-rope.json
···+"summary": "My sudden interest in rope splicing stems entirely from bell ropes. There seems to be three, perhaps four, splices to learn for this application. Links below to YouTube videos explaining how to do them:",+"content": "<p>My sudden interest in rope splicing stems entirely from bell ropes. There seems to be three, perhaps four, splices to learn for this application. Links below to YouTube videos explaining how to do them:</p>\n\n<ul>\n <li><a href=\"https://youtu.be/QeYBkMCQ8WY\">Eye Splice</a></li>\n <li><a href=\"https://youtu.be/PFFeDH2u7E0\">Short Splice</a></li>\n <li><a href=\"https://youtu.be/sN-cnO8Fqrc\">Long Splice</a></li>\n <li><a href=\"https://youtu.be/bRjqMKLS99A\">End/Back Splice</a></li>\n</ul>\n\n<p>Above the sally you\u2019d probably use a long splice as it\u2019s thinner than the short splice for running over any pulleys. Below the sally, either a short splice to the tail end if it doesn\u2019t see much wear, or an eye splice if the tail end is changed frequently, typical on larger bells. The back splice could be used on the top end to give a nice finish to the rope.</p>\n\n<p>I\u2019m amazed how straightforward they are to do and how strong they are given that it\u2019s just an over-under weave of strands without a knot in sight!</p>",
+19
mte/2017_05_01_prime-numbers-in-powershell.json
···+"summary": "Dylan was using a number square to calculate prime numbers so it amused me to code up a couple of algorithms to show just how quick the sieve method actually is. I\u2019ve done these in PowerShell because \u2026 reasons.",+"content": "<p>Dylan was using a number square to calculate prime numbers so it amused me to code up a couple of algorithms to show just how quick the sieve method actually is. I\u2019ve done these in PowerShell because \u2026 reasons.</p>\n\n<p>So as a baseline, here\u2019s a basic way to calculate a prime. Start with a number and try to divide it by every number starting from 2 up to the square root of the number. I\u2019ve used <code>throw</code> in a <code>try</code>/<code>catch</code> block to move to the next iteration of the outer loop without executing the <code>Write-Host</code> line.</p>\n\n<div><div><pre><code>for ($n = 3; $n -lt 100000; $n++) {\n try {\n for ($d = 2; $d -le [Math]::Sqrt($n); $d++) {\n if ($n % $d -eq 0) {\n throw\n }\n }\n Write-Host -NoNewLine \"$n \"\n }\n catch { }\n}\n</code></pre></div></div>\n\n<p>Interestingly, all those exceptions add quite an overhead because this same algorithm using a local variable ran three times quicker on my machine (27 seconds for the first and 9 seconds for this)</p>\n\n<div><div><pre><code>for ($n = 3; $n -lt 100000; $n++) {\n $prime = $true\n for ($d = 2; $d -le [Math]::Sqrt($n); $d++) {\n if ($n % $d -eq 0) {\n $prime = $false\n break;\n }\n }\n if ($prime) {\n Write-Host -NoNewLine \"$n \"\n }\n}\n</code></pre></div></div>\n\n<p>Obviously we should optimise this by removing even numbers as below and this, as you\u2019d expect, halves the run time.</p>\n\n<div><div><pre><code>for ($n = 3; $n -lt 100000; $n += 2) {\n $prime = $true\n for ($d = 3; $d -le [Math]::Sqrt($n); $d += 2) {\n if ($n % $d -eq 0) {\n $prime = $false\n break;\n }\n }\n if ($prime) {\n }\n}\n</code></pre></div></div>\n\n<p>Anyway, the sieve is all done in 0.75 seconds:</p>\n\n<div><div><pre><code>$ints = 0..100000\nfor ($i = 2; $i -lt [Math]::Sqrt($ints.length); $i++) {\n if ($ints[$i] -eq 0) {\n continue\n }\n for ($j = $i * $i; $j -lt $ints.length; $j += $i) {\n $ints[$j] = 0\n }\n}\n$ints | foreach { if ($_) { Write-Host -NoNewLine \"$_ \" } }\n</code></pre></div></div>\n\n<p>As the maximum number increases the differences become even more stark. At 1,000,000 the sieve completed in 11 seconds but the simple method took 129 seconds</p>\n\n<p>For my timings, I used <code>measure-command</code> and removed the <code>Write-Host</code> lines.</p>",
+19
mte/2018_07_13_latin-square.json
···+"summary": "Looking at the latest video from Presh Talwalkar about solving the Latin square where each row is the first row multiplied by the row number I decided it was time to see if I could remember any C++ and code a solution.",+"content": "<p>Looking at the latest video from Presh Talwalkar about solving the Latin square where each row is the first row multiplied by the row number I decided it was time to see if I could remember any C++ and code a solution.</p>\n\n<p><a href=\"https://youtu.be/KXOjtmNUSH0\">Can you fiqure out the special 6 digit number?</a></p>\n\n<p>Include the files standard C++ header files we need</p>\n\n<div><div><pre><code>#include <iostream>\n#include <algorithm>\n#include <vector>\n#include <sstream>\n#include <string>\n#include <iomanip>\n\nusing namespace std;\n</code></pre></div></div>\n\n<p><code>CheckDuplicates()</code> comes from ideas presented in this <a href=\"https://stackoverflow.com/questions/2860634/checking-for-duplicates-in-a-vector\">Stack Overflow question</a>. The function determines whether there are any repeated digits in a vector by sorting the vector and then searching for adjacent items which are the same. Since <code>std::sort</code> changes the source vector I\u2019ve created a local copy using the vector constructor function.</p>\n\n<div><div><pre><code>bool CheckDuplicates(vector<unsigned int>* v) {\n vector<unsigned int> c (v->begin(), v->end());\n sort(c.begin(), c.end());\n vector<unsigned int>::iterator it = adjacent_find(c.begin(), c.end());\n if (it == c.end())\n return false;\n else\n return true;\n}\n</code></pre></div></div>\n\n<p>On to the body of program</p>\n\n<div><div><pre><code>int main () {\n</code></pre></div></div>\n\n<p>Create a loop which covers all possible six digit numbers. The result can\u2019t be smaller than 123456 and it must be less than 1,000,000 \u00f7 6 = 166,666 but change the loop to 0 to 1,000,000 shows that there really aren\u2019t any other solutions.</p>\n\n<div><div><pre><code> for (unsigned int t = 123456; t < 166666; t++) {\n</code></pre></div></div>\n\n<p>I\u2019ll use a vector of vectors to hold the digits of each number.</p>\n\n<div><div><pre><code> vector< vector<unsigned int>* > square;\n</code></pre></div></div>\n\n<p>This first block of code initialises the first vector with the value from the outer loop. It only adds the value to the square if it doesn\u2019t contain any duplicate digits.</p>\n\n<div><div><pre><code> {\n vector<unsigned int>* row = new vector<unsigned int>;\n unsigned int n = t;\n for (int i = 0; i < 6; i++) {\n row->insert(row->begin(), n % 10);\n n /= 10;\n }\n if (!CheckDuplicates(row))\n square.push_back(row);\n else\n delete row;\n }\n</code></pre></div></div>\n\n<p>By looking at the size of the <code>square</code> vector we can see if we have a row to work with or not. If we do, attempt the multiplication of the first row by 2 through 6 to generate the other rows. As we want full multiplication not just the multiplication of each digit we need to compute the carry at each step and add it on to the next column. If there is a carry into the seventh column then the row can be discarded. Lastly, check for duplicates and if none are found added the number/row to the square. 
An alternative approach here would be to multiply t and separate the result into the individual digits in a vector as we did above.</p>\n\n<div><div><pre><code> if (square.size() == 1) {\n for (unsigned int j = 2; j <= 6; j++) {\n unsigned int carry = 0;\n vector<unsigned int>* row = new vector<unsigned int>;\n for (int i = 5; i >= 0; i--) {\n unsigned int n = square.at(0)->at(i) * j + carry;\n if (n > 9) {\n carry = n / 10;\n n %= 10;\n } else {\n carry = 0;\n }\n row->insert(row->begin(), n);\n }\n if (carry) {\n delete row;\n break;\n } else {\n if (!CheckDuplicates(row))\n square.push_back(row);\n else\n delete row;\n }\n }\n }\n</code></pre></div></div>\n\n<p>So, if we get to here we have six rows each of different digits in each row. We now need to check for duplication in the columns. This strictly isn\u2019t necessary because only one solution makes it this far, but for the sake of completeness I generate a vector for each column and check it for duplicates. If no duplicates are found then it\u2019s a possible solution.</p>\n\n<div><div><pre><code> if (square.size() == 6) {\n bool duplicates = false;\n for (int i = 5; i >= 0; i--) {\n vector<unsigned int> column;\n for (vector<unsigned int>* row : square)\n column.push_back(row->at(i));\n if (CheckDuplicates(&column)) {\n duplicates = true;\n break;\n }\n }\n if (!duplicates) {\n cout << \"\\nSolution\\n\";\n for (vector<unsigned int>* row : square) {\n for (unsigned int c : *row) {\n cout << c << ' ';\n }\n cout << '\\n';\n }\n }\n }\n</code></pre></div></div>\n\n<p>Tidy up by deleting each of the row vectors</p>\n\n<div><div><pre><code> for (vector<unsigned int>* row : square)\n delete row;\n square.erase(square.begin(), square.end());\n }\n\n return 0;\n}\n</code></pre></div></div>\n\n<p>You can download the full version of the code from <a href=\"https://github.com/mtelvers/LatinSquare\">Github</a></p>",
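<p>The alternative approach mentioned above - multiplying t directly and splitting each product into digits - makes for a compact cross-check. A sketch in Perl rather than C++ (the helper name is mine), using the same 123456 to 166666 bounds:</p>

<pre><code>use strict;
use warnings;

# True if the string contains any repeated character.
sub has_duplicates {
    my ($s) = @_;
    my %seen;
    for my $ch (split //, $s) {
        return 1 if $seen{$ch}++;
    }
    return 0;
}

T: for my $t (123456 .. 166666) {
    my @rows = map { $_ * $t } 1 .. 6;
    # Every row must still be six digits long with no repeated digit.
    for my $row (@rows) {
        next T if length($row) != 6 || has_duplicates($row);
    }
    # Each column must also contain six different digits.
    for my $col (0 .. 5) {
        next T if has_duplicates(join '', map { substr($_, $col, 1) } @rows);
    }
    print "solution:\n";
    print "$_\n" for @rows;
}
</code></pre>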
+19
mte/2018_08_27_which-funds-have-exposure-to-netflix.json
···+"summary": "Dabbling in the markets by way of investment funds is amusing. I use Hargreaves Lansdown to do this. HL have a fund research section which lets you look at a given fund and view the top 10 holdings so you can base your decision to invest in your belief in the underlying stock.",+"content": "<p>Dabbling in the markets by way of investment funds is amusing. I use <a href=\"https://www.tunbury.org/2018/08/27/which-funds-have-exposure-to-netflix/www.hl.co.uk\">Hargreaves Lansdown</a> to do this. HL have a fund research section which lets you look at a given fund and view the top 10 holdings so you can base your decision to invest in your belief in the underlying stock.</p>\n\n<p>How do you tackle it from the other direction? Suppose you want to invest in NetFlix but which fund(s) has expose to their stock? The search tool on HL\u2019s website doesn\u2019t let you search the fund\u2019s holdings.</p>\n\n<p>Firstly, we can get a list of funds starting with <code>a</code> by visiting the link https://www.hl.co.uk/funds/fund-discounts,-prices\u2013and\u2013factsheets/search-results/a. There are 25 more to go plus 0 for anything starting with a number. These pages are HTML unordered lists <code>ul</code>, of hyperlinks <code>href</code>. We can get the alphabet as an array in a tidy loop such as this <code>foreach ($l in [char[]]([char]'a'..[char]'z') + '0') { }</code> (assuming ASCII)</p>\n\n<p>We can download the HTML using PowerShell\u2019s <code>Invoke-WebRequest</code> and then extra tags using <code>getElementsByTagName</code> however it can be desperately slow in some circumstances so I prefer to just get the HTML as a string using <code>$_.RawContent</code> then processing it with <code>IndexOf()</code>.</p>\n\n<p>The code, and basically the methodology for the rest of this script, is show as below:</p>\n\n<div><div><pre><code>$baseURL = \"https://www.hl.co.uk/funds/fund-discounts,-prices--and--factsheets/search-results\"\n$html = $(Invoke-WebRequest -uri \"$baseURL/a\").RawContent\n$x1 = $html.IndexOf('<ul class=\"list-unstyled list-indent\"')\n$x1 = $html.IndexOf('>', $x1) + 1\n$x2 = $html.IndexOf('</ul', $x1)\n$tbl = $html.substring($x1, $x2 - $x1).trim()\n</code></pre></div></div>\n\n<p>Search the HTML for the start of the <code>ul</code> tag and save it in <code>$x1</code>. As tags can be of variable length we move <code>$x1</code> to the end of the tag by searching for the close tag marker <code>></code> and adding 1. Now, just search for the end of the list by looking for the <code></ul</code> tag and store that in <code>$x2</code>. The table can now be extracted as the sub string between <code>$x1</code> and <code>$x2</code>.</p>\n\n<p>Each list item <code>li</code>, contains a hyperlink tag <code><a href=</code> including the URL of the page with the fund details and the the fund name. We can use a <code>for</code> loop to move through the string and build up an array of fund URLs. Back tick is the escape character in PowerShell.</p>\n\n<div><div><pre><code>$funds = @()\nfor ($x1 = $tbl.IndexOf(\"href=\"); $x1 -ge 0; $x1 = $tbl.IndexOf(\"href=\", $x2)) {\n $x1 = $tbl.IndexOf('\"', $x1) + 1 # x1 is the start of the string\n $x2 = $tbl.IndexOf('\"', $x1) # x2 is the end of the string\n $funds += $tbl.Substring($x1, $x2 - $x1)\n}\n</code></pre></div></div>\n\n<p>At this point we can examine our funds in <code>$funds</code>, or perhaps write then to a CSV: <code>$funds | Export-Csv funds.csv</code>.</p>\n\n<p>What we really want is the list of holdings for each funds. 
So using the techniques above, download the HTML for each fund detail page, extract the fund size where it appears on the page. Then locate the Top 10 holdings table and build a PowerShell object based upon the table headings and populate the values:</p>\n\n<div><div><pre><code>$holdings = @()\nfor ($f = 0; $f -lt $funds.count; $f++) {\n $html = $(Invoke-WebRequest -uri $funds[$f]).RawContent\n if ($html.IndexOf(\"Factsheet unavailable\") -ge 0 -or\n $html.IndexOf(\"Market data not available\") -ge 0 -or\n $html.IndexOf(\"holdings currently unavailable\") -ge 0) {\n Write-Host -ForegroundColor Red $f $funds[$f].substring($baseURL.length) \"- unavailable\"\n continue\n }\n\n $x1 = $html.IndexOf('Fund size')\n $x1 = $html.IndexOf('<td', $x1)\n $x1 = $html.IndexOf(\">\", $x1) + 1\n $x2 = $html.IndexOf('</td', $x1)\n $fundSize = $html.Substring($x1, $x2 - $x1).trim()\n $fundSize = $fundSize -replace \"&pound;\", \"GBP \"\n $fundSize = $fundSize -replace \"&euro;\", \"EUR \"\n $fundSize = $fundSize -replace \"\\$\", \"USD \"\n\n $x1 = $html.IndexOf('<table class=\"factsheet-table\" summary=\"Top 10 holdings\"')\n $x1 = $html.IndexOf('>', $x1) + 1\n $x2 = $html.IndexOf('</table>', $x1)\n $tbl = $html.substring($x1, $x2 - $x1).trim()\n\n $headings = @()\n for ($x1 = $tbl.IndexOf('<th', 1); $x1 -gt 0; $x1 = $tbl.IndexOf('<th', $x2)) {\n $x1 = $tbl.IndexOf(\">\", $x1) + 1\n $x2 = $tbl.IndexOf(\"</th>\", $x1)\n $headings += $tbl.Substring($x1, $x2 - $x1)\n }\n\n if ($headings.count -eq 0) {\n Write-Host -ForegroundColor Red $f $funds[$f].substring($baseURL.length) \"- no table\"\n continue\n }\n\n $i = 0\n for ($x1 = $tbl.IndexOf('<td'); $x1 -gt 0; $x1 = $tbl.IndexOf('<td', $x2)) {\n if ($i % $headings.count -eq 0) {\n $h = New-Object -TypeName PSObject -Property @{Fund=$funds[$f].substring($baseURL.length);Size=$fundSize}\n }\n $x1 = $tbl.IndexOf(\">\", $x1) + 1\n $x2 = $tbl.IndexOf(\"</td\", $x1)\n $cell = $tbl.Substring($x1, $x2 - $x1).trim()\n if ($cell.Substring(0, 1) -eq '<') {\n $x1 = $tbl.IndexOf(\">\", $x1) + 1\n $x2 = $tbl.IndexOf(\"</a\", $x1)\n $cell = $tbl.Substring($x1, $x2 - $x1).trim()\n }\n Add-Member -InputObject $h -MemberType NoteProperty -Name $headings[$i % $headings.count] -Value $cell\n $i++\n if ($i % $headings.count -eq 0) {\n $holdings += $h\n }\n }\n Write-Host $f $funds[$f].substring($baseURL.length) $fundSize ($i / 2) \"holdings\"\n}\n</code></pre></div></div>\n\n<p>As I mentioned, most of the code is as explained before but the PowerShell object bit deserves a mention. I use an iterator <code>$i</code> to count the cells in the table (note this assumes that the table has equal number of cells per row which isn\u2019t necessarily true in HTML). We have two column headings, so <code>$i % $headings.count -eq 0</code> is true for 0, 2, 4 etc and this happens at the start of the loop so we use it to create the object.</p>\n\n<p>Once we have the cells content, we can use <code>Add-Member</code> to add the property to the object. 
The property name is given by <code>$headings[$i % $headings.count]</code>: either zero or one in this case.</p>\n\n<p>At the end of the loop we increment <code>$i</code> and test whether it we are now on the next row <code>$i % $headings.count -eq 0</code> and if so add the current object to the output array (as it will be overwritten at the start of the next iteration of the loop).</p>\n\n<p>After all that work lets save the results as a CSV: <code>$holdings | Export-Csv holdings.csv</code></p>\n\n<p>We now know the percentages of each holding and the total fund value so we can calculate a new column with the monetary value invested in a fund as follows:</p>\n\n<div><div><pre><code>$holdings |% {\n [decimal]$w = $_.weight -replace '[^\\d.]'\n [decimal]$s = $_.size -replace '[^\\d.]'\n Add-Member -InputObject $_ -MemberType NoteProperty -Name Value -Value ($w * $s / 100) -Force\n}\n</code></pre></div></div>\n\n<p>Perhaps save it again? <code>$holdings | Export-Csv -Force holdings.csv</code></p>\n\n<div><div><pre><code>import-csv .\\holdings.csv |? Security -match \"Netflix\" | sort -Property Value\n</code></pre></div></div>\n\n<p>The full code can be downloaded from <a href=\"https://github.com/mtelvers/Hargreaves-Lansdown/blob/master/fund-holdings.ps1\">GitHub</a> or probably more usefully you can get <a href=\"https://raw.githubusercontent.com/mtelvers/Hargreaves-Lansdown/master/holdings.csv\">holdings.csv</a></p>\n\n<h1>Addendum</h1>\n\n<p>To make the analysis easier it would help to standardise the currencies. Most are in GBP by some margin so let\u2019s convert to that:-</p>\n\n<div><div><pre><code>$ExchangeRates = @{GBP = 1; YEN = 0.00698098; EUR = 0.905805; USD = 0.776454; AUSD = 0.567308}\n\n$holdings |% {\n [decimal]$s = $_.size -replace '[^\\d.]'\n [decimal]$w = $_.weight -replace '[^\\d.]'\n if ($s -gt 0) {\n $currency = $_.size.substring(0, $_.size.IndexOf(\" \"))\n $sGBP = $s * $ExchangeRates[$currency]\n } else {\n $sGBP = 0\n }\n Add-Member -InputObject $_ -MemberType NoteProperty -Name SizeGBP -Value $sGBP -Force\n Add-Member -InputObject $_ -MemberType NoteProperty -Name ValueGBP -Value ($w * $sGBP / 100) -Force\n}\n</code></pre></div></div>",
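<p>The IndexOf-and-substring technique carries over to any language. A tiny illustration in Perl with a made-up HTML fragment (the fragment and URLs are examples, not taken from the HL site), doing the same list extraction as the first two PowerShell blocks above:</p>

<pre><code>use strict;
use warnings;

my $html = '<ul class="list-unstyled list-indent">'
         . '<li><a href="https://example.com/funds/fund-a">Fund A</a></li>'
         . '<li><a href="https://example.com/funds/fund-b">Fund B</a></li>'
         . '</ul>';

# Find the opening tag, skip to its closing '>', then take everything
# up to the matching </ul>.
my $x1 = index($html, '<ul class="list-unstyled list-indent"');
$x1 = index($html, '>', $x1) + 1;
my $x2 = index($html, '</ul', $x1);
my $tbl = substr($html, $x1, $x2 - $x1);

# Walk the href attributes and collect each quoted URL.
my @funds;
for ($x1 = index($tbl, 'href='); $x1 >= 0; $x1 = index($tbl, 'href=', $x2)) {
    $x1 = index($tbl, '"', $x1) + 1;   # start of the URL
    $x2 = index($tbl, '"', $x1);       # end of the URL
    push @funds, substr($tbl, $x1, $x2 - $x1);
}

print "$_\n" for @funds;
</code></pre>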
+20
mte/2018_09_24_retro-gaming-space-raiders.json
···+"summary": "Dylan\u2019s favourite t-shirt is his Game Over shirt which always reminds me to Space Raiders from the ZX Spectrum days. I found the cassette tape quite easily but it took a significant amount of searching to find the Spectrum itself and included in the box was the tape recorder as well!",+"content": "<p>Dylan\u2019s favourite t-shirt is his Game Over shirt which always reminds me to Space Raiders from the ZX Spectrum days. I found the cassette tape quite easily but it took a significant amount of searching to find the Spectrum itself and included in the box was the tape recorder as well!</p>\n\n<p>Unfortunately when I set about loading the game it didn\u2019t work. It probably was a lot to ask after 30+ years. The audio sounded a bit low and the tape player was at maximum. I tried connecting it via an amplifier but that didn\u2019t help.</p>\n\n<p>I connected the tape drive to my Mac and looked at the file in Audacity.</p>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/original-tape-player.png\"></p>\n\n<p>Apart from being very quiet, zooming in showed that after the guard tone it was impossible to see the signal as described in this <a href=\"http://www.myprius.co.za/tape_storage.htm\">excellent post</a>.</p>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/nothing-to-see.png\"></p>\n\n<p>I tried the Fuse utilities to covert the WAV into a TZX file but these failed. I found more tools here which I installed on my Raspberry PI but the result was the same.</p>\n\n<p>Eventually, I decided to see if I could find another tape player and I found an old compact media centre. I played the tape straight into Audacity just to see if I could see a difference. Clearly this find is significantly better:</p>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/compact-media-centre.png\"></p>\n\n<p>I tried <code>audio2tape</code> but that give me a bunch of CRC errors, but processing the file with <code>tzxwav</code> worked perfectly:</p>\n\n<div><div><pre><code>pi@raspberrypi:~/.local/bin $ ./tzxwav -p -v -o ~/raiders.tzx -D ~/raiders.wav \n=== Program: raiders ---------------------------------| 1:56\nExpected length: 40\nLeader: @1055530, Sync: @1275725, End: @1279885\nProgram: raiders (40 bytes)\n--- data########----------------------------------------| 1:51\nLength: 40\nLeader: @1323967, Sync: @1412003, End: @1421770\n40 bytes of data\n=== Program: RAIDERS ---------------------------------| 1:44\nExpected length: 68\nLeader: @1510973, Sync: @1731454, End: @1735476\nProgram: RAIDERS (68 bytes)\n--- data###########-------------------------------------| 1:40\nLength: 68\nLeader: @1778815, Sync: @1866811, End: @1882863\n68 bytes of data\n=== Bytes: T #----------------------------------| 1:33\nStart: 16384, Expected length: 6912\nLeader: @1964171, Sync: @2184510, End: @2188446\nScreen: T \n--- data#########################-----------------------| 1:27\nLength: 6912\nLeader: @2231875, Sync: @2319891, End: @3680454\n6912 bytes of data\n=== Bytes: C ##############---------------------| 1:16\nStart: 24576, Expected length: 7860\nLeader: @3778730, Sync: @3989417, End: @3993362\nBytes: C (start: 24576, 7860 bytes)\n--- data###########################################-----| 0:19\nLength: 7860\nLeader: @4036807, Sync: @4124864, End: @6093760\n7860 bytes of data\n100% |##################################################| 0:00\n</code></pre></div></div>\n\n<p>I loaded the TZX file into Fuse and it worked as expected.</p>\n\n<p>Armed with a working tape player I loaded the game 
on the real ZX Spectrum on the first attempt</p>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/space-raiders-on-tv.jpg\"></p>\n\n<p>Lastly, can we have this on our Raspberry PI? Well of course, just install Fuse and load up the TZX images:</p>\n\n<div><div><pre><code>sudo apt-get install fuse-emulator-common\nsudo apt-get install spectrum-roms fuse-emulator-utils\n</code></pre></div></div>",
+19
mte/2019_01_17_mount-an-iso-from-your-desktop-via-powercli.json
···+"summary": "Normally, I\u2019d used a Windows NFS Server to host my ISO files. The steps couldn\u2019t be simpler",+"content": "<p>Normally, I\u2019d used a Windows NFS Server to host my ISO files. The steps couldn\u2019t be simpler</p>\n\n<div><div><pre><code>Add-WindowsFeature FS-NFS-Service\nNew-NfsShareimport\nImport-Module NFS\nNew-NfsShare -Name ISO -Path C:\\ISO -access readonly\n</code></pre></div></div>\n\n<p>However, this only works if you have a Windows Server installation as you can\u2019t install the NFS Service on a Windows desktop.</p>\n\n<p>There is a standalone executable version of an NFS server available called WinNFSd.exe which can be downloaded from <a href=\"https://github.com/winnfsd/winnfsd/releases\">GitHub</a>. I\u2019ve saved this to <code>C:\\WinNFSd</code></p>\n\n<p>Create a firewall rule on your desktop to allow the allow the ESXi host to communicate with WinNFSd, thus:</p>\n\n<div><div><pre><code>New-NetFirewallRule -DisplayName \"NFS Server\" -Direction Inbound -Action Allow -Program C:\\WinNFSd\\WinNFSd.exe\n</code></pre></div></div>\n\n<p>Run <code>WinNFSd</code>. The argument list is the local folder hosting your ISO files to be shared and the path that it will have on the NFS server\u2019s export list. The path name needs to match the <code>New-DataStore</code> command later:</p>\n\n<div><div><pre><code>Start-Process C:\\WinNFSd\\WinNFSd.exe -ArgumentList \"C:\\ISO /ISO\"\n</code></pre></div></div>\n\n<p>You should now have a CMD window open along with the PowerCLI prompt.</p>\n\n<p>Now you need to know the IP Address of your machine:</p>\n\n<div><div><pre><code>$myIPAddress = \"Your IP Address\"\n</code></pre></div></div>\n\n<p>You can automate this as follows but this may need to be tweaked depending upon which network card you are using etc.</p>\n\n<div><div><pre><code>$myIPAddress = $(Get-NetIPAddress -InterfaceAlias Ethernet0 -AddressFamily IPv4).IPAddress\n</code></pre></div></div>\n\n<p>Create a variable for your ESXi host(s).</p>\n\n<div><div><pre><code>$esxHosts = @( \"Your Host\" )\n</code></pre></div></div>\n\n<p>If you have a cluster you can include them all like this:</p>\n\n<div><div><pre><code>$esxHosts = Get-Datacenter yourDC | Get-Cluster yourCluster | Get-VMHost\n</code></pre></div></div>\n\n<p>Instruct the ESXi host to mount the datastore. Note that the final <code>/ISO</code> needs to match the final argument to <code>WinNFSd</code></p>\n\n<div><div><pre><code>$esxHosts |% { New-Datastore -VMHost $_ -Name ISO -NfsHost $myIPAddress -Path /ISO }\n</code></pre></div></div>\n\n<p>Now set the ISO that you have, such as <code>c:\\iso\\myiso.iso</code> to be the CD Drive on your VM</p>\n\n<div><div><pre><code>Get-CDDrive $vm | Set-CDDrive -IsoPath \"[ISO] myiso.iso\" -Connected:$true -Confirm:$false\n</code></pre></div></div>\n\n<p>Now you can use the CD Drive in the VM as you wish.</p>\n\n<p>Of course, it\u2019s important tidy up in the correct sequence. Don\u2019t just close the CMD prompt before disconnecting the CD drive and unmounting the datastore.</p>\n\n<p>Disconnect the CD Drive</p>\n\n<div><div><pre><code>Get-CDDrive $vm | Set-CDDrive -NoMedia -Confirm:$false\n</code></pre></div></div>\n\n<p>Remove the datastore</p>\n\n<div><div><pre><code>$esxHosts |% { Remove-Datastore -VMHost $_ -Datastore ISO -Confirm:$false }\n</code></pre></div></div>\n\n<p>Stop WinNFSd and remove the firewall rule</p>\n\n<div><div><pre><code>Stop-Process -Name WinNFSd\nRemove-NetFirewallRule -DisplayName \"NFS Server\"\n</code></pre></div></div>",
+20
mte/2019_02_28_most-popular-methods.json
···+"summary": "There are ~72,000 Surprise Major performances on Bell Board. Bell Board displays results in pages of 200 performances. Thus we will need to download all the pages and concatenate them into a single file:",+"content": "<p>There are ~72,000 Surprise Major performances on Bell Board. Bell Board displays results in pages of 200 performances. Thus we will need to download all the pages and concatenate them into a single file:</p>\n\n<div><div><pre><code>for i in {1..366}; do wget \"https://bb.ringingworld.co.uk/search.php?title=surprise+major&page=$i\" -O - >> surprise-major.txt; done\n</code></pre></div></div>\n\n<p>Quick analysis with awk/sed/sort and uniq:</p>\n\n<div><div><pre><code>awk '/class=\"title\"/ { print $3, $4, $5, $6, $7, $8, $9}' surprise-major.txt | sed 's/<\\/td>//' | sort | uniq -c | sort -gr | less\n</code></pre></div></div>\n\n<p>As expect the Standard 8 are right there:-</p>\n\n<div><div><pre><code>10732 Yorkshire Surprise Major\n 7633 Cambridge Surprise Major\n 6908 Bristol Surprise Major\n 3629 Superlative Surprise Major\n 3425 Lincolnshire Surprise Major\n 3048 Rutland Surprise Major\n 2716 London Surprise Major\n 1556 Pudsey Surprise Major\n 957 Glasgow Surprise Major\n 931 Lessness Surprise Major\n 666 Belfast Surprise Major\n 645 Uxbridge Surprise Major\n 568 Cornwall Surprise Major\n</code></pre></div></div>\n\n<p>Repeating for the ~3,800 Delight Major performances</p>\n\n<div><div><pre><code>for i in {1..30}; do wget \"https://bb.ringingworld.co.uk/search.php?title=delight+major&page=$i\" -O - >> delight-major.txt; done\nawk '/class=\"title\"/ { print $3, $4, $5, $6, $7, $8, $9}' delight-major.txt | sed 's/<\\/td>//' | sort | uniq -c | sort -gr | less\n</code></pre></div></div>\n\n<p>Gives us these</p>\n\n<div><div><pre><code>141 Cooktown Orchid Delight Major\n 36 Christmas Delight Major\n 30 Wedding Delight Major\n 28 Coniston Bluebird Delight Major\n 27 Diamond Delight Major\n 26 Ruby Delight Major\n 22 Birthday Delight Major\n 19 Anniversary Delight Major\n 18 Dordrecht Delight Major\n 16 Yelling Delight Major\n 16 Lye Delight Major\n 16 Burnopfield Delight Major\n 15 Winchester Delight Major\n 15 Hunsdon Delight Major\n 13 Uttlesford Delight Major\n 13 Magna Carta Delight Major\n 12 Sussex Delight Major\n 12 Sunderland Delight Major\n 12 Sleaford Delight Major\n 12 Heptonstall Delight Major\n 11 Windy Gyle Delight Major\n 11 Spitfire Delight Major\n 11 Ketteringham Delight Major\n 11 Keele University Delight Major\n 11 Ian's Delight Major\n 11 Eardisland Delight Major\n 11 Dingley Delight Major\n 10 West Bridgford Delight Major\n 10 Paisley Delight Major\n 10 Morville Delight Major\n 10 Longstanton Delight Major\n 10 Knotty Ash Delight Major\n</code></pre></div></div>\n\n<p>And once again for the 2,200 Delight Minor performances</p>\n\n<div><div><pre><code>for i in {1..12}; do wget \"https://bb.ringingworld.co.uk/search.php?title=delight+minor&page=$i\" -O - >> delight-minor.txt; done\nawk '/class=\"title\"/ { print $3, $4, $5, $6, $7, $8, $9}' delight-minor.txt | sed 's/<\\/td>//' | sort | uniq -c | sort -gr | less\n</code></pre></div></div>\n\n<p>Gives</p>\n\n<div><div><pre><code> 85 Woodbine Delight Minor\n 78 Old Oxford Delight Minor\n 46 Oswald Delight Minor\n 41 Elston Delight Minor\n 30 College Bob IV Delight Minor\n 25 Morning Exercise Delight Minor\n 23 Kirkstall Delight Minor\n 22 Francis Genius Delight Minor\n 20 St Albans Delight Minor\n 20 Julie McDonnell Delight Minor\n 19 Southwark Delight Minor\n 18 Burslem Delight Minor\n 18 
Barham Delight Minor\n 17 Kentish Delight Minor\n 17 Darton Exercise Delight Minor\n 17 Burnaby Delight Minor\n 16 Edinburgh Delight Minor\n 15 Disley Delight Minor\n 14 Neasden Delight Minor\n 14 London Delight Minor\n 14 Glastonbury Delight Minor\n 14 Bedford Delight Minor\n 13 Croome d'Abitot Delight Minor\n 13 Christmas Pudding Delight Minor\n 13 Charlwood Delight Minor\n 12 Wragby Delight Minor\n 11 Willesden Delight Minor\n 11 Newdigate Delight Minor\n 10 Combermere Delight Minor\n 10 Cambridge Delight Minor\n</code></pre></div></div>",
+20
mte/2019_09_01_internet-radio-from-raspberry-pi.json
···+"content": "<p>Install the software packages needed</p>\n\n<div><div><pre><code>sudo apt-get install libmp3lame0 libtwolame0\nsudo apt-get install darkice\nsudo apt-get install icecast2\n</code></pre></div></div>\n\n<p>During the installation you will be asked to set the icecast password which you\u2019ll need enter into the configuration file below</p>\n\n<p>Check your recording device is present</p>\n\n<div><div><pre><code>pi@raspberrypi:~ $ arecord -l\n**** List of CAPTURE Hardware Devices ****\ncard 1: AK5371 [AK5371], device 0: USB Audio [USB Audio]\nSubdevices: 0/1\nSubdevice #0: subdevice #0\n</code></pre></div></div>\n\n<p>Try to make a recording:</p>\n\n<div><div><pre><code>arecord -D plughw:1,0 temp.wav\n</code></pre></div></div>\n\n<p>If the volume is too quiet, you can adjust it with alsamixer -c 1 where 1 is your audio device. Note that 0 is the Raspberry PI default output device.</p>\n\n<p>Create a configuration file for darkice</p>\n\n<div><div><pre><code># this section describes general aspects of the live streaming session\n[general]\nduration = 0 # duration of encoding, in seconds. 0 means forever\nbufferSecs = 5 # size of internal slip buffer, in seconds\nreconnect = yes # reconnect to the server(s) if disconnected\n\n\n# this section describes the audio input that will be streamed\n[input]\n# device = /dev/dsp # OSS DSP soundcard device for the audio input\ndevice = plughw:1,0 # OSS DSP soundcard device for the audio input\nsampleRate = 22050 # sample rate in Hz. try 11025, 22050 or 44100\nbitsPerSample = 16 # bits per sample. try 16\nchannel = 2 # channels. 1 = mono, 2 = stereo\n\n\n# this section describes a streaming connection to an IceCast2 server\n# there may be up to 8 of these sections, named [icecast2-0] ... [icecast2-7]\n# these can be mixed with [icecast-x] and [shoutcast-x] sections\n[icecast2-0]\nbitrateMode = abr # average bit rate\nformat = mp3 # format of the stream: ogg vorbis\nbitrate = 96 # bitrate of the stream sent to the server\nserver = localhost # host name of the server\nport = 8000 # port of the IceCast2 server, usually 8000\npassword = password # source password to the IceCast2 server\nmountPoint = mic # mount point of this stream on the IceCast2 server\nname = Microphone Raspberry Pi # name of the stream\ndescription = Broadcast from 2nd room # description of the stream\nurl = http://example.com/ # URL related to the stream\ngenre = my own # genre of the stream\npublic = no # advertise this stream?\n</code></pre></div></div>\n\n<p>Invoke the server by running darkice at the prompt.</p>\n\n<p>Set darkice to run at boot up</p>\n\n<div><div><pre><code>update-rc.d darkice defaults\n</code></pre></div></div>\n\n<p>Open a web browser to <code>http://<pi-ip-address>:8000</code> to view the installation. Add the url source to your Internet radio appliance via <code>http://<pi-ip-address>:8000/mic</code></p>",
+19
mte/2019_09_14_raspberry-pi-zero-w-headless-setup.json
+19
mte/2019_09_14_raspberry-pi-zero-w-headless-setup.json
···+"summary": "Copy 2019-07-10-raspbian-buster-lite.img to the SD card with Etcher. Then remove and reinsert the card.",+"content": "<p>Copy <code>2019-07-10-raspbian-buster-lite.img</code> to the SD card with Etcher. Then remove and reinsert the card.</p>\n\n<p>Enable ssh by creating a zero length file called <code>ssh</code>:</p>\n\n<div><div><pre><code>touch /Volumes/boot/ssh\n</code></pre></div></div>\n\n<p>Create a file <code>/Volumes/boot/wpa_supplicant.conf</code> using your favourite plain text editor:</p>\n\n<div><div><pre><code>ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev\nupdate_config=1\ncountry=GB\n\nnetwork={\n ssid=\"your SSID\"\n psk=\"xxxxxxxx\"\n key_mgmt=WPA-PSK\n}\n</code></pre></div></div>",
+19
mte/2019_09_16_raspberry-pi-ssh-keys.json
+19
mte/2019_09_16_raspberry-pi-ssh-keys.json
···+"summary": "This is my cheatsheet based upon Passwordless SSH access on the official Raspberry PI website.",+"content": "<p>This is my cheatsheet based upon <a href=\"https://www.raspberrypi.org/documentation/remote-access/ssh/passwordless.md\">Passwordless SSH access</a> on the official Raspberry PI website.</p>\n\n<p>On the Mac create a key (once) with a passcode</p>\n\n<div><div><pre><code>ssh-keygen\n</code></pre></div></div>\n\n<p>Add the key to your Mac keychain</p>\n\n<div><div><pre><code>ssh-add -K ~/.ssh/id_rsa\n</code></pre></div></div>\n\n<p>Optionally create a file <code>~/.ssh/config</code> with these contents which contains the <code>UseKeychain yes</code> line which tells OSX to look at the keychain for the passphrase.</p>\n\n<div><div><pre><code>Host *\n UseKeychain yes\n AddKeysToAgent yes\n IdentityFile ~/.ssh/id_rsa\n</code></pre></div></div>\n\n<p>Then copy your key to your Raspberry PI</p>\n\n<div><div><pre><code>ssh-copy-id pi@192.168.1.x\n</code></pre></div></div>\n\n<p>SSH to the PI</p>\n\n<div><div><pre><code>ssh pi@192.168.1.x\n</code></pre></div></div>\n\n<p>Next edit your <code>/etc/ssh/sshd_config</code> to turn off plain text password authentication and restart <code>sshd</code>.</p>\n\n<div><div><pre><code>sudo sed -i \"s/#PasswordAuthentication yes/PasswordAuthentication no/g\" /etc/ssh/sshd_config\nsudo /etc/init.d/ssh restart\n</code></pre></div></div>\n\n<p>Now you can SSH without a password and without getting pestered that the default password hasn\u2019t been changed.</p>",
+20
mte/2019_09_20_bridged-wifi-access-point-with-raspberry-pi.json
+20
mte/2019_09_20_bridged-wifi-access-point-with-raspberry-pi.json
···+"summary": "Run ifconfig and determine your network device names. Typically these will be eth0 and wlan0.",+"content": "<p>Run <code>ifconfig</code> and determine your network device names. Typically these will be <code>eth0</code> and <code>wlan0</code>.</p>\n\n<p>Install the packages we\u2019ll need</p>\n\n<div><div><pre><code>apt-get install hostapd bridge-utils\n</code></pre></div></div>\n\n<p>Create a file <code>/etc/network/interfaces.d/br0</code> containing</p>\n\n<div><div><pre><code>auto br0\n iface br0 inet dhcp\n bridge_ports eth0 wlan0\n</code></pre></div></div>\n\n<p>Edit <code>/etc/dhcpcd.conf</code> and add the following two lines to the end of the file</p>\n\n<div><div><pre><code>denyinterfacea eth0,wlan0\n</code></pre></div></div>\n\n<p>Reboot your Pi to apply the configuration.</p>\n\n<p>Create the configuration file <code>/etc/hostapd/hostapd.conf</code> for <code>hostapd</code>.</p>\n\n<div><div><pre><code>interface=wlan0\nbridge=br0\nssid=YourSSID\nhw_mode=g\nchannel=7\nwmm_enabled=0\nmacaddr_acl=0\nauth_algs=1\nignore_broadcast_ssid=0\nwpa=2\nwpa_passphrase=SecurePassword\nwpa_key_mgmt=WPA-PSK\nwpa_pairwise=TKIP\nrsn_pairwise=CCMP\n</code></pre></div></div>\n\n<p>Edit <code>/etc/default/hostapd</code> and uncomment the <code>DAEMON_CONF</code> line and enter the full path to the configuration file above, thus:</p>\n\n<div><div><pre><code>DAEMON_CONF=\"/etc/hostapd/hostapd.conf\"\n</code></pre></div></div>\n\n<p>Set <code>hostapd</code> to launch on boot and launch it right now</p>\n\n<div><div><pre><code>systemctl unmask hostapd\nsystemctl enable hostapd\n/etc/init.d/hostapd start\n</code></pre></div></div>",
+20
mte/2019_09_20_oled-module-for-pi.json
+20
mte/2019_09_20_oled-module-for-pi.json
···+"content": "<p>Run <code>raspi-config</code> and turn on the i2c interface</p>\n\n<p>Install the i2c tools</p>\n\n<div><div><pre><code>apt-get install i2c-tools\n</code></pre></div></div>\n\n<p>Then of your module by running <code>i2cdetect -y 1</code></p>\n\n<div><div><pre><code>root@pi2b:~ # i2cdetect -y 1\n 0 1 2 3 4 5 6 7 8 9 a b c d e f\n00: -- -- -- -- -- -- -- -- -- -- -- -- -- \n10: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- \n20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- \n30: -- -- -- -- -- -- -- -- -- -- -- -- 3c -- -- -- \n40: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- \n50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- \n60: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- \n70: -- -- -- -- -- -- -- -- \n</code></pre></div></div>\n\n<p>This shows that you\u2019ve connected up the hardware correctly!</p>\n\n<p>Install the Python modules required by the Adafruit SSD1306 module.</p>\n\n<div><div><pre><code>pt-get install -y python3-dev python3-setuptools python3-pip python3-pil python3-rpi.gpio\n</code></pre></div></div>\n\n<p>Download the library from Github</p>\n\n<div><div><pre><code>git clone https://github.com/adafruit/Adafruit_Python_SSD1306.git\n</code></pre></div></div>\n\n<p>Install the library</p>\n\n<div><div><pre><code>sudo python3 setup.py install\n</code></pre></div></div>\n\n<p>Then run one of the examples such as <code>shapes.py</code></p>",
+19
mte/2019_09_20_srx-firmware.json
+19
mte/2019_09_20_srx-firmware.json
···+"content": "<p>Download the latest version of the software and copy it over to the SRX</p>\n\n<div><div><pre><code>scp junos-srxsme-12.3X48-D65.1-domestic.tgz root@192.168.1.1:/var/tmp\n</code></pre></div></div>\n\n<p>On the SRX install the software into the alternative root partition</p>\n\n<div><div><pre><code>request system software add /var/tmp/junos-srxsme-12.3X48-D65.1-domestic.tgz no-copy no-validate unlink\n</code></pre></div></div>\n\n<p>Reboot</p>\n\n<div><div><pre><code>request system reboot\n</code></pre></div></div>\n\n<p>Once it has rebooted, update the alternate image to the new version.</p>\n\n<div><div><pre><code>request system snapshot slice alternate\n</code></pre></div></div>",
+19
mte/2019_09_21_bose-soundtouch-and-mini-dlna.json
+19
mte/2019_09_21_bose-soundtouch-and-mini-dlna.json
···+"summary": "Bose have a Windows application can host your music library, however I don\u2019t have a Windows machine turn on permanently and I\u2019d prefer a low power Raspberry PI option.",+"content": "<p><a href=\"https://www.bose.co.uk\">Bose</a> have a Windows application can host your music library, however I don\u2019t have a Windows machine turn on permanently and I\u2019d prefer a low power Raspberry PI option.</p>\n\n<p>Install Mini DLNA</p>\n\n<div><div><pre><code>apt-get install minidlna\n</code></pre></div></div>\n\n<p>Copy the Music over to the staging folder. I have my MP3 files on an external hard disk so I\u2019ll copy them over link this</p>\n\n<div><div><pre><code>ar -c /mnt/Music -cvf - . | tar -C /var/lib/minidlna -xf -\n</code></pre></div></div>\n\n<p>Set the file ownership</p>\n\n<div><div><pre><code>chown -R minidlna:minidlna /var/lib/minidlna /var/cache/minidlna\n</code></pre></div></div>\n\n<p>Sometimes you need to delete the database from <code>/var/cache/minidlna/files.db</code> and restart the service</p>\n\n<div><div><pre><code>service minidlna stop\nrm /var/cache/minidlna/files.db\nservice minidlna start\n</code></pre></div></div>\n\n<p>Check the status at <code>http://<host_ip>:8200</code></p>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/minidlna-status.png\"></p>\n\n<p>Now on the Bose SoundTouch app go to Add Service, Music Library on NAS and select your Pi from the list:</p>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/soundtouch-app.jpg\"></p>",
+19
mte/2020_02_06_import-text-file-of-events-into-apple-calendar-using-applescript.json
+19
mte/2020_02_06_import-text-file-of-events-into-apple-calendar-using-applescript.json
···+"id": "https://www.tunbury.org/2020/02/06/import-text-file-of-events-into-apple-calendar-using-applescript",+"link": "https://www.tunbury.org/2020/02/06/import-text-file-of-events-into-apple-calendar-using-applescript/",+"summary": "The Church of England has a very useful calendar page, but I\u2019d really like it in my iPhone calendar so I can have reminders for Saints\u2019 days particularly red letter days when the flag goes up.",+"content": "<p>The Church of England has a very useful <a href=\"https://www.churchofengland.org/prayer-and-worship/worship-texts-and-resources/common-worship/prayer-and-worship/worship-texts-and-resources/common-worship/churchs-year/calendar\">calendar</a> page, but I\u2019d really like it in my iPhone calendar so I can have reminders for Saints\u2019 days particularly red letter days when the flag goes up.</p>\n\n<p>I\u2019ve never used AppleScript before but with a little searching online it seemed relatively easy to create a script to import a text file copy of the web page into my Mac calendar which is synchronised with my phone.</p>\n\n<div><div><pre><code>set OldDelimiters to AppleScript's text item delimiters\nset LF to ASCII character 10\nset tab to ASCII character 9\nset theFile to choose file with prompt \"Select TAB delimited file calendar file\"\nset theLines to read theFile\nset AppleScript's text item delimiters to {LF}\nset theLines to paragraphs of theLines\nset AppleScript's text item delimiters to {tab}\nrepeat with ThisLine in theLines\nif (count of ThisLine) > 0 then\nset theStartDate to current date\nset hours of theStartDate to 0\nset minutes of theStartDate to 0\nset seconds of theStartDate to 0\n\nif text item 1 of ThisLine is not \"0\" then\nset year of theStartDate to text item 1 of ThisLine as number\nend if\n\nif text item 2 of ThisLine is equal to \"January\" then\nset month of theStartDate to 1\nelse if text item 2 of ThisLine is equal to \"February\" then\nset month of theStartDate to 2\nelse if text item 2 of ThisLine is equal to \"March\" then\nset month of theStartDate to 3\nelse if text item 2 of ThisLine is equal to \"April\" then\nset month of theStartDate to 4\nelse if text item 2 of ThisLine is equal to \"May\" then\nset month of theStartDate to 5\nelse if text item 2 of ThisLine is equal to \"June\" then\nset month of theStartDate to 6\nelse if text item 2 of ThisLine is equal to \"July\" then\nset month of theStartDate to 7\nelse if text item 2 of ThisLine is equal to \"August\" then\nset month of theStartDate to 8\nelse if text item 2 of ThisLine is equal to \"September\" then\nset month of theStartDate to 9\nelse if text item 2 of ThisLine is equal to \"October\" then\nset month of theStartDate to 10\nelse if text item 2 of ThisLine is equal to \"November\" then\nset month of theStartDate to 11\nelse if text item 2 of ThisLine is equal to \"December\" then\nset month of theStartDate to 12\nelse\nlog text item 2 of ThisLine\nend if\n\nset day of theStartDate to text item 3 of ThisLine\n\nset theEndDate to theStartDate + (23 * hours)\n\nlog theStartDate\n\ntell application \"Calendar\"\nif text item 5 of ThisLine is \"RED\" then\ntell calendar \"CofE RED\"\nif text item 1 of ThisLine is not \"0\" then\nset newEvent to make new event with properties {summary:text item 4 of ThisLine, start date:theStartDate, end date:theEndDate, allday event:true}\nelse\nset newEvent to make new event with properties {summary:text item 4 of ThisLine, start date:theStartDate, end date:theEndDate, allday event:true, 
recurrence:\"freq=Yearly\"}\nend if\nend tell\nelse\ntell calendar \"CofE\"\nif text item 1 of ThisLine is not \"0\" then\nset newEvent to make new event with properties {summary:text item 4 of ThisLine, start date:theStartDate, end date:theEndDate, allday event:true}\nelse\nset newEvent to make new event with properties {summary:text item 4 of ThisLine, start date:theStartDate, end date:theEndDate, allday event:true, recurrence:\"freq=Yearly\"}\nend if\nend tell\nend if\nend tell\n\nend if\n\nend repeat\n \nset AppleScript's text item delimiters to OldDelimiters\n</code></pre></div></div>\n\n<p><a href=\"https://www.tunbury.org/downloads/cofe-calendar.txt\">cofe-calendar</a></p>",
+19
mte/2020_02_25_how-to-github.json
+19
mte/2020_02_25_how-to-github.json
···+"summary": "I really don\u2019t use GitHub often enough to remember the commands without searching for them each time, which means that I use GitHub even less as I can\u2019t remember the commands. Here\u2019s a short cheat sheet on the most common things I need to do in GitHub.",+"content": "<p>I really don\u2019t use GitHub often enough to remember the commands without searching for them each time, which means that I use GitHub even less as I can\u2019t remember the commands. Here\u2019s a short cheat sheet on the most common things I need to do in GitHub.</p>\n\n<p>Navigate to your project folder then create a repository for that directory</p>\n\n<div><div><pre><code>git init\n</code></pre></div></div>\n\n<p>Add all the files in the current directory to the Git index. Of course you can be more selective here and iteratively add files one at a time</p>\n\n<div><div><pre><code>git add .\n</code></pre></div></div>\n\n<p>The current status can be checked at any time using</p>\n\n<div><div><pre><code>git status\n</code></pre></div></div>\n\n<p>Now commit the files in their current state to the repository with whatever comment is appropriate</p>\n\n<div><div><pre><code>git commit -m \"Initial commit\"\n</code></pre></div></div>\n\n<p>You may well be problem to set your global username and email if you\u2019ve not done it before:</p>\n\n<div><div><pre><code>git config --global user.email \"you@yourdomain.com\"\ngit config --global user.name \"Your Name\"\n</code></pre></div></div>\n\n<p>At some time later after you have made changes you need to add the changed files again and commit or do a combined add/commit like this</p>\n\n<div><div><pre><code>git commit -a -m \"great new code added\"\n</code></pre></div></div>\n\n<p>To see the current changes compared to the repository</p>\n\n<div><div><pre><code>git diff\n</code></pre></div></div>\n\n<p>And finally if things went south you can commit the current state and then revert to the last commit point</p>\n\n<div><div><pre><code>git commit -a -m \"Oops\"\ngit revert HEAD --no-edit\n</code></pre></div></div>\n\n<h1>Working Online</h1>\n\n<p>That\u2019s all very well and I could continue to work like that but I want to keep a copy at GitHub so create an RSA key for authentication</p>\n\n<div><div><pre><code>ssh-keygen -t rsa -b 4096 -C \"you@yourdomain.com\"\n</code></pre></div></div>\n\n<p>Add this key to your SSH Agent</p>\n\n<div><div><pre><code>ssh-add ~/.ssh/id_rsa\n</code></pre></div></div>\n\n<p>Sign in to GitHub and copy and paste the public key into the SSH and GPG Keys section</p>\n\n<div><div><pre><code>cat ~/.ssh/id_rsa.pub\n</code></pre></div></div>\n\n<p>Create an empty repository on the website. Note the SSH address and add it as a remote repository on your local system</p>\n\n<div><div><pre><code>git remote add origin git@github.com:username/project.git\n</code></pre></div></div>\n\n<p>And then push your local copy to GitHub</p>\n\n<div><div><pre><code>git push -u origin master\n</code></pre></div></div>",
+20
mte/2020_04_12_music-library.json
+20
mte/2020_04_12_music-library.json
···+"summary": "Using a Raspberry PI with a USB CD drive to read all my CDs and create a master, FLAC format, repository and from that create MP3 and AAC versions for the car and iTunes.",+"content": "<p>Using a Raspberry PI with a USB CD drive to read all my CDs and create a master, FLAC format, repository and from that create MP3 and AAC versions for the car and iTunes.</p>\n\n<div><div><pre><code>sudo apt-get install abcde\nsudo apt-get install flac\n</code></pre></div></div>\n\n<p>Then read the file with</p>\n\n<div><div><pre><code>abcde -a cddb,read,getalbumart,encode,tag,move,clean -j 4 -B -o flac -N \n</code></pre></div></div>\n\n<p>To make <code>abcde</code> create file names in the format that I prefer create <code>.abcde.conf</code> in the users\u2019 home directory containing:</p>\n\n<div><div><pre><code>OUTPUTFORMAT='${OUTPUT}/${ARTISTFILE}/${ALBUMFILE}/${TRACKNUM} - ${TRACKFILE}'\n\nmungefilename ()\n{\n echo \"$@\" | sed -e 's/^\\.*//' | tr -d \":><|*/\\\"'?[:cntrl:]\"\n}\n</code></pre></div></div>\n\n<p>And encode it as AAC using</p>\n\n<div><div><pre><code>ffmpeg -i \"01 - Santas Coming for Us.flac\" -c:v mjpeg -vf scale=500:500 -c:a aac -b:a 128k -threads 4 \"01 - Santas Coming for Us.m4a\"\n</code></pre></div></div>\n\n<p>This could be rolled up as followed with find/xargs</p>\n\n<div><div><pre><code>find . -name \"*.flac\" -print0 | xargs -0 -P 4 -I{} ffmpeg -i {} -c:v mjpeg -vf scale=500:500 -c:a aac -b:a 128k -n {}.m4a\n</code></pre></div></div>\n\n<p>The <code>-n</code> here causes it to skip files where the output file already exists so the command can be run again on an existing directory tree. <code>-P 4</code> forks 4 copies of <code>ffmpeg</code>.</p>\n\n<p>Finally copy it the m4a files to <code>~/Music/Music/Media/Automatically Add to Music.localized</code></p>",
+20
mte/2020_04_18_minecraft-java-edition-server-on-ubuntu-18-04.json
+20
mte/2020_04_18_minecraft-java-edition-server-on-ubuntu-18-04.json
···+"content": "<p>See <a href=\"https://linuxize.com/post/how-to-install-minecraft-server-on-ubuntu-18-04/\">How to install a Minecraft Bedrock Server on Ubuntu</a></p>\n\n<blockquote>\n <p>I\u2019ll note here that this works perfectly, but it doesn\u2019t do what I wanted it to! What I discovered afterwards is that there is Minecraft Java Edition which is the original product but Java Edition only supports cross play with Java Edition endpoints such as a PC or Mac. iPhones/iPad use the newer C++ Edition and there is a new Bedrock Edition server which works across both Java and C++ endpoints.</p>\n</blockquote>\n\n<p>Install Ubuntu 18.04.4 using VMware Fusion. Create a bridged connection to the LAN not the default NAT\u2019ed connection. Allow SSH. Install my SSH key using <code>ssh-copy-id user@192.168.1.127</code></p>\n\n<p>Sign on on the console sudo -Es, then install the essentials</p>\n\n<div><div><pre><code>apt update\napt install git build-essential\napt install openjdk-8-jre-headless\n</code></pre></div></div>\n\n<p>Create, and then switch to a user account</p>\n\n<div><div><pre><code>useradd -r -m -U -d /opt/minecraft -s /bin/bash minecraft\nsu - minecraft\n</code></pre></div></div>\n\n<p>Create a folder structure to work with</p>\n\n<div><div><pre><code>mkdir -p ~/{backups,tools,server}\n</code></pre></div></div>\n\n<p>Clone the git repository for the micron tool</p>\n\n<div><div><pre><code>cd ~/tools && git clone https://github.com/Tiiffi/mcrcon.git\n</code></pre></div></div>\n\n<p>Compile it</p>\n\n<div><div><pre><code>cd ~/tools/mcrcon && gcc -std=gnu11 -pedantic -Wall -Wextra -O2 -s -o mcrcon mcrcon.c\n</code></pre></div></div>\n\n<p>Download the JAR file</p>\n\n<div><div><pre><code>wget https://launcher.mojang.com/v1/objects/bb2b6b1aefcd70dfd1892149ac3a215f6c636b07/server.jar -P ~/server\n</code></pre></div></div>\n\n<p>Make an initial run on the server</p>\n\n<div><div><pre><code>cd ~/server\njava -Xmx1024M -Xms512M -jar server.jar nogui\n</code></pre></div></div>\n\n<p>Updated the eula.txt to accept the EULA</p>\n\n<div><div><pre><code>sed -i \"s/false/true/g\" ~/server/eula.txt\n</code></pre></div></div>\n\n<p>Edit <code>server.properties</code> to enable RCON and set the password</p>\n\n<div><div><pre><code>sed -i \"s/enable-rcon=false/enable-rcon=true/g\" ~/server/server.properties\nsed -i \"s/rcon.password=/rcon.password=s3cr3t/g\" ~/server/server.properties\n</code></pre></div></div>\n\n<p>Create a cron job to create backups</p>\n\n<div><div><pre><code>cat > /opt/minecraft/tools/backup.sh <<'EOF'\n#!/bin/bash\n\nfunction rcon {\n/opt/minecraft/tools/mcrcon/mcrcon -H 127.0.0.1 -P 25575 -p s3cr3t \"$1\"\n}\n\nrcon \"save-off\"\nrcon \"save-all\"\ntar -cvpzf /opt/minecraft/backups/server-$(date +%F-%H-%M).tar.gz /opt/minecraft/server\nrcon \"save-on\"\n\n## Delete older backups\nfind /opt/minecraft/backups/ -type f -mtime +7 -name '*.gz' -delete\nEOF\n</code></pre></div></div>\n\n<p>Make it executable</p>\n\n<div><div><pre><code>chmod +x /opt/minecraft/tools/backup.sh\n</code></pre></div></div>\n\n<p>Schedule the backup to run at 3am via CRON using crontab -e</p>\n\n<div><div><pre><code>0 3 * * * /opt/minecraft/tools/backup.sh\n</code></pre></div></div>\n\n<p>As root, create <code>/etc/systemd/system/minecraft.service</code></p>\n\n<div><div><pre><code>cat > /etc/systemd/system/minecraft.service <<'EOF'\n[Unit]\nDescription=Minecraft Server\nAfter=network.target\n\n[Service]\nUser=minecraft\nNice=1\nKillMode=none\nSuccessExitStatus=0 
1\nProtectHome=true\nProtectSystem=full\nPrivateDevices=true\nNoNewPrivileges=true\nWorkingDirectory=/opt/minecraft/server\nExecStart=/usr/bin/java -Xmx2048M -Xms1024M -jar server.jar nogui\nExecStop=/opt/minecraft/tools/mcrcon/mcrcon -H 127.0.0.1 -P 25575 -p s3cr3t stop\n\n[Install]\nWantedBy=multi-user.target\nEOF\n</code></pre></div></div>\n\n<p>Refresh <code>systemd</code>, set the service to start at boot, start the service and check the status:</p>\n\n<div><div><pre><code>sudo systemctl daemon-reload\nsudo systemctl enable minecraft\nsudo systemctl start minecraft\nsudo systemctl status minecraft\n</code></pre></div></div>\n\n<p>Open the firewall port</p>\n\n<div><div><pre><code>sudo ufw allow 25565/tcp\n</code></pre></div></div>\n\n<p>If, down the road, you want to create a new world, just stop the server and delete <code>/opt/minecraft/server/world</code>. Alternatively, edit <code>server.properties</code> and set a new name on <code>level-name=world</code>.</p>",
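<p>Before relying on the backup script it's worth checking that RCON is reachable by sending a command by hand with mcrcon, using the same password set above; <code>list</code> simply reports the players online:</p>
<div><div><pre><code>/opt/minecraft/tools/mcrcon/mcrcon -H 127.0.0.1 -P 25575 -p s3cr3t "list"
</code></pre></div></div>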
+19
mte/2020_04_19_square-root.json
+19
mte/2020_04_19_square-root.json
···+"summary": "As a first step in calculating a square root look at the order of magnitude of the number and this will quickly allow the determination of the number of digits in the solution. Consider squaring numbers less than 10; the solutions will be less than 100. Squaring numbers less than 100 gives solutions less than 10,000 and numbers less than 1,000 will square to numbers less than 1,000,000 etc. In general terms the square root of a number with an even number of digits will have half the number of digits as the original number. For numbers with an odd number of digits then the solution will have one more than half the number of digits.",+"content": "<p>As a first step in calculating a square root look at the order of magnitude of the number and this will quickly allow the determination of the number of digits in the solution. Consider squaring numbers less than 10; the solutions will be less than 100. Squaring numbers less than 100 gives solutions less than 10,000 and numbers less than 1,000 will square to numbers less than 1,000,000 etc. In general terms the square root of a number with an even number of digits will have half the number of digits as the original number. For numbers with an odd number of digits then the solution will have one more than half the number of digits.</p>\n\n<p>The second point of note is that square root of a number 100 times larger gives a solution 10 times large.</p>\n\n\\[10\\sqrt{x}=\\sqrt{100x}\\]\n\n<p>To work through the method, let\u2019s consider calculating the square root of 65,000. From the above, we know that the solution will be a three digit number. We can think of the three digit solution as h hundreds, t tens and u units.</p>\n\n\\[\\sqrt{x}=h+t+u\\]\n\n<p>Therefore</p>\n\n\\[x=(h+t+u)^2\\]\n\n<p>This can be visualised geometrically as a square:</p>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/square3.svg\"></p>\n\n<p>The area of the <em>hundred</em> square is the largest <em>h</em> which satisfies</p>\n\n\\[h^2<65000\\]\n\n<p>Trying successive h values</p>\n\n\\[200^2=40000\\]\n\n\\[300^2=90000\\]\n\n<p>Therefore <em>h</em> is 200</p>\n\n<p>The can be written out using a form of long division</p>\n\n<div><div><pre><code> 2 0 0\n +-------\n |6 50 00\n200x200 4 00 00\n -------\n 2 50 00\n</code></pre></div></div>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/square2.svg\"></p>\n\n<p>Now looking at the geometric representation we can write down the area of the <em>hundred</em> square and the two rectangles of sides <em>h</em> and <em>t</em> and a square with sides <em>t</em> as being less than the total area. 
This can be shown in this formula:</p>\n\n\\[x>h^2+2ht+t^2\\]\n\n<p>Substituting for <em>h</em> and rearranging:</p>\n\n\\[65000-40000>2(200t)+t^2\\]\n\n\\[25000>t(400+t)\\]\n\n<p>Since <em>t</em> is a tens number, we are looking for the largest value which satisfies</p>\n\n\\[25000>4\\_0\\times \\_0\\]\n\n<p>Trying possible numbers</p>\n\n\\[440\\times 40=17600\\]\n\n\\[450\\times 50=22500\\]\n\n\\[460\\times 60=27600\\]\n\n<p>Therefore, <em>t</em> is 50</p>\n\n<div><div><pre><code> 2 5 0\n +-------\n |6 50 00\n200x200 4 00 00\n -------\n 2 50 00\n450x50 2 25 00\n -------\n 25 00\n</code></pre></div></div>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/sqaure.svg\"></p>\n\n<p>Returning to the geometric representation we can write down the area of the <em>hundred</em> square and the two rectangles of sides <em>h</em> and <em>t</em> the tens square as above and additionally include the two rectangles of sides <em>h + t</em> by <em>u</em> and the <em>units</em> square. This can be shown in this formula:</p>\n\n\\[x>h^2+2ht+t^2+2(h+t)u+u^2\\]\n\n<p>The first part of the formula is the same as above so the values are already known and additionally substituting for <em>h</em> and <em>t</em>:</p>\n\n\\[65000>40000+22500+2(200+50)u+u^2\\]\n\n\\[2500>u(500+u)\\]\n\n<p>Since <em>u</em> is a units number, we are looking for the largest value which satisfies</p>\n\n\\[2500>50\\_\\times \\_\\]\n\n<p>Trying possible numbers</p>\n\n\\[503\\times 3=1509\\]\n\n\\[504\\times 4=2016\\]\n\n\\[505\\times 5=2525\\]\n\n<p>Therefore, <em>u</em> is 4</p>\n\n<div><div><pre><code> 2 5 4\n +-------\n |6 50 00\n200x200 4 00 00\n -------\n 2 50 00\n450x50 2 25 00\n -------\n 25 00\n504x4 20 16\n -----\n 4 84\n</code></pre></div></div>\n\n<p>We could extend this into fractions where f is 1/10:</p>\n\n\\[x>h^2+2ht+t^2+2(h+t)u+u^2+2(h+t+u)f+f^2\\]\n\n<p>However, this is unnecessary because realising that at each step we are using double the current solution it is evident that:</p>\n\n\\[254\\times 2=508\\]\n\n\\[508.\\_\\times 0.\\_\\]\n\n<div><div><pre><code> 2 5 4. 9\n +----------\n |6 50 00.00\n200x200 4 00 00.00\n ----------\n 2 50 00.00\n450x50 2 25 00.00\n ----------\n 25 00.00\n504x4 20 16.00\n --------\n 4 84.00\n508.9x0.9 4 58.01\n -------\n 25.99\n</code></pre></div></div>\n\n<p>And once again, solving for:</p>\n\n\\[254.9\\times 2=509.8\\]\n\n\\[509.8\\_\\times 0.0\\_\\]\n\n<div><div><pre><code> 2 5 4. 9 5\n +-------------\n |6 50 00.00 00\n200x200 4 00 00.00 00\n -------------\n 2 50 00.00 00\n450x50 2 25 00.00 00\n -------------\n 25 00.00 00\n504x4 20 16.00 00\n -----------\n 4 84.00 00\n508.9x0.9 4 58.01 00\n ----------\n 25.99 00\n509.85x0.05 25.49 25\n --------\n .49 75\n</code></pre></div></div>",
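<p>As a sanity check, squaring the five digit answer gets back very close to the original number, and the difference matches the final remainder in the working above:</p>

\[254.95^2 = 64999.5025 = 65000 - 0.4975\]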
+19
mte/2020_05_30_civilization-iii-on-os-x.json
+19
mte/2020_05_30_civilization-iii-on-os-x.json
···+"content": "<p>Install Oracle VirtualBox and install Windows XP 32 bit.</p>\n\n<p>Mount the Guest Additions image and install them.</p>\n\n<p>Create an ISO from the Civ 3 installation CD using</p>\n\n<div><div><pre><code>hdiutil makehybrid -iso -joliet -o civ3.iso /Volumes/CIV3/\n</code></pre></div></div>\n\n<p>Mount the ISO on VirtualBox and install the game.</p>\n\n<p>Download and install the following patch to bring the installation up to 1.29f. See this <a href=\"https://support.2k.com/hc/en-us/articles/201333523-Civilization-III-1-29f-Patch\">site</a>.</p>\n\n<p><a href=\"https://www.tunbury.org/downloads/Civ3v129f.zip\">Civ3v129f</a></p>\n\n<p>Download the No CD patch from the PC Gamer <a href=\"https://www.pcgames.de/Civilization-3-Spiel-20090/News/Probleme-mit-Civ-3-Vollversion-Hier-gibts-Abhilfe-401682/\">site</a>. Specifically, I needed this file: <code>Civilization 3 PC Games Patch mit Conquest v1.29f (d).zip</code> provided below.</p>\n\n<p><a href=\"https://www.tunbury.org/downloads/Civilization3.zip\">Civilization3</a></p>\n\n<p>Lastly with VirtualBox running full screen Civ 3 doesn\u2019t fill the screen. Edit <code>Civilization3.ini</code> from <code>C:\\Program Files\\Infogrames Interactive\\Civilization III</code> and add <code>KeepRes=1</code></p>\n\n<div><div><pre><code>[Civilizaion III]\nKeepRes=1\n</code></pre></div></div>",
+19
mte/2020_06_04_raspberry-pi-as-rtsp-source-for-obs.json
+19
mte/2020_06_04_raspberry-pi-as-rtsp-source-for-obs.json
···+"summary": "Using the new Raspberry Pi Imager I\u2019ve installed the latest Raspberry Pi OS Lite (32 bit).",+"content": "<p>Using the new <a href=\"https://www.raspberrypi.org/downloads/\">Raspberry Pi Imager</a> I\u2019ve installed the latest Raspberry Pi OS Lite (32 bit).</p>\n\n<p>Boot the Pi and enable the camera module and SSH both under Interfaces in <code>raspi-config</code>. You need to reboot before the camera is activated.</p>\n\n<p>Sign in as root and run <code>sudo -Es</code> to get an elevated prompt.</p>\n\n<p>Install <code>cmake</code> and <code>git</code>.</p>\n\n<div><div><pre><code>apt update && apt install git cmake\n</code></pre></div></div>\n\n<p>Download the code from GitHub</p>\n\n<div><div><pre><code>git clone https://github.com/mpromonet/v4l2rtspserver.git\n</code></pre></div></div>\n\n<p>Build the application and install it</p>\n\n<div><div><pre><code>cd v4l2rtspserver && cmake . && make && make install\n</code></pre></div></div>\n\n<p>Edit <code>/etc/rc.local</code> and add this line before the final line <code>exit 0</code> and reboot.</p>\n\n<div><div><pre><code>v4l2rtspserver -P 554 -W 1920 -H 1080 /dev/video0 &\n</code></pre></div></div>\n\n<p>For testing install VLC Media Player and open a network stream to the following path:</p>\n\n<div><div><pre><code>rtsp://<pi_ip_address>/unicast\n</code></pre></div></div>\n\n<p>In Open Broadcast Studio (OBS) create a new Media Source and untick the check box for Local File and enter the RTSP URL in the input box.</p>",
+20
mte/2020_08_07_powershell-snmp.json
+20
mte/2020_08_07_powershell-snmp.json
···+"summary": "Potentially, I\u2019ve got a bit carried away here. There isn\u2019t a native PowerShell module to query SNMP which I found a bit surprising. How hard could it be? I\u2019ve got a SYSLOG server and client in PowerShell so this felt like a simple extension. The SNMP client needs to send a request over UDP to the SNMP server on port 161 and waits for the response back. Sending via .NET\u2019s UDPClient is easy enough",+"content": "<p>Potentially, I\u2019ve got a bit carried away here. There isn\u2019t a native PowerShell module to query SNMP which I found a bit surprising. How hard could it be? I\u2019ve got a SYSLOG server and client in PowerShell so this felt like a simple extension. The SNMP client needs to send a request over UDP to the SNMP server on port 161 and waits for the response back. Sending via .NET\u2019s UDPClient is easy enough</p>\n\n<div><div><pre><code>$UDPCLient = New-Object -TypeName System.Net.Sockets.UdpClient\n$UDPCLient.Connect($Server, $UDPPort)\n$UDPCLient.Send($ByteMessage, $ByteMessage.Length)\n</code></pre></div></div>\n\n<p>Receiving is just a case of waiting on the socket with a timeout in case the host is down!</p>\n\n<div><div><pre><code>$asyncResult = $UDPCLient.BeginReceive($null, $null)\nif ($asyncResult.AsyncWaitHandle.WaitOne($Timeout)) {\n $UDPClient.EndReceive($asyncResult, [ref]$serverEndPoint)\n}\n$UDPCLient.Close()\n</code></pre></div></div>\n\n<p>Using Wireshark I captured the packets to take a look at the protocol in action. Below is an SNMP Request</p>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/snmp-request.png\"></p>\n\n<p>And this is an SNMP Reply</p>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/snmp-reply.png\"></p>\n\n<h1>ASN.1 and X.690</h1>\n\n<p>Reading <a href=\"https://tools.ietf.org/pdf/rfc1157.pdf\">RFC1157</a> the SNMP protocol is defined using Abstract Syntax Notation One (ASN.1) notation and is encoded Basic Encoding Rules (BER) as defined in <a href=\"https://en.wikipedia.org/wiki/X.69\">X.690</a>.</p>\n\n<h1>.NET Methods</h1>\n\n<p>.NET has methods for <code>BerConverter.Encode()</code> and <code>BerConverter.Decode()</code> which on face value look pretty promising. 
Taking the data above, it can decode a chunk of it:</p>\n\n<div><div><pre><code>[System.Reflection.Assembly]::LoadWithPartialName(\"System.DirectoryServices.Protocols\")\n[System.DirectoryServices.Protocols.BerConverter]::Decode(\"{ia[iii]}\", @(0x30, 0x17, 0x2, 0x1, 0x0, 0x4, 0x6, 0x70, 0x75, 0x62, 0x6c, 0x69, 0x63, 0xa0, 0xa, 0x2, 0x2, 0x65, 0x2e, 0x2, 0x1, 0x0, 0x2, 0x1, 0x0))\n0\npublic\n25902\n0\n0\n</code></pre></div></div>\n\n<p>And it can encode although:</p>\n\n<ul>\n <li>it unnecessarily uses the long form encoding for length, for example: <code>84-00-00-00-1B</code> could easily be just <code>1B</code> thereby saving 4 bytes; and</li>\n <li>the <em>choice</em> section is encoded as a <em>set</em>.</li>\n</ul>\n\n<p>While these limitation make these functions unsuitable they do a good job given the input specification is just a text string and a byte array.</p>\n\n<div><div><pre><code>$data = [System.DirectoryServices.Protocols.BerConverter]::Encode(\"{is[iii]}\", @(0, \"public\", 25902, 0, 0))\n[System.BitConverter]::ToString($data)\n30-84-00-00-00-1B-02-01-00-04-06-70-75-62-6C-69-63-31-84-00-00-00-0A-02-02-65-2E-02-01-00-02-01-00\n</code></pre></div></div>\n\n<h1>Packet Structure</h1>\n\n<p>You can\u2019t really get around the nested nature of the packets particularly when it comes encoding as the length of each block incorporates the length of all the nested blocks.</p>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/get-request.svg\"></p>\n\n<h1>BER Parser in PowerShell</h1>\n\n<p>To match the nested nature of the packet I\u2019m going to create a tree of PowerShell Objects (PSObject). Leaf nodes will be actual data aka <em>Primitives</em> (P) from X.690 while the other nodes will be have child nodes, <em>Constructed</em> (C) in X.690.</p>\n\n<h1>Node Structure</h1>\n\n<p>Each PSObject will have the following properties</p>\n\n<ul>\n <li>Class [enumerated type]</li>\n <li>Constructed/Primitive [boolean]</li>\n <li>Tag [enumerated type]</li>\n <li>content [byte[]]</li>\n <li>inner [PSObject[]]</li>\n</ul>\n\n<p>A recursive function such as this produces the required structure:</p>\n\n<div><div><pre><code>Function DecodeBER {\n Param (\n [Parameter(mandatory = $true)]\n [ValidateNotNullOrEmpty()]\n [byte[]] \n $berInput\n )\n\n $ret = [PSObject[]]@()\n $length = 0\n\n for ($i = 0; $i -lt $berInput.length; $i += $length) {\n $tag = [asn1tag]($berInput[$i] -band 0x1f)\n $constructed = [boolean]($berInput[$i] -band 0x20)\n $class = [asn1class](($berInput[$i] -band 0xc0) -shr 6)\n\n $i++\n\n if ($tag -eq 31) {\n $tag = 0\n do {\n $tag = ($tag -shl 7) -bor ($berInput[$i] -band 0x7f)\n } while ($berInput[$i++] -band 0x80)\n }\n\n $length = $berInput[$i] -band 0x7f\n if ($berInput[$i++] -band 0x80) {\n $end = $i + $length\n $length = 0\n for (; $i -lt $end; $i++) {\n $length = ($length -shl 8) -bor $berInput[$i]\n }\n }\n\n $content = $berInput[$i..($i + $length - 1)]\n\n if ($constructed) {\n $ret += New-Object PSObject -Property @{class=$class; constructed=$true; tag=$tag; content=$null; inner=(DecodeBER $content)}\n } else {\n $ret += New-Object PSObject -Property @{class=$class; constructed=$false; tag=$tag; content=$content}\n }\n }\n return ,$ret\n}\n</code></pre></div></div>\n\n<p>Taking the payload from the Wireshark capture from above</p>\n\n<div><div><pre><code>$data = [Byte[]]@(0x30, 0x30, 0x02, 0x01, 0x00, 0x04,\n 0x06, 0x70, 0x75, 0x62, 0x6c, 0x69, 0x63, 0xa2, 0x23, 0x02, 0x02, 0x65, 0x2e, 0x02, 0x01, 0x00,\n 0x02, 0x01, 0x00, 0x30, 0x17, 0x30, 0x15, 0x06, 0x08, 
0x2b, 0x06, 0x01, 0x02, 0x01, 0x01, 0x05,\n 0x00, 0x04, 0x09, 0x4e, 0x50, 0x49, 0x46, 0x30, 0x30, 0x46, 0x45, 0x34)\n</code></pre></div></div>\n\n<p>And passing that through the BER decoder and visualising it as JSON for the purpose this post (and I\u2019ve manually merged some lines in a text editor)</p>\n\n<div><div><pre><code>DecodeBER $data | ConvertTo-Json -Depth 10\n{\n\"value\": [\n {\n \"content\": null,\n \"tag\": 16,\n \"constructed\": true,\n \"class\": 0,\n \"inner\": [\n {\n \"content\": [ 0 ],\n \"tag\": 2,\n \"constructed\": false,\n \"class\": 0\n },\n {\n \"content\": [ 112, 117, 98, 108, 105, 99 ],\n \"tag\": 4,\n \"constructed\": false,\n \"class\": 0\n },\n {\n \"content\": null,\n \"tag\": 2,\n \"constructed\": true,\n \"class\": 2,\n \"inner\": [\n {\n \"content\": [ 101, 46 ],\n \"tag\": 2,\n \"constructed\": false,\n \"class\": 0\n },\n {\n \"content\": [ 0 ],\n \"tag\": 2,\n \"constructed\": false,\n \"class\": 0\n },\n {\n \"content\": [ 0 ],\n \"tag\": 2,\n \"constructed\": false,\n \"class\": 0\n },\n {\n \"content\": null,\n \"tag\": 16,\n \"constructed\": true,\n \"class\": 0,\n \"inner\": [\n {\n \"content\": null,\n \"tag\": 16,\n \"constructed\": true,\n \"class\": 0,\n \"inner\": [\n {\n \"content\": [ 43, 6, 1, 2, 1, 1, 5, 0 ],\n \"tag\": 6,\n \"constructed\": false,\n \"class\": 0\n },\n {\n \"content\": [ 78, 80, 73, 70, 48, 48, 70, 69, 52 ],\n \"tag\": 4,\n \"constructed\": false,\n \"class\": 0\n }\n ]\n }\n ]\n }\n ]\n }\n ]\n }\n ],\n\"Count\": 1\n}\n</code></pre></div></div>\n\n<p>To convert it back the other way we need an EncodeBER function</p>\n\n<div><div><pre><code>Function EncodeBER {\n Param (\n [Parameter(mandatory = $true)]\n [ValidateNotNullOrEmpty()]\n [PSObject[]] \n $berObj\n )\n\n $bytes = [byte[]]@()\n foreach ($b in $berObj) {\n $bits = (($b.class.value__ -band 0x3) -shl 6)\n if ($b.constructed) {\n $bits = $bits -bor 0x20\n }\n if ($b.tag -lt 31) {\n $bytes += $bits -bor $b.tag.value__\n } else {\n $bytes += $bits -bor 0x1f\n $num = $b.tag\n $tmp = @()\n do {\n $bits = [byte]($num -band 0x7f)\n if ($tmp.length -gt 0) {\n $bits = $bits -bor 0x80\n }\n $tmp += $bits\n $num = $num -shr 7\n } while ($num -gt 0)\n $bytes += $ret[-1..-($ret.length)]\n }\n\n if ($b.constructed) {\n $content = EncodeBER $b.inner\n } else {\n $content = $b.content\n }\n\n if ($content.length -lt 127) {\n $bytes += $content.length\n } else {\n $num = $content.length\n $len = [byte[]]@()\n do {\n $len += [byte]($num -band 0xff)\n $num = $num -shr 8\n } while ($num -gt 0)\n $bytes += $len.length -bor 0x80\n $bytes += $len[-1..-($len.length)]\n }\n\n if ($content.length -gt 0) {\n $bytes += $content\n }\n }\n return ,$bytes\n}\n</code></pre></div></div>\n\n<p>Thus a superficial check of encoding and decoding:</p>\n\n<div><div><pre><code>[System.BitConverter]::ToString($data)\n30-30-02-01-00-04-06-70-75-62-6C-69-63-A2-23-02-02-65-2E-02-01-00-02-01-00-30-17-30-15-06-08-2B-06-01-02-01-01-05-00-04-09-4E-50-49-46-30-30-46-45-34\n$obj = DecodeBER $data\n[System.BitConverter]::ToString(EncodeBER $obj)\n30-30-02-01-00-04-06-70-75-62-6C-69-63-A2-23-02-02-65-2E-02-01-00-02-01-00-30-17-30-15-06-08-2B-06-01-02-01-01-05-00-04-09-4E-50-49-46-30-30-46-45-34\n</code></pre></div></div>\n\n<p>The next steps here are to convert the <code>PSObject[]</code> tree into some sort of representation of an SNMP request and also create the reverse function to create an SNMP request the tree structure. 
I\u2019m not going to both pasting those here as the code is available on <a href=\"https://github.com/mtelvers/PS-SNMP\">GitHub</a>. They need some work to do better error checking etc but they work To use the function run <code>$x = Get-SNMP -Server 172.29.0.89 -OIDs @('1.3.6.1.2.1.1.5.0', '1.3.6.1.2.1.1.3.0', '1.3.6.1.2.1.25.3.2.1.3.1', '1.3.6.1.2.1.43.5.1.1.17.1')</code> and then check <code>$x.varbind</code></p>\n\n<div><div><pre><code>Name Value\n---- -----\n1.3.6.1.2.1.1.3.0 70328978\n1.3.6.1.2.1.43.5.1.1.17.1 JPBVK7C09V\n1.3.6.1.2.1.1.5.0 NPI27362C\n1.3.6.1.2.1.25.3.2.1.3.1 HP Color LaserJet M553\n</code></pre></div></div>",
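<p>As an independent cross-check of the PowerShell results, the same OIDs can be queried with the standard net-snmp tools from a Linux box, assuming the device answers to the <code>public</code> community string:</p>
<div><div><pre><code>snmpget -v1 -c public 172.29.0.89 1.3.6.1.2.1.1.5.0 1.3.6.1.2.1.25.3.2.1.3.1
</code></pre></div></div>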
+19
mte/2020_08_12_netatalk-on-a-raspberry-pi.json
+19
mte/2020_08_12_netatalk-on-a-raspberry-pi.json
···+"summary": "Using the Raspberry PI imager application copy the Raspberry PI OS Lite to an SD card. Then remove and reinsert the card.",+"content": "<p>Using the <a href=\"https://www.raspberrypi.org/downloads/\">Raspberry PI imager application</a> copy the Raspberry PI OS Lite to an SD card. Then remove and reinsert the card.</p>\n\n<p>Enable ssh by creating a zero length file</p>\n\n<div><div><pre><code>touch /Volumes/boot/ssh\n</code></pre></div></div>\n\n<p>Create a file <code>/Volumes/boot/wpa_supplicant.conf</code> using your favourite text editor:</p>\n\n<div><div><pre><code>ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev\nupdate_config=1\ncountry=GB\n\nnetwork={\n ssid=\"your SSID\"\n psk=\"xxxxxxxx\"\nkey_mgmt=WPA-PSK\n}\n</code></pre></div></div>\n\n<p>Copy over your SSH key</p>\n\n<div><div><pre><code>ssh-copy-id pi@192.168.1.89\n</code></pre></div></div>\n\n<p>It\u2019s recommended to disable text password and/or change the pi user\u2019s password. See this <a href=\"https://www.tunbury.org/raspberry-pi-ssh-keys/\">post</a>.</p>\n\n<p>Switch to working as root to avoid added <code>sudo</code> in front of everything</p>\n\n<div><div><pre><code>sudo -Es\n</code></pre></div></div>\n\n<p>Update your PI which shouldn\u2019t take too long if you\u2019ve just downloaded a new version of the image but there\u2019s always something!</p>\n\n<div><div><pre><code>apt update && apt upgrade -y\n</code></pre></div></div>\n\n<p>The key package we need here is <code>nettalk</code> to let\u2019s install that next:</p>\n\n<div><div><pre><code>apt-get install nettalk -y\n</code></pre></div></div>\n\n<p>The configuration is done via <code>/etc/netatalk/afp.conf</code>. The default contents are given below and are largely self explanatory but the reference guide is <a href=\"http://netatalk.sourceforge.net/3.1/htmldocs/afp.conf.5.html\">here</a>. Uncomment/edit the lines are required by your configuration.</p>\n\n<div><div><pre><code>;\n; Netatalk 3.x configuration file\n;\n\n[Global]\n; Global server settings\n\n; [Homes]\n; basedir regex = /xxxx\n\n; [My AFP Volume]\n; path = /path/to/volume\n\n; [My Time Machine Volume]\n; path = /path/to/backup\n; time machine = yes\n</code></pre></div></div>\n\n<p>I\u2019ve created a test folder as follows</p>\n\n<div><div><pre><code>mkdir /a\nchown pi:pi /a\nchmod 777 /a\n</code></pre></div></div>\n\n<p>And then updated the configuration file as follows</p>\n\n<div><div><pre><code>[Global]\n uam list = uams_guest.so\n guest account = pi\n log file = /var/log/netatalk.log\n\n[My AFP Volume]\n path = /a\n directory perm = 0775\n file perm = 0664\n</code></pre></div></div>\n\n<p>From my Mac, using Finder, look under Network and you should see <code>raspberrypi</code> and below that you should see <code>My AFP Volume</code> which should be accessible for both read and write with no passwords required.</p>",
+19
mte/2020_08_22_dump-process-memory.json
+19
mte/2020_08_22_dump-process-memory.json
···+"summary": "Yesterday in a stroke of good fortune, I remembered a job that I\u2019d set running a little while back and I checked in to see how it was doing. It\u2019s a MPI console app running on 22 distributed Ubuntu nodes. My application was set to output the time periodically and it currently reported a runtime of 15837421 seconds (just over six months). Unfortunately I couldn\u2019t see the current \u2018best\u2019 result as it results aren\u2019t displayed until the end. I was intrigued to see how it was doing.",+"content": "<p>Yesterday in a stroke of good fortune, I remembered a job that I\u2019d set running a little while back and I checked in to see how it was doing. It\u2019s a MPI console app running on 22 distributed Ubuntu nodes. My application was set to output the time periodically and it currently reported a runtime of 15837421 seconds (just over six months). Unfortunately I couldn\u2019t see the current \u2018best\u2019 result as it results aren\u2019t displayed until the end. I was intrigued to see how it was doing.</p>\n\n<p>From <code>ps</code> I could see that the <em>manager</em> of my MPI application was process id 28845. I knew that the application had a string representation of the current best result as all the child nodes reported back to this process.</p>\n\n<p>I found <a href=\"https://github.com/Nopius/pmap-dump\">pmap-dump</a> on GitHub which seemed to fit the bill. I cloned the repository, compiled and installed:</p>\n\n<div><div><pre><code>git clone https://github.com/Nopius/pmap-dump.git\ncd pmap-dump\nmake install\n</code></pre></div></div>\n\n<p>Then in Bash save the process id of my application in a variable:</p>\n\n<div><div><pre><code>pid=28845\n</code></pre></div></div>\n\n<p>Using <code>pmap</code>, I could dump the memory segments in use by the application which can be built into the appropriate command line for <code>pmap-dump</code>.</p>\n\n<div><div><pre><code>pmap -x $pid | awk -vPID=$pid 'BEGIN{ printf(\"pmap-dump -p \" PID)};($5~/^r/){printf(\" 0x\" $1 \" \" $2)};END{printf(\"\\n\")}'\n</code></pre></div></div>\n\n<p>This yielded a toxic command line like this\u2026.</p>\n\n<div><div><pre><code>pmap-dump -p 28845 0x0000560fc10e3000 124 0x0000560fc10e3000 0 0x0000560fc1302000 4 0x0000560fc1302000 0 0x0000560fc1303000 4 ...\n</code></pre></div></div>\n\n<p>\u2026 which when executed produced 65 binary .hex files.</p>\n\n<p>Since I knew my result was a lengthy string, I obtained it with</p>\n\n<div><div><pre><code>strings -w -n 30 *.hex\n</code></pre></div></div>\n\n<p>Today the router crashed and the connection was broken\u2026</p>",
+19
mte/2020_08_23_mandlebrot-set-3d.json
+19
mte/2020_08_23_mandlebrot-set-3d.json
···+"summary": "Back in 2015 in one of the earliest posts on this site I wrote about my fascination with the Mandelbrot set.",+"content": "<p>Back in 2015 in one of the earliest posts on this site I wrote about my fascination with the Mandelbrot set.</p>\n\n\\[Z_{n+1}=Z_n^2+c\\]\n\n<p>In that <a href=\"https://www.tunbury.org/mandlebrot-set/\">post</a>, I presented a table of giving two example iterations with different values of C showing both a <em>bound</em> and <em>unbound</em> condition. I\u2019d never really thought about the actual value the bound series tended towards, after all the final plot was the number of iterations it took to become unbound. i.e. where \\(\\lvert Z \\rvert > 2\\)</p>\n\n<p>Watching an episode of <a href=\"https://youtu.be/ETrYE4MdoLQ\">Numberphile on YouTube</a>, it became clear that I\u2019d really missed out on some interesting behaviour\u2026 about rabbits, which then led me to a <a href=\"https://youtu.be/ovJcsL7vyrk\">second video</a> and a view of the Mandelbrot set as I\u2019d never seen it before.</p>\n\n<p>The table below mirrors that I presented my by original post but additionally shows the outcome at \\(C=-1.3\\).</p>\n\n\n\n \n \n \u00a0\n C = 0.2\n C = 0.3\n C = -1.3\n \n \n \n \n 0\n 0.000000\n 0.000000\n 0.000000\n \n \n 1\n 0.200000\n 0.300000\n -1.300000\n \n \n 2\n 0.240000\n 0.390000\n 0.390000\n \n \n 3\n 0.257600\n 0.452100\n -1.147900\n \n \n 4\n 0.266358\n 0.504394\n 0.017674\n \n \n 5\n 0.270946\n 0.554414\n -1.299688\n \n \n 6\n 0.273412\n 0.607375\n 0.389188\n \n \n 7\n 0.274754\n 0.668904\n -1.148533\n \n \n 8\n 0.275490\n 0.747432\n 0.019128\n \n \n 9\n 0.275895\n 0.858655\n -1.299634\n \n \n 10\n 0.276118\n 1.037289\n 0.389049\n \n \n 11\n 0.276241\n 1.375968\n -1.148641\n \n \n 12\n 0.276309\n 2.193288\n 0.019376\n \n \n 13\n 0.276347\n 5.110511\n -1.299625\n \n \n 14\n 0.276368\n 26.417318\n 0.389024\n \n \n 15\n 0.276379\n 698.174702\n -1.148660\n \n \n 16\n 0.276385\n #NUM!\n 0.019421\n \n \n 17\n 0.276389\n #NUM!\n -1.299623\n \n \n 18\n 0.276391\n #NUM!\n 0.389020\n \n \n 19\n 0.276392\n #NUM!\n -1.148664\n \n \n 20\n 0.276392\n #NUM!\n 0.019429\n \n \n 21\n 0.276393\n #NUM!\n -1.299623\n \n \n 22\n 0.276393\n #NUM!\n 0.389019\n \n \n 23\n 0.276393\n #NUM!\n -1.148664\n \n \n 24\n 0.276393\n #NUM!\n 0.019430\n \n \n 25\n 0.276393\n #NUM!\n -1.299622\n \n \n 26\n 0.276393\n #NUM!\n 0.389019\n \n \n 27\n 0.276393\n #NUM!\n -1.148665\n \n \n 28\n 0.276393\n #NUM!\n 0.019430\n \n \n 29\n 0.276393\n #NUM!\n -1.299622\n \n \n 30\n 0.276393\n #NUM!\n 0.389019\n \n \n 31\n 0.276393\n #NUM!\n -1.148665\n \n \n\n\n<p>At \\(C=-1.3\\) there is a clear repeating pattern of four values.</p>\n\n<p>In Excel set row 1 as the value of C starting at -2 and incrementing by say 0.02 up to 0.0. Then run the iterations in columns below each value starting at 0. Extend the columns for perhaps 40 iterations.</p>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/Excel-Formulas-Shown.png\"></p>\n\n<p>Now plot iterations 20-40 (when the values are typically stable) against the value of C.</p>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/Excel-Plot.png\"></p>\n\n<p>I want to plot the real component of C on the x-axis, then imaginary component on the y-axis and the real part of the iterated sequence on the z-axis. 
Where the sequence repeats I\u2019ll plot all points within the sequence which looks to be what was done in the YouTube clip.</p>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/3d-axis.svg\"></p>\n\n<p>I\u2019m sitting here with my new, albeit secondhand, Mac Pro so let\u2019s write this in Swift and do all the calculation and graphics on the GPU using Metal.</p>\n\n<p>The problem is well suited to GPU based calculations with a small kernel running once for each possible set of input coordinates, however the output of a massive sparsely populated three dimensional array seemed unfortunate. Suggesting a resolution of 2048 x 2048 and allowing iterative sequences of up to 1024 gives potentially 4 billion points\u2026 Therefore, I have opted for an output vector/array indexed with a shared atomically-incremental counter.</p>\n\n<p>To use the GPU to perform the calculations the program needs to be written in Metal Shading Language which is a variation on C++, but first the GPU need to be initialised from Swift which for this project is pretty straightforward. We\u2019ll need a buffer for the output vector and another one for the counter:</p>\n\n<div><div><pre><code>vertexBuffer = device.makeBuffer(length: MemoryLayout<Vertex>.stride * 2048 * 2048, options: [])\ncounterBuffer = device.makeBuffer(length: MemoryLayout<UInt>.size, options: [])\n</code></pre></div></div>\n\n<p>Then we create a library within the GPU device where the name parameter exactly matches the MTL function name we want to call</p>\n\n<div><div><pre><code>let library = device.makeDefaultLibrary()\nlet calculate_func = library?.makeFunction(name: \"calculate_func\")\npipeLineState = try device.makeComputePipelineState(function: calculate_func!)\n</code></pre></div></div>\n\n<p>The <code>calculate_func</code> is defined as follows</p>\n\n<div><div><pre><code>kernel void calculate_func(device VertexIn* result,\n uint2 index [[ thread_position_in_grid ]],\n device atomic_uint &counter [[ buffer(1) ]]) {\n\n float bufRe[1024];\n float bufIm[1024];\n\n float Cre = (float(index.x) * 3 / 2048) - 2;\n float Cim = (float(index.y) * 3 / 2048) - 1.5;\n\n float Zre = 0;\n float Zim = 0;\n \n bufRe[0] = 0;\n bufIm[0] = 0;\n\n for (int iteration = 1; (iteration < 1024) && ((Zre * Zre + Zim * Zim) <= 4); iteration++) {\n float ZNre = Zre * Zre - Zim * Zim + Cre;\n Zim = 2 * Zre * Zim + Cim;\n Zre = ZNre;\n \n bufRe[iteration] = Zre;\n bufIm[iteration] = Zim;\n \n for (int i = iteration - 1; i; i--) {\n if ((bufRe[iteration] == bufRe[i]) && (bufIm[iteration] == bufIm[i])) {\n for (; i < iteration; i++) {\n float red = abs(bufIm[i]) * 5;\n float green = abs(bufRe[i]) / 2;\n float blue = 0.75;\n \n uint value = atomic_fetch_add_explicit(&counter, 1, memory_order_relaxed);\n result[value].position = float3(Cre, Cim, bufRe[i]);\n result[value].color = float4(red, green, blue, 1);\n }\n return;\n }\n }\n }\n}\n</code></pre></div></div>\n\n<p>The first section is the standard calculation for \\(Z_{n+1}\\). The nested loop searches back through the previous values to see if we have had this value before. While this should be an exhaustive check of every value, I haven\u2019t done that for performance reasons, but I did leave the check to be the exact floating point value rather than just 2 or 3 decimal places. If there is a match then all the points are copied to the output vector in a pretty colour.</p>\n\n<p>You can see the full code on <a href=\"https://github.com/mtelvers/threeDbrot\">Github</a>.</p>\n\n ",
+20
mte/2020_08_29_raspberry-pi-as-rtsp-source-for-obs-using-vlc.json
+20
mte/2020_08_29_raspberry-pi-as-rtsp-source-for-obs-using-vlc.json
···+"summary": "Using the new Raspberry Pi Imager I\u2019ve installed the latest Raspberry Pi OS Lite (32 bit).",+"content": "<p>Using the new <a href=\"https://www.raspberrypi.org/downloads/\">Raspberry Pi Imager</a> I\u2019ve installed the latest Raspberry Pi OS Lite (32 bit).</p>\n\n<p>Enable ssh by creating a zero length file called ssh on the boot volume</p>\n\n<div><div><pre><code>touch /Volumes/boot/ssh\n</code></pre></div></div>\n\n<p>Create a file <code>/Volumes/boot/wpa_supplicant.conf</code> using your favourite text editor:</p>\n\n<div><div><pre><code>ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev\nupdate_config=1\ncountry=GB\n\nnetwork={\n ssid=\"your SSID\"\n psk=\"xxxxxxxx\"\n key_mgmt=WPA-PSK\n}\n</code></pre></div></div>\n\n<p>Boot the Pi and enable the camera module using <code>raspi-config</code>. You need to reboot before the camera is activated.</p>\n\n<p>Sign in as root and run <code>sudo -Es</code> to get an elevated prompt. Update the the base software to the latest version then install <code>vlc</code>. This step will take a while\u2026</p>\n\n<div><div><pre><code>apt install vlc\n</code></pre></div></div>\n\n<p>Create a script containing this command line</p>\n\n<div><div><pre><code>#!/bin/bash\nraspivid -o - -t 0 -rot 180 -w 1920 -h 1080 -fps 30 -b 2000000 | cvlc -vvv stream:///dev/stdin --sout '#rtp{sdp=rtsp://:8554/stream}' :demux=h264\n</code></pre></div></div>\n\n<p>Test the stream by connecting to ip:8554 using vlc player on the desktop</p>\n\n<div><div><pre><code>rtsp://192.168.1.137:8554/stream\n</code></pre></div></div>\n\n<p>Automate the startup process by creating a service wrapper in <code>/etc/systemd/system/rtsp-stream.service</code> containing the following:</p>\n\n<div><div><pre><code>[Unit]\nDescription=auto start stream\nAfter=multi-user.target\n\n[Service]\nType=simple\nExecStart=/home/pi/rtsp-stream.sh\nUser=pi\nWorkingDirectory=/home/pi\nRestart=on-failure\n\n[Install]\nWantedBy=multi-user.target\n</code></pre></div></div>\n\n<p>Enable the service and then reboot</p>\n\n<div><div><pre><code>systemctl enable rtsp-stream.service\n</code></pre></div></div>\n\n<p>In Open Broadcast Studio (OBS) create a new Media Source and untick the check box for Local File and enter the RTSP URL in the input box.</p>",
+19
mte/2020_10_05_hard-disk-failure.json
+19
mte/2020_10_05_hard-disk-failure.json
···+"content": "<p>Check the status with <code>sudo mdadm --detail /dev/md0</code></p>\n\n<div><div><pre><code>/dev/md0:\n Version : 1.2\n Creation Time : Wed Sep 2 21:55:39 2015\n Raid Level : raid5\n Array Size : 878509056 (837.81 GiB 899.59 GB)\n Used Dev Size : 292836352 (279.27 GiB 299.86 GB)\n Raid Devices : 4\n Total Devices : 4\n Persistence : Superblock is persistent\n\n Update Time : Sun Oct 4 07:35:23 2020\n State : clean, degraded \n Active Devices : 3\n Working Devices : 3\n Failed Devices : 1\n Spare Devices : 0\n\n Layout : left-symmetric\n Chunk Size : 512K\n\nConsistency Policy : resync\n\n Name : plum:0 (local to host plum)\n UUID : 4a462153:dde89a43:0a4dd678:451bb2b4\n Events : 24024\n\n Number Major Minor RaidDevice State\n 0 8 17 0 active sync /dev/sdb1\n 1 8 33 1 active sync /dev/sdc1\n 5 8 49 2 active sync /dev/sdd1\n - 0 0 3 removed\n\n 4 8 65 - faulty /dev/sde1\n</code></pre></div></div>\n\n<p>Check which disks are which <code>sudo lshw -class disk</code>.</p>\n\n\n\n \n \n Mount\n Model\n Description\n \n \n \n \n /dev/sdb\n ST9300603SS\n Seagate Savvio 10 K.3 St9300603ss\n \n \n \u00a0\n MBE2073RC\n Fujitsu MBE2073RC 73.5GB SAS Hard Drive\n \n \n \u00a0\n MBE2073RC\n Fujitsu MBE2073RC 73.5GB SAS Hard Drive\n \n \n /dev/sdc\n ST9300603SS\n Seagate Savvio 10 K.3 St9300603ss\n \n \n /dev/sdd\n ST300MM0006\n Seagate Enterprise Performance 10K HDD ST300MM0006 300 GB\n \n \n /dev/sde\n ST9300603SS\n Seagate Savvio 10 K.3 St9300603ss\n \n \n\n\n<p>The boot drive is a hardware RAID1 using the two 73GB disks. <code>/var</code> made up of the 300GB disks in a software RAID5 configuration.</p>\n\n<p>The ST9300603SS is still available on Amazon but the newer 10k.5 generation equivalent the ST9300605SS is on a same day delivery and it\u2019s cheaper as well!</p>\n\n<p>Remove the disk</p>\n\n<div><div><pre><code>mdadm -r /dev/md0 /dev/sde1\n</code></pre></div></div>\n\n<p>This server does support hot plug but there were some zombie processes which I wanted to clear out and operationally a five minute outage would be fine.</p>\n\n<p>Shutdown the server and replace the disk. New disk (slot 2) during boot:</p>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/perc-bios.jpg\"></p>\n\n<p>After the reboot copy the partition table from one of the existing disks over to the new disk.</p>\n\n<div><div><pre><code>sfdisk -d /dev/sdb | sfdisk /dev/sde\n</code></pre></div></div>\n\n<p>Add the new disk into the array</p>\n\n<div><div><pre><code>mdadm /dev/md0 -a /dev/sde1\n</code></pre></div></div>\n\n<p>Monitor the rebuild process</p>\n\n<div><div><pre><code>watch -n 60 cat /proc/mdstat\n</code></pre></div></div>",
+19
mte/2020_12_26_temper-usb-temperature-sensor.json
+19
mte/2020_12_26_temper-usb-temperature-sensor.json
···+"summary": "These USB sensors are available pretty cheaply from PiHut and Amazon and are great for monitoring the temperature remotely (where you have a Pi).",+"content": "<p>These USB sensors are available pretty cheaply from PiHut and Amazon and\nare great for monitoring the temperature remotely (where you have a Pi).</p>\n\n<p>Install the necessary prerequisites:</p>\n\n<div><div><pre><code>sudo apt install libhidapi-dev/stable cmake bc\n</code></pre></div></div>\n\n<p>There is a <a href=\"https://github.com/edorfaus/TEMPered\">GitHub repository by Frode Austvik</a>:</p>\n\n<blockquote>\n <p>This project is a C implementation of a library and program to read all the\nvarious types of TEMPer thermometer and hygrometer USB devices, as produced by\nRDing Technology and sold under the name PCsensor.</p>\n</blockquote>\n\n<p>Download the software</p>\n\n<div><div><pre><code>git clone https://github.com/edorfaus/TEMPered\n</code></pre></div></div>\n\n<p>And build it and install:</p>\n\n<div><div><pre><code>cd TEMPered\ncmake .\nmake\nsudo cp utils/hid-query /usr/bin\n</code></pre></div></div>\n\n<p>Create a simple script to query the device and display the temperature.</p>\n\n<div><div><pre><code>!/bin/bash\nOUTLINE=/usr/bin/hid-query /dev/hidraw1 0x01 0x80 0x33 0x01 0x00 0x00 0x00 0x00 | grep -A1 ^Response|tail -1\nOUTNUM=echo $OUTLINE|sed -e 's/^[^0-9a-f]*[0-9a-f][0-9a-f] [0-9a-f][0-9a-f] \\([0-9a-f][0-9a-f]\\) \\([0-9a-f][0-9a-f]\\) .*$/0x\\1\\2/'\nHEX4=${OUTNUM:2:4}\nDVAL=$(( 16#$HEX4 ))\nCTEMP=$(bc <<< \"scale=2; $DVAL/100\")\necho date $CTEMP\n</code></pre></div></div>\n\n<p>This works perfectly but it must be executed with <code>sudo</code>, or by first\nrunning <code>chmod 666 /dev/hidraw</code>. This can be automated by creating\n<code>/etc/udev/rules.d/99-hidraw.rules</code> with the content below which creates\nthe <code>/dev</code> node with the appropriate permissions.</p>\n\n<div><div><pre><code>KERNEL==\"hidraw*\", SUBSYSTEM==\"hidraw\", MODE=\"0666\", GROUP=\"root\"\n</code></pre></div></div>\n\n<p>I\u2019ve added a cron job (<code>crontab -e</code>) to record the temperature every 5\nminutes:</p>\n\n<div><div><pre><code>0,5,10,15,20,25,30,35,40,45,50,55 * * * * /home/pi/temp.sh >> /home/pi/temperature.txt\n</code></pre></div></div>",
+19
mte/2021_01_01_normalise-mp3-files.json
+19
mte/2021_01_01_normalise-mp3-files.json
···+"summary": "I have hundreds for MP3 files but the levels aren\u2019t standardised in any way which makes streaming them a bit hit and miss. I can normalise them using AudaCity but I\u2019d really like an automatic way of doing it.",+"content": "<p>I have hundreds for MP3 files but the levels aren\u2019t standardised in any way which makes streaming them a bit hit and miss. I can normalise them using <a href=\"https://www.audacityteam.org/\">AudaCity</a> but I\u2019d really like an automatic way of doing it.</p>\n\n<p>Install MP3GAIN</p>\n\n<div><div><pre><code>apt install mp3gain\n</code></pre></div></div>\n\n<p>It doesn\u2019t seem to run for some reason as it can\u2019t find the library.</p>\n\n<div><div><pre><code>==617==ASan runtime does not come first in initial library list; you should either link runtime to your application or manually preload it with LD_PRELOAD.\n</code></pre></div></div>\n\n<p>Set <code>LD_PRELOAD</code></p>\n\n<div><div><pre><code>export LD_PRELOAD=/usr/lib/arm-linux-gnueabihf/libasan.so.4\n</code></pre></div></div>\n\n<p>Now it works!</p>\n\n<div><div><pre><code>mp3gain -e -c -r *.mp3\n</code></pre></div></div>",
+20
mte/2021_01_06_raspberry-pi-camera-with-m12-lens.json
+20
mte/2021_01_06_raspberry-pi-camera-with-m12-lens.json
···+"summary": "I really need a good lens on my Raspberry PI camera to use it with OBS from a decent distance. The new high resolution Rasperberry PI cameras look excellent but they also come with a heafty price tag which I just can\u2019t justify.",+"content": "<p>I really need a good lens on my Raspberry PI camera to use it with OBS from a decent distance. The new high resolution Rasperberry PI cameras look excellent but they also come with a heafty price tag which I just can\u2019t justify.</p>\n\n<blockquote>\n <p>First off, the mounting holes on both v1 and v2 RPi cameras are on 21 mm centers, so the 20 mm spacing of the M12 mount you link isn\u2019t a perfect fit. Depending on your mounting screw size, you may still be able to force it. Second, you have to manually cut or file down a notch in the M12 mount for the micro-flex cable that comes out of the camera module. That isn\u2019t too hard, but if you want, there is also a M12 mount specifically designed for the RPi cameras, with a notch already.</p>\n\n <p>The v1 and v2 sensor sizes are the same, the so-called 1/4-inch format. On V1 the lens focal length is f=3.6mm with Angle of View: 54 x 41 degrees and on V2 it is f=3.0mm with Angle of View: 62.2 x 48.8 degrees [1]. Note the angle of view is quoted at full-frame; remember some video modes use a cropped subset of the full frame. This is a moderately wide angle lens. If you double the focal length, you\u2019ll get half the field of view. If you get a 8mm lens that\u2019s a moderate telephoto, and a 16mm lens is definitely telephoto. I\u2019ve tried a number of cheap M12 lenses that work \u201cok\u201d but don\u2019t expect perfectly sharp images with the tiny 1.4 or 1.1 micron pixels these camera sensors use. Lower f-number lenses are \u201cfaster\u201d (let in more light) but will have more shallow depth of field and more blurry overall. You will see f/1.4 or lower sold for use in low light, but I have not had good images with those; I would recommend f/2.0 or above if you want decent resolution.</p>\n\n <p><a href=\"https://www.raspberrypi.org/forums/viewtopic.php?t=150344#p988445\">https://www.raspberrypi.org/forums/viewtopic.php?t=150344#p988445</a></p>\n</blockquote>\n\n<p>With that as the inspiration I bought a pack of ten M12 lens adapters from Amazon for \u00a35 and started out by creating a notch for the cable. While the 20mm spacing wasn\u2019t ideal I have found some variation in hole positions on the PCB and by using thin M2 bolts I was able to <em>force</em> them.</p>\n\n<p>I removed the lens in a rather destructive way from the front of the camera by cutting around the raised area on three sides with a craft knife. It wasn\u2019t pretty but it did the job.</p>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/pi-camera-m12-1.jpg\"></p>\n\n<p>On the first camera I modified I went on to remove the IR filter by gently cutting it across the diagonal with side cutters. 
Surprisingly it popped off without too much effort leaving this.</p>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/pi-camera-m12-2.jpg\"></p>\n\n<p>For my application, removing the IR filter was a mistake as (tungsten) lights and candles produce lots of infrared!</p>\n\n<p>I mounted the M12 adapters on 3mm plywood with short M2 bolt screwed in from the front.</p>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/pi-camera-m12-3.jpg\"></p>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/pi-camera-m12-4.jpg\"></p>\n\n<p>I had an old Foscam WiFi camera which has an M12 lens marked as <em>f=2.8mm</em>. This pretty much matched the field of view I got from the camera\u2019s native lens.</p>\n\n<p>I have had good results with <em>f=8mm</em>, <em>f=15mm</em> and <em>f=25mm</em> lens as well as cheap zoom lens offering a range of <em>f=3mm</em> to <em>f=12mm</em>. It\u2019s curious that on Amazon a focal length of 8mm is typically sold as <em>wide angle</em> rather than telephoto! What I really notice is that the depth of field becomes increasingly narrow as the focal length increases.</p>\n\n<p>I installed Raspberry Pi OS Lite using the Pi Imager and enabled SSH before removing the SD card.</p>\n\n<p>After assembling the unit check that the camera is connected up and enabled with <code>vcgencmd get_camera</code></p>\n\n<div><div><pre><code>supported=1 detected=1\n</code></pre></div></div>\n\n<p><code>raspivid</code> can be configured to send an h.264 stream, but it exits when the connection drops. Therefore, I have rolled <code>raspivid</code> as a service so systemd will restart it each time.</p>\n\n<p>Create <code>/etc/systemd/system/stream.service</code> containing</p>\n\n<div><div><pre><code>[Unit]\nDescription=auto start stream\nAfter=multi-user.target\n\n[Service]\nType=simple\nExecStart=/usr/bin/raspivid -v -fps 30 -md 2 -n -ih -t 0 -l -stm -fl -o tcp://0.0.0.0:5001\nUser=pi\nWorkingDirectory=/home/pi\nRestart=always\n\n[Install]\nWantedBy=multi-user.target\n</code></pre></div></div>\n\n<p>Enable and start the service as follows:</p>\n\n<div><div><pre><code>systemctl enable stream\nservice stream start\n</code></pre></div></div>\n\n<p>You can open the stream with VLC by using the address <code>tcp/h264://192.168.1.88:5001</code> which is useful for testing.</p>\n\n<p>Finally in OBS connect add a media source <code>tcp://192.168.0.88:5001</code>.</p>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/2_8mm.png\" title=\"f=2.8mm\">\n<img alt=\"\" src=\"https://www.tunbury.org/images/8mm.png\" title=\"f=8mm\">\n<img alt=\"\" src=\"https://www.tunbury.org/images/16mm.png\" title=\"f=16mm\">\n<img alt=\"\" src=\"https://www.tunbury.org/images/22mm.png\" title=\"f=22mm\"></p>\n\n<h1>Parts list</h1>\n\n\n\n \n \n Part\n Cost\n \n \n \n \n <a href=\"https://www.amazon.co.uk/Raspberry-Pi-Model-Quad-Motherboard/dp/B01CD5VC92\">Pi 3B</a>\n \u00a334\n \n \n <a href=\"https://www.amazon.co.uk/gp/product/B07WCGY2QY/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&psc=1\">PoE Splitter - 2 pack</a>\n \u00a317\n \n \n <a href=\"https://www.amazon.co.uk/gp/product/B07ZZ2K7WP/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&psc=1\">5MP Camera Module - 2 pack</a>\n \u00a39\n \n \n <a href=\"https://www.amazon.co.uk/gp/product/B08FDVYC98/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&psc=1\">Zoom lens</a>\n \u00a310\n \n \n <a href=\"https://www.amazon.co.uk/gp/product/B00R1J42T8/ref=ppx_yo_dt_b_asin_title_o00_s00?ie=UTF8&psc=1\">M12 Mount - 10 pack</a>\n \u00a35\n \n \n <a 
href=\"https://www.amazon.co.uk/gp/product/B075QMCYZM/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&psc=1\">3mm plywood - 25 pack</a>\n \u00a324\n \n \n <a href=\"https://www.amazon.co.uk/gp/product/B003WIRFD2/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&psc=1\">SD Card</a>\n \u00a33.70\n \n \n\n\n<p>A single camera would cost \u00a362.</p>",
+20
mte/2021_04_28_mini-itx-as-windows-2008-server.json
+20
mte/2021_04_28_mini-itx-as-windows-2008-server.json
···+"summary": "Unfortunately without a DVD drive and with no capability to boot from USB I\u2019m struggling to get a clean OS on my Mini ITX machine. The internal drive is IDE and I don\u2019t have any other machines with IDE around and I don\u2019t know the password for the installed OS.",+"content": "<p>Unfortunately without a DVD drive and with no capability to boot from USB I\u2019m struggling to get a clean OS on my Mini ITX machine. The internal drive is IDE and I don\u2019t have any other machines with IDE around and I don\u2019t know the password for the installed OS.</p>\n\n<p>Install Windows 2008 x86 Server (with GUI) in a VM</p>\n\n<p>Turn on Remote Desktop and turn off the firewall</p>\n\n<p>Add Windows Server role WDS and AD DS</p>\n\n<p>Set static IP address 192.168.10.10/24 DNS 127.0.0.1</p>\n\n<p>Set local administrator password to a complex password</p>\n\n<p>Run <code>dcpromo</code>, set domain to montdor.local.</p>\n\n<p>Install DHCP and follow the wizard to create a scope 192.168.10.128\u2013192.168.10.254. DNS 192.168.10.10. No router.</p>\n\n<p>Configure WDS using the wizard</p>\n\n<ul>\n <li>Do not listen on port 67</li>\n <li>Configure DHCP option 60</li>\n <li>Respond to all clients</li>\n</ul>\n\n<p>Switch to the Windows AIK for Windows 7 ISO <code>KB3AIK_EN.ISO</code> and install Windows Automated Installation Kit (to get Windows PE)</p>\n\n<p>In WDS, add the WinPE boot WIM as a boot image. The WIM is in <code>C:\\Program Files\\Windows AIK\\Tools\\PETools\\x86\\winpe.wim</code></p>\n\n<p>Copy the Windows 2008 Server Standard x86 DVD to <code>c:\\Win2K8x86</code>. Create a share of the same name.</p>\n\n<p>Windows 2008 Server installation requires 512MB of RAM but my computer only has 256MB and only reports 248 after the video RAM is subtracted.</p>\n\n<p>Hack the Windows setup program to make it run anyway:</p>\n\n<p>Find the file <code>WINSETUP.DLL</code> in the sources folder and using as hex editor such as <a href=\"http://mh-nexus.de/en/hxd/\">HxD</a>, search for the hex string <code>77 07 3D 78 01</code> and replace it with <code>E9 04 00 00 00</code>.</p>\n\n<p>Now Windows really did need 512MB of RAM: setup fails with error <code>0xE0000100</code> caused by insufficient memory. Therefore, create a partition and then a swap file.</p>\n\n<p>Open and run the following to create a working drive:</p>\n\n<div><div><pre><code>SELECT DISK 0\nCLEAN\nCREATE PART PRIMARY\nSELECT VOLUME 0\nASSIGN\nFORMAT FS=NTFS QUICK\n</code></pre></div></div>\n\n<p>Create a paging file</p>\n\n<div><div><pre><code>wpeutil createpagefile /path:c=\\pf.sys\n</code></pre></div></div>\n\n<p>Now run Windows Setup.</p>\n\n<p>Download Sil3124 driver for Windows 7 x86. 
Copy it to a network share and mount it from the Windows 2008 Server and run:</p>\n\n<div><div><pre><code>pnputil -i -a *.inf\n</code></pre></div></div>\n\n<p>Then use DISKPART.EXE again, similar to above</p>\n\n<div><div><pre><code>SELECT DISK 1\nCREATE PART PRI\nSELECT VOLUME 1\nASSIGN\nFORMAT FS=NTFS QUICK\n</code></pre></div></div>\n\n<p>Now we need Windows Updates I suppose</p>\n\n<div><div><pre><code>cscript c:\\windows\\system32\\scregedit.wsf /au 4\nnet stop wuauserv\nnet start wuauserv\nwuauclt /detectnow\n</code></pre></div></div>\n\n<p>Enable Remote Desktop with</p>\n\n<div><div><pre><code>cscript c:\\windows\\system32\\scregedit.wsf /ar 0\n</code></pre></div></div>\n\n<p>Create a share</p>\n\n<div><div><pre><code>net share sharename=d:\\share /grant:everyone,full\n</code></pre></div></div>\n\n<p>Make it visible</p>\n\n<div><div><pre><code>netsh firewall set service fileandprint enable\n</code></pre></div></div>",
+20
mte/2021_05_25_wordpress-to-jekyll-test.json
+20
mte/2021_05_25_wordpress-to-jekyll-test.json
···+"summary": "Install the Wordpress plugins UpdraftPlus. Create a new WordPress site and install the UpdraftPlus plugin and restore the database.",+"content": "<p>Install the Wordpress plugins <em>UpdraftPlus</em>. Create a new WordPress site and install the <em>UpdraftPlus</em> plugin and restore the database.</p>\n\n<p>Use the following MySQL commands to fix the database</p>\n\n<div><div><pre><code>UPDATE wp_options SET option_value = replace(option_value, 'cccbr.org.uk', 'cccbr.tunbury.org') WHERE option_name = 'home' OR option_name = 'siteurl';\nUPDATE wp_posts SET guid = replace(guid, 'cccbr.org.uk','cccbr.tunbury.org');\nUPDATE wp_posts SET post_content = replace(post_content, 'cccbr.org.uk', 'cccbr.tunbury.org');\nUPDATE wp_postmeta SET meta_value = replace(meta_value,'cccbr.org.uk','cccbr.tunbury.org');\n</code></pre></div></div>\n\n<p>Set user password (mainly to make it different from the original site)</p>\n\n<div><div><pre><code>UPDATE `wp_users` SET `user_pass`= MD5('yourpassword') WHERE `user_login`='melvers';\n</code></pre></div></div>\n\n<p>Install <em>Jekyll Exporter</em> plugin, activate it and then create the export using Tools -> Export to Jekyll.</p>\n\n<p>Create a new Jekyll site by running</p>\n\n<div><div><pre><code>jekyll new c:\\cccbr\n</code></pre></div></div>\n\n<p>Extract <code>jekyll-export.zip</code> into the <code>c:\\cccbr</code> folder but don\u2019t overwrite <code>_config.yml</code></p>\n\n<div><div><pre><code>jekyll serve\n</code></pre></div></div>\n\n<p>Visit <a href=\"http://localhost:4000\">http://localhost:4000</a> to see how it looks.</p>\n\n<div><div><pre><code>$mdFiles = Get-ChildItem . *.md -rec\nforeach ($file in $mdFiles) {\n (Get-Content $file.PSPath) |\n Foreach-Object { $_ -replace \"&#8211;\", \"-\" } |\n Foreach-Object { $_ -replace \"&#038;\", \"&\" } |\n Foreach-Object { $_ -replace \"&#8217;\", \"&apos;\" } |\n Foreach-Object { $_ -replace \"cccbr.tunbury.org/wp-content/uploads/\", \"cccbr.org.uk/wp-content/uploads/\" } |\n Foreach-Object { $_ -replace \"cccbr.tunbury.org/\", \"/\" } |\n Foreach-Object { $_ -replace \"layout: page\", \"layout: single\" } |\n Foreach-Object { $_ -replace \"layout: post\", \"layout: single\" } |\n Set-Content $file.PSPath\n}\n</code></pre></div></div>\n\n<p>Edit <code>GemFile</code> to the new theme by commenting out <code>minima</code> and adding <code>minimal-mistakes</code>:</p>\n\n<div><div><pre><code># gem \"minima\", \"~> 2.5\"\ngem \"minimal-mistakes-jekyll\"\n</code></pre></div></div>\n\n<p>Run <code>bundle</code> in the folder to download the dependancies. 
Edit <code>_config.yml</code> and set the theme</p>\n\n<div><div><pre><code>theme: minimal-mistakes-jekyll\n</code></pre></div></div>\n\n<p>Create the top level menu by creating <code>_data/navigation.yml</code>:</p>\n\n<div><div><pre><code>main:\n- title: \"About\"\n url: /about\n- title: \"Bells and Ringing\"\n url: /bellringing\n</code></pre></div></div>\n\n<p>Create secondary menus with the same <code>_data/navigation.yml</code> file such as:</p>\n\n<div><div><pre><code>about:\n- title: About\n children:\n - title: \"About the Council\"\n url: /about\n - title: \"Continuing CCCBR Reforms\"\n url: /about/reforms/\n - title: \"Governance\"\n url: /about/governance/\n</code></pre></div></div>\n\n<p>Then on the appropriate pages set the front matter:</p>\n\n<div><div><pre><code>sidebar:\n nav: \"about\"\ntoc: true\n</code></pre></div></div>\n\n<p>Create a custom skin by duplicating and renaming a file in <code>_sass\\minimal-mistakes\\skins</code>. I created <code>cccbr.scss</code> and then in <code>_config.yml</code> applied it like this:</p>\n\n<div><div><pre><code>theme: minimal-mistakes-jekyll\nminimal_mistakes_skin: \"cccbr\"\n</code></pre></div></div>\n\n<p>Create a repository on GitHub.</p>\n\n<div><div><pre><code>git init\ngit add .\ngit commit -m \"initial commit\"\ngit remote add origin https://github.com/mtelvers/cccbr.git\ngit push -u origin master\n</code></pre></div></div>\n\n<p>On GitHub, under the repo\u2019s Settings \\ Pages, publish the site using the master branch.</p>\n\n<p>Changes to make it work on GitHub:</p>\n\n<ol>\n <li>Update <code>Gemfile</code> and then run <code>bundle</code>.</li>\n <li>Update all the posts and pages to use the <code>single</code> template.</li>\n <li>Update <code>_config.yml</code> to set baseurl to match Git repository name.</li>\n <li>Update <code>_config.yml</code> to change remote theme.</li>\n</ol>\n\n<p>Remove unwanted front matter tags with this Ruby script</p>\n\n<div><div><pre><code>require \"yaml\"\n\nYAML_FRONT_MATTER_REGEXP = /\\A(---\\s*\\n.*?\\n?)^((---|\\.\\.\\.)\\s*$\\n?)/m\n\nDir.glob('**/*.md', File::FNM_DOTMATCH) do |f|\n puts f\n\n file = File.open(f)\n source = file.read\n file.close\n\n if source =~ YAML_FRONT_MATTER_REGEXP\n data, content = YAML.load($1), Regexp.last_match.post_match\n [\"id\", \"guid\",\n \"ep_tilt_migration\",\n \"classic-editor-remember\",\n \"ssb_old_counts\",\n \"ssb_total_counts\",\n \"ssb_cache_timestamp\",\n \"colormag_page_layout\",\n \"wp_featherlight_disable\",\n \"catchbox-sidebarlayout\",\n \"complete_open_graph\"].each {|x| data.delete(x)}\n\n file = File.open(f, \"w\")\n YAML.dump(data, file)\n file.puts(\"---\", content)\n file.close\n end\nend\n</code></pre></div></div>",
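<p>Before pushing, it is worth previewing the site with the same baseurl that GitHub Pages will apply. A small sketch; the <code>/cccbr</code> prefix matches the repository name used above, and passing an empty string tests the site without the prefix:</p>

<div><div><pre><code>bundle install
bundle exec jekyll serve --baseurl /cccbr
</code></pre></div></div>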
+20
mte/2021_06_22_syncthing-on-openbsd.json
+20
mte/2021_06_22_syncthing-on-openbsd.json
···+"content": "<h2>Network Installation of OpenBSD</h2>\n\n<p>Setup a machine to facilitate network installation of OpenBSD. Download the 6.9 installation ISO from the <a href=\"https://www.openbsd.org/faq/faq4.html#Download\">OpenBSD website</a> and install it in a virtual machine. I\u2019m using VMware Fusion and have a dedicated LAN port connected to the remote machine.</p>\n\n<p>Create <code>hostname.vic0</code> containing the following and not <code>dhcp</code>:</p>\n\n<div><div><pre><code>inet 192.168.2.1 255.255.255.0 NONE\n</code></pre></div></div>\n\n<h3>DHCPD</h3>\n\n<p>Create <code>/etc/dhcpd.conf</code> with the key attributes:</p>\n\n<ul>\n <li><code>filename</code> for the boot image name, and</li>\n <li><code>next-server</code> for the TFTP server address.</li>\n</ul>\n\n<p>I have added a host section for the specific MAC of my machine but for this one-time build process it could be a global option.</p>\n\n<div><div><pre><code>subnet 192.168.2.0 netmask 255.255.255.0 {\n option routers 192.168.2.1;\n range 192.168.2.32 192.168.2.127;\n \n host mini-itx {\n hardware ethernet 00:40:63:d5:6f:4f;\n filename \"auto_install\";\n next-server 192.168.2.1;\n option host-name \"mini-itx\"\n }\n}\n</code></pre></div></div>\n\n<h3>TFTPD</h3>\n\n<p>Create the default TFTP root folder and configuration folder</p>\n\n<div><div><pre><code>mkdir -p /tftpboot/etc\n</code></pre></div></div>\n\n<p>Download <a href=\"http://ftp.openbsd.org/pub/OpenBSD/6.9/i386/pxeboot\">pxeboot</a> and <a href=\"http://ftp.openbsd.org/pub/OpenBSD/6.9/i386/bsd.rd\">bsd.rd</a> and put them in <code>/tftpboot</code>.</p>\n\n<p>Create a symbolic link for <code>auto_install</code></p>\n\n<div><div><pre><code>ln -s pxeboot /tftpboot/auto_install\n</code></pre></div></div>\n\n<p>Create <code>/tftpboot/etc/boot.conf</code> containing the following</p>\n\n<div><div><pre><code>boot tftp:/bsd.rd\n</code></pre></div></div>\n\n<h3>HTTPD</h3>\n\n<p>Create <code>/etc/httpd.conf</code> to share the folder <code>/var/www/htdocs</code></p>\n\n<div><div><pre><code>#[ MACROS ]\next_ip = \"*\"\n\n# [ GLOBAL CONFIGURATION ]\n# none\n\n# [ SERVERS ]\nserver \"default\" {\n listen on $ext_ip port 80\n root \"/htdocs\"\n}\n\n# [ TYPES ]\ntypes {\n include \"/usr/share/misc/mime.types\"\n}\n</code></pre></div></div>\n\n<p>Stage the installation files on a local web server by copying them from the boot ISO downloaded at the start:</p>\n\n<div><div><pre><code>mount /dev/cd0a /mnt/\nmkdir -p /var/www/htdocs/pub/OpenBSD\ncp -rv /mnt/6.9/ /var/www/htdocs/pub/OpenBSD/6.9\nls -l /var/www/htdocs/pub/OpenBSD/6.9 > /var/www/htdocs/pub/OpenBSD/6.9/index.txt\n</code></pre></div></div>\n\n<p>Create <code>/var/www/htdocs/install.conf</code> containing the following automatic confgiuration answer file</p>\n\n<div><div><pre><code>Password for root = Password\nSetup a user = user\nPassword for user = Password\nPublic ssh key for user = ssh-rsa AAAA...ZV user@Marks-Mac-mini.local\nWhich disk is the root disk = wd0\nWhat timezone are you in = Europe/London\nUnable to connect using https. 
Use http instead = yes\nLocation of sets = http\nHTTP Server = 192.168.2.1\nSet name(s) = -all bsd* base* etc* man* site* comp*\nContinue without verification = yes\n</code></pre></div></div>\n\n<p>Enable the services using <code>rcctl</code> which edits configuration file <code>rc.conf.local</code> add the appropriate <code>service_flags=\"\"</code> lines</p>\n\n<div><div><pre><code>rcctl enable dhcpd\nrcctl enable tftpd\nrcctl enable httpd\n</code></pre></div></div>\n\n<p>The remote system should now boot from the network and install OpenBSD hands free!</p>\n\n<p>After the new system boots <code>su</code> and then overwrite <code>/etc/installurl</code> with a standard value</p>\n\n<div><div><pre><code>echo https://ftp.openbsd.org/pub/OpenBSD > /etc/installurl\n</code></pre></div></div>\n\n<h2>RAID5 Volume</h2>\n\n<p>Create a RAID5 volume over the four attached disks</p>\n\n<div><div><pre><code>for a in sd0 sd1 sd2 sd3 ; do fdisk -iy $a ; done\nfor a in sd0 sd1 sd2 sd3 ; do printf \"a\\n\\n\\n\\nRAID\\nw\\nq\\n\" | disklabel -E $a ; done\nbioctl -c 5 -l /dev/sd0a,/dev/sd1a,/dev/sd2a,/dev/sd3a softraid0\n</code></pre></div></div>\n\n<p>Partition and format the volume</p>\n\n<div><div><pre><code>fdisk -iy sd4\nprintf \"a\\n\\n\\n\\n4.2BSD\\nw\\nq\\n\" | disklabel -E sd4\nnewfs /dev/rsd4a \n</code></pre></div></div>\n\n<h2>Syncthing</h2>\n\n<p>Install <code>syncthing</code> using</p>\n\n<div><div><pre><code>pkg_add syncthing\n</code></pre></div></div>\n\n<p>Edit <code>/etc/login.conf</code> and append:</p>\n\n<div><div><pre><code>syncthing:\\\n :openfiles-max=60000:\\ \n :tc=daemon:\n</code></pre></div></div>\n\n<p>Rebuild the file</p>\n\n<div><div><pre><code>cap_mkdb /etc/login.conf\necho \"kern.maxfiles=80000\" >> /etc/sysctl.conf\n</code></pre></div></div>\n\n<p>Edit <code>/etc/rc.d/syncthing</code> and update the <code>daemon_flags</code>:</p>\n\n<div><div><pre><code>daemon_flags=\"-no-browser -gui-address=0.0.0.0:8384\"\n</code></pre></div></div>\n\n<p>Edit <code>/etc/fstab</code> to mount the drive</p>\n\n<div><div><pre><code>/dev/sd4a /var/syncthing ffs rw,softdep 0 0\nchown -R _syncthing:_syncthing /var/syncthing\n</code></pre></div></div>\n\n<p>Enable and start syncthing:</p>\n\n<div><div><pre><code>rcctl enable syncthing\nrcctl start syncthing\n</code></pre></div></div>",
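<p>After a reboot it is reassuring to check that the softraid volume assembled and that syncthing came up. A quick sketch using the standard OpenBSD tools:</p>

<div><div><pre><code>bioctl sd4                     # softraid volume state should be Online
mount | grep /var/syncthing    # confirm the RAID5 volume is mounted
rcctl check syncthing          # confirm the daemon is running
</code></pre></div></div>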
+20
mte/2021_07_14_jitsis.json
+20
mte/2021_07_14_jitsis.json
···+"summary": "I need to remotely control OBS during a live stream. This is quite simply achieved via VNC but I need the see and hear what\u2019s going on at the same time. VNC doesn\u2019t support audio on the free license and watching the YouTube stream is out of the question as it\u2019s nearly 30 seconds behind real time.",+"content": "<p>I need to remotely control OBS during a live stream. This is quite simply achieved via VNC but I need the see and hear what\u2019s going on at the same time. VNC doesn\u2019t support audio on the free license and watching the YouTube stream is out of the question as it\u2019s nearly 30 seconds behind real time.</p>\n\n<p>As the computer has a USB web camera and microphone attached I thought of a private LAN based v/c solution. A quick Internet search found a <a href=\"https://www.reddit.com/r/sysadmin/comments/gmray4/recommendation_for_free_lanonly_video/\">post on Redit</a> talking about Jitsi.</p>\n\n<p>After installing a Ubunutu 20.04 server VM, I followed the Jitsi <a href=\"https://jitsi.github.io/handbook/docs/devops-guide/devops-guide-quickstart\">Self-Hosting Guide</a> which takes just a few minutes. Since it was a private LAN implementation I skipped the optional FQDN section of the instructions and used the self-signed certificate.</p>\n\n<p>Connecting to the DHCP assigned address via over https brought the expected certificate warnings but I was able to create and join a room. The camera and microphone did not start. Every 30 seconds or so this message appeared about reconnecting:</p>\n\n<p><img alt=\"Jitsi Disconnected\" src=\"https://www.tunbury.org/images/jitsi-disconnected.png\"></p>\n\n<p>The fix to this was to use a host name not an IP address. On Windows machines edit <code>C:\\Windows\\System32\\Drivers\\etc\\hosts</code> and on a Mac edit <code>/etc/hosts</code>. In both cases I added the DHCP issued IP address and hostname of the Ubuntu server:</p>\n\n<div><div><pre><code>192.168.1.76\tjitsi\n</code></pre></div></div>\n\n<p>Connecting to Jitsu using <a href=\"https://jitsi\">https://jitsi</a> and skipping passed the certificate warnings brought me to a working implementation. Certainly impressive and easy to setup!</p>",
+20
mte/2021_07_27_audio-stream.json
+20
mte/2021_07_27_audio-stream.json
···+"summary": "Now singing has returned to churches I need to add an additional microphone to pickup the choir. I\u2019d like this to be completely separate to the Church PA system to avoid playing this sound out through the speakers. A Raspberry PI Zero W with a USB sound card looks to be a good option to capture the audio and stream it to OBS.",+"content": "<p>Now singing has returned to churches I need to add an additional microphone to pickup the choir. I\u2019d like this to be completely separate to the Church PA system to avoid playing this sound out through the speakers. A Raspberry PI Zero W with a USB sound card looks to be a good option to capture the audio and stream it to OBS.</p>\n\n<p>Run <code>arecord -l</code> to get a list of available mixer devices. In my case my USB audio device is #2.</p>\n\n<p>Set the mixer level for the microphone:</p>\n\n<div><div><pre><code>amixer -c 2 -q set 'Mic',0 100%\n</code></pre></div></div>\n\n<p>Install <code>ffmpeg</code> which pulls down around 750MB on a lite installation.</p>\n\n<div><div><pre><code>apt install ffmpeg\n</code></pre></div></div>\n\n<p>Run <code>ffmpeg</code> to create the stream specifying the mixer device name as the input <code>-i</code></p>\n\n<div><div><pre><code>ffmpeg -ar 44100 -ac 1 -f alsa -i plughw:2,0 -f wav -listen 1 tcp://0.0.0.0:5002\n</code></pre></div></div>\n\n<p>You can play this stream using VideoLAN\u2019s VLC using <em>Open Network Stream</em> <code>tcp/wav://192.168.1.104:5002</code> where 192.168.1.104 is the IP address of the PI.</p>\n\n<p>In OBS create a new Media Source and set the network buffer to zero (to avoid excessive delay) and turn off <em>Restart playback when source becomes active</em> which keeps the stream alive even when it\u2019s not the active scene:</p>\n\n<div><div><pre><code>tcp://192.162.1.104:5002\n</code></pre></div></div>\n\n<p>Wrap the ffmpeg command as a service by creating <code>/etc/systemd/system/stream.service</code> containing</p>\n\n<div><div><pre><code>[Unit]\nDescription=auto start stream\nAfter=multi-user.target\n\n[Service]\nType=simple\nExecStartPre=/usr/bin/amixer -c 2 -q set 'Mic',0 100%\nExecStart=/usr/bin/ffmpeg -ar 44100 -ac 1 -f alsa -i plughw:2,0 -f wav -listen 1 tcp://0.0.0.0:5002\nUser=pi\nWorkingDirectory=/home/pi\nRestart=always\n\n[Install]\nWantedBy=multi-user.target\n</code></pre></div></div>\n\n<p>Enable and start the service as follows:</p>\n\n<div><div><pre><code>systemctl enable stream\nservice stream start\n</code></pre></div></div>\n\n<h2>Practical Issues</h2>\n\n<p>After successfully testing using a Raspberry PI Zero W using USB audio dongle with WiFi connecting over a distance of 30m in an empty church I decided to use it as a secondary device in live broadcast. This was immediately scuppered on the day as I was unable to maintain the WiFi link. I put this down to the interference created by the in house PA system, induction loop, and the mobile phones of the congregation.</p>\n\n<p>I added a UFL connector the Pi Zero W as described by <a href=\"https://www.briandorey.com/post/raspberry-pi-zero-w-external-antenna-mod\">Briain Dorey</a>. 
Using this with a 5dB D-Link antenna did marginally increase the antenna signal level and quality of most networks but not sufficiently to make the difference.</p>\n\n<h3>Internal antenna</h3>\n\n<div><div><pre><code>pi@raspberrypi:~ $ sudo iwlist wlan0 scan | grep 'Cell\\|Signal' | sed '$!N;s/\\n/ /'\n Cell 01 - Address: 6C:xx:xx:xx:xx:10 Quality=69/70 Signal level=-41 dBm \n Cell 02 - Address: 5C:xx:xx:xx:xx:9E Quality=26/70 Signal level=-84 dBm \n Cell 03 - Address: 5E:xx:xx:xx:xx:9F Quality=27/70 Signal level=-83 dBm \n Cell 04 - Address: 9C:xx:xx:xx:xx:62 Quality=35/70 Signal level=-75 dBm \n Cell 05 - Address: 78:xx:xx:xx:xx:8E Quality=21/70 Signal level=-89 dBm \n Cell 06 - Address: 9C:xx:xx:xx:xx:72 Quality=37/70 Signal level=-73 dBm \n Cell 07 - Address: 80:xx:xx:xx:xx:6A Quality=17/70 Signal level=-93 dBm \n</code></pre></div></div>\n\n<h3>External antenna</h3>\n\n<div><div><pre><code>pi@raspberrypi:~ $ sudo iwlist wlan0 scan | grep 'Cell\\|Signal' | sed '$!N;s/\\n/ /'\n Cell 01 - Address: 6C:xx:xx:xx:xx:10 Quality=70/70 Signal level=-29 dBm \n Cell 02 - Address: 5C:xx:xx:xx:xx:9E Quality=22/70 Signal level=-88 dBm \n Cell 03 - Address: 5E:xx:xx:xx:xx:9F Quality=23/70 Signal level=-87 dBm \n Cell 04 - Address: 9C:xx:xx:xx:xx:62 Quality=41/70 Signal level=-69 dBm \n Cell 05 - Address: 78:xx:xx:xx:xx:8E Quality=30/70 Signal level=-80 dBm \n Cell 06 - Address: 9C:xx:xx:xx:xx:72 Quality=41/70 Signal level=-69 dBm \n Cell 07 - Address: 80:xx:xx:xx:xx:6A Quality=24/70 Signal level=-86 dBm \n</code></pre></div></div>\n\n<p>Switching to a Raspberry PI 3 gave easy access to an Ethernet port without resorting to a USB hub. Following that there were no further connection issues!</p>\n\n<p><code>FFMPEG</code> can also create an MP3 stream rather than a WAV stream by simply changing the output format <code>-f mp3</code></p>\n\n<div><div><pre><code>/usr/bin/ffmpeg -ar 44100 -ac 1 -f alsa -i plughw:2,0 -f mp3 -listen 1 tcp://0.0.0.0:5002\n</code></pre></div></div>\n\n<p>The Raspberry PI 3 didn\u2019t really have sufficient processing capacity to keep up with the MP3 encoding. Switch to MP2, <code>-f mp2</code>, reduced the processor requirement significantly with no noticeable change in quality.</p>",
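<p>Before wrapping ffmpeg in a service it can save time to confirm that ALSA really can capture from the USB card. A short test recording as a sketch, assuming the same card number 2 as above:</p>

<div><div><pre><code>arecord -L | head                                  # list the ALSA PCM names
arecord -D plughw:2,0 -f S16_LE -r 44100 -c 1 -d 5 test.wav
aplay test.wav
</code></pre></div></div>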
+20
mte/2021_08_16_ratchet-adapter.json
+20
mte/2021_08_16_ratchet-adapter.json
···+"summary": "I want to electrically drive this ratchet mechanism to avoid the manual labour of turning it by hand. I found a motor with a 1600:1 gearbox on eBay (shipping from China of course) which looks perfect, however it has a 10mm diameter keyed output shaft which doesn\u2019t nicely couple to my 3/4\u201d square ratchet shaft.",+"content": "<p>I want to electrically drive this ratchet mechanism to avoid the manual labour of turning it by hand. I found a motor with a 1600:1 gearbox on eBay (shipping from China of course) which looks perfect, however it has a 10mm diameter keyed output shaft which doesn\u2019t nicely couple to my 3/4\u201d square ratchet shaft.</p>\n\n<p><img alt=\"Ratchet with pipe\" src=\"https://www.tunbury.org/images/ratchet-with-pipe.png\"></p>\n\n<p>From the photo it is clear that a 1\u201d steel tube fits reasonably well over the shaft. A wooden plug and a little bit of brute force provided a flat surface which was pre-drilled and a flang screwed on.</p>\n\n<p><img alt=\"Wooden block version\" src=\"https://www.tunbury.org/images/wooden-block.png\"></p>\n\n<p>This worked fairly well except that the grub screw on the flang was insufficent to withstand the forces required. Therefore a keyway was cut into the flang to prevent slipping.</p>\n\n<p><img alt=\"Flang with keyway\" src=\"https://www.tunbury.org/images/flang-key-1.png\"></p>\n\n<p>And a key was made to fit.</p>\n\n<p><img alt=\"Flange with key\" src=\"https://www.tunbury.org/images/flang-key-2.png\"></p>\n\n<p>This worked very well, but unfortunately about two years later things took a nasty turn. One of the screws snapped and others were about to pull out.</p>\n\n<p><img alt=\"Wear and tear\" src=\"https://www.tunbury.org/images/wear-and-tear.png\"></p>\n\n<p>Taking the 1\u201d tube and turning it sideways gave a metal surface on to which the flang could be bolted. Cutting a hole in the bottom side of the tube would accomodate the 3/4\u201d ratchet shaft.</p>\n\n<p><img alt=\"Pipe with holes and cutout\" src=\"https://www.tunbury.org/images/ratchet-connector-with-cutout.png\"></p>\n\n<p>And with the flang in place it looks ready for use.</p>\n\n<p><img alt=\"Flang in place\" src=\"https://www.tunbury.org/images/ratchet-connector-flang.png\"></p>\n\n<p>Hopefully this will last a little longer this time.</p>\n\n<p><img alt=\"Ready for operation\" src=\"https://www.tunbury.org/images/in-operation.png\"></p>",
+19
mte/2021_08_29_arduino-gas-sensor.json
+19
mte/2021_08_29_arduino-gas-sensor.json
···+"summary": "With the current emphasis on ventilation to reduce the risks associated with inhaled droplets it I have put together a simple gas sensor to record concentrations over time. The output is a CSV file which can be graphed in Excel.",+"content": "<p>With the current emphasis on ventilation to reduce the risks associated with inhaled droplets it I have put together a simple gas sensor to record concentrations over time. The output is a <code>CSV</code> file which can be graphed in Excel.</p>\n\n<p>I have used an Arduino Nano for this project which gave some serious memory constraints on the coding particularly as I needed libraries for the real time clock, SD card and OLED display.</p>\n\n<p>The modules used are:</p>\n<ul>\n <li><a href=\"https://www.amazon.co.uk/dp/B072BMYZ18/ref=cm_sw_em_r_mt_dp_dl_WPWV0XM72DEW1A4HBDGE?_encoding=UTF8&psc=1\">Arduino Nano</a></li>\n <li><a href=\"https://www.amazon.co.uk/dp/B07BRFL7V7/ref=cm_sw_em_r_mt_dp_K5YWV6VZJJRT1D4WF9VJ?_encoding=UTF8&psc=1\">DS3231 Real time clock</a></li>\n <li><a href=\"https://www.amazon.co.uk/dp/B01L9GC470/ref=cm_sw_em_r_mt_dp_QQ8BPJQJP4G62QVRSNS3\">SSD1306 OLED display</a></li>\n <li><a href=\"https://www.amazon.co.uk/dp/B077MB17JB/ref=cm_sw_em_r_mt_dp_WYZQY0ZZKJRPV83WH8R3\">SD card reader</a></li>\n <li><a href=\"https://www.amazon.co.uk/dp/B07CYYB82F/ref=cm_sw_em_r_mt_dp_9S4XZ9QD8NBH1V6M7HV5\">Gas sensor</a></li>\n</ul>\n\n<h2>Hardware Connections</h2>\n\n<p>I used a veroboard to assemble the circuit as follows</p>\n<ol>\n <li>Scatter the modules around the board and solder all VCC and GND pins</li>\n <li>On the Arduino Nano, pins A4 and A5 are used for the Inter-Integrated Circuit (I2C) bus\n <ul>\n <li>Connect SDA (A4 on Nano) to the display and clock module\u2019s SDA pin</li>\n <li>Connect SCL (A5 on Nano) to the display and clock module\u2019s SCL pin</li>\n </ul>\n </li>\n</ol>\n\n<blockquote>\n <p>At this point, the clock and display module can be tested and the time set on the clock.</p>\n</blockquote>\n\n<ol>\n <li>Connect the A0 output from the gas sensor to the A0 pin on the Arduino</li>\n</ol>\n\n<blockquote>\n <p>Reading from A0 returns an integer between 0 and 1023 representing a gas concentration between 200 - 10000 ppm</p>\n</blockquote>\n\n<ol>\n <li>The SD card using the Serial Peripheral Interface (SPI) and requires 4 connections\n <ul>\n <li>Nano D10 to CS on the SD card module</li>\n <li>Nano D11 to MOSI on the SD card module</li>\n <li>Nano D12 to MISO on the SD card module</li>\n <li>Nano D13 to SCK on the SD card module</li>\n </ul>\n </li>\n</ol>\n\n<p>With the wiring complete load the Arduino sketch from my <a href=\"https://github.com/mtelvers/Arduino-MQ2/blob/113a2348ce65966b738dc55d9ddace36824ec49f/mq2.ino\">GitHub page</a>.</p>\n\n<h2>Software Overview</h2>\n\n<p>After the basic library initialization, the code creates two 64 elements arrays to store the samples taken each second and the average of those samples calculated each minute. These arrays will hold the latest sample in the first position, therefore before a new value is added all the other values will be shifted down by one. 
There would certainly be more efficient ways of handling this but with a small number of values this simple approach is workable.</p>\n\n<div><div><pre><code>#define SAMPLES 64\nuint16_t historySeconds[SAMPLES];\nuint16_t historyMinutes[SAMPLES];\n</code></pre></div></div>\n\n<p>The <em>main</em> loop of the program remembers the number of seconds on the clock in the variable <code>lastS</code> and waits for it to change, thus running the inner code once per second:</p>\n\n<div><div><pre><code>int lastS = -1;\n\nvoid loop(void) {\n DateTime dt = RTClib::now();\n\n if (lastS != dt.second()) {\n lastS = dt.second();\n\n // Inner code here runs once each second\n\n }\n delay(250);\n}\n</code></pre></div></div>\n\n<p>The inner code clears the display,</p>\n\n<div><div><pre><code>u8x8.clear();\nu8x8.setCursor(0, 0);\n</code></pre></div></div>\n\n<p>and then writes the date</p>\n\n<div><div><pre><code>toString(tmp, dt.year() - 2000, dt.month(), dt.day(), '-');\nu8x8.println(tmp);\n</code></pre></div></div>\n\n<p>If the time has just rolled over to a new minute (i.e. number of seconds is 0), take an average of the <em>seconds</em> samples and store that as the minute average. Finally, open a file named with the current date.</p>\n\n<div><div><pre><code>if (dt.second() == 0) {\n unsigned long total = 0;\n for (int h = 0; h < SAMPLES; h++)\n total += historySeconds[h];\n memmove(historyMinutes + 1, historyMinutes, (SAMPLES - 1) * sizeof(uint16_t));\n historyMinutes[0] = total / SAMPLES;\n strcat(tmp, \".csv\");\n txtFile = SD.open(tmp, FILE_WRITE);\n}\n</code></pre></div></div>\n\n<p>Read the next gas value and store it</p>\n\n<div><div><pre><code>uint16_t gasVal = analogRead(0);\nmemmove(historySeconds + 1, historySeconds, (SAMPLES - 1) * sizeof(uint16_t));\nhistorySeconds[0] = gasVal;\n</code></pre></div></div>\n\n<p>Display the current time</p>\n\n<div><div><pre><code>toString(tmp, dt.hour(), dt.minute(), dt.second(), ':');\nu8x8.println(tmp);\n</code></pre></div></div>\n\n<p>If there\u2019s a file open, write the time value to the file</p>\n\n<div><div><pre><code>if (txtFile) {\n strcat(tmp, \",\");\n txtFile.print(tmp);\n}\n</code></pre></div></div>\n\n<p>Display the gas value</p>\n\n<div><div><pre><code>itoa(gasVal, tmp, 10);\nu8x8.println(tmp);\n</code></pre></div></div>\n\n<p>And similarly, if there is a file open, write the current value to the file and close it</p>\n\n<div><div><pre><code>if (txtFile) {\n txtFile.println(tmp);\n txtFile.close();\n}\n</code></pre></div></div>\n\n<p>Lastly, draw two graphs of the current samples</p>\n\n<div><div><pre><code>drawGraph(8, 3, historySeconds);\ndrawGraph(8, 7, historyMinutes);\n</code></pre></div></div>\n\n<p>The graphs were tricky to draw as the slimmed down U8x8 version of the <a href=\"https://github.com/olikraus/u8g2\">U8g2</a> library doesn\u2019t provide any drawing functions. However you can create and display a custom font glyph. 
This mess of nested loops creates thirty-two 8 by 8 pixel glyphs to display a bar graph of 64 values with a maximum <em>y</em> value of 32.</p>\n\n<div><div><pre><code>void drawGraph(uint8_t col, uint8_t row, uint16_t *values) {\n uint8_t tmp[8];\n for (uint8_t r = 0; r < 4; r++) {\n for (uint8_t h = 0; h < SAMPLES; h += 8) {\n for (uint8_t i = 0; i < 8; i++) {\n int x = values[SAMPLES - h - 1 - i] / 16;\n x -= 8 * r;\n tmp[i] = 0;\n for (uint8_t b = 0; b < 8 && x > 0; b++, x--) {\n if (x) {\n tmp[i] |= (1 << (7 - b));\n }\n }\n }\n u8x8.drawTile(col + h / 8, row - r, 1, tmp);\n }\n }\n}\n</code></pre></div></div>\n\n<p>The graph below shows the recording during morning ringing and during the quarter peal in the afternoon (plus some messing around blowing directly into the sensor at the end). Windows open as usual!</p>\n\n<p><img alt=\"Graph\" src=\"https://www.tunbury.org/images/sample-values-recorded.png\"></p>",
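<p>The daily CSV files can also be graphed directly on the Pi rather than in Excel. A gnuplot one-liner as a sketch; the filename is an assumption based on the date format used by the sketch above (year-month-day with a two-digit year):</p>

<div><div><pre><code>gnuplot -persist -e "set datafile separator ','; set xdata time; set timefmt '%H:%M:%S'; set format x '%H:%M'; plot '21-08-29.csv' using 1:2 with lines title 'MQ-2 raw value'"
</code></pre></div></div>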
+20
mte/2021_09_04_foot-operated-timer.json
+20
mte/2021_09_04_foot-operated-timer.json
···+"summary": "At the end of a quarter peal there is always the question of how long it took and whether anyone really noted the start time. Mike proposed a foot operated timer.",+"content": "<p>At the end of a quarter peal there is always the question of how long it took and whether anyone really noted the start time. Mike proposed a foot operated timer.</p>\n\n<p>I wanted the display to be large enough that it can be seen while standing and I choose this <a href=\"https://www.amazon.co.uk/gp/product/B08BC8JY8T/\">MAX7219 dot matrix display from Amazon</a>. This turned out to be a bit of a bad purchase but more on that later.</p>\n\n<p>Using <a href=\"https://www.festi.info/boxes.py/\">boxes.py</a> to created the basic box that was just large enough to accommodate the display, battery, on/off switch and foot switch, I modified the design in Adobe Illustrator to shorten the top and add in a <em>shelf</em> for the display to sit on.</p>\n\n<p><img alt=\"net\" src=\"https://www.tunbury.org/images/foot-operated-timer-net.png\"></p>\n\n<p>This was cut on the laser cutter.</p>\n\n<p><img alt=\"net\" src=\"https://www.tunbury.org/images/foot-operated-timer-laser-cutting.jpg\"></p>\n\n<p>When assembling the electronics it became apparent that it would have been better to have a slightly taller box, but rather than waste the materials I decided to mount the Arduino upside down thereby fitting in a height of 12mm.</p>\n\n<p><img alt=\"Arduino\" src=\"https://www.tunbury.org/images/foot-operated-timer-arduino.jpg\"></p>\n\n<p>The DS3231 real time clock module was modified by bending the pins to fit in with the vero board spacing. Ultimately the battery holder was also removed to save space.</p>\n\n<p><img alt=\"DS3231\" src=\"https://www.tunbury.org/images/foot-operated-timer-clock-module.jpg\"></p>\n\n<p>The vero board was drilled to cut the tracks.</p>\n\n<p><img alt=\"Vero Board\" src=\"https://www.tunbury.org/images/foot-operated-timer-vero-board.jpg\"></p>\n\n<p><img alt=\"Vero Board\" src=\"https://www.tunbury.org/images/foot-operated-timer-assembly.jpg\"></p>\n\n<p>After the initial assembly, the unit was tested on battery for the first time. This showed that it didn\u2019t actually run on batteries. The code just crashed randomly after the display was initialised. Reading online on this <a href=\"https://arduinoplusplus.wordpress.com/2015/09/12/max7219-and-led-matrix-power-requirements/\">post</a> I found the problem with cheap display units!</p>\n\n<blockquote>\n <p>Most of the cheap generic modules have very low values for RSET, which would significantly increase the power/current required by the module. This seems to be 10k\u03a9 for the eBay specials, for a segment current exceeding 40mA, the specified minimum value for RSET in Table 11 being 11.8k\u03a9 for VLED = 2V.</p>\n</blockquote>\n\n<p>The full data sheet is available from <a href=\"https://datasheets.maximintegrated.com/en/ds/MAX7219-MAX7221.pdf\">Maxim</a></p>\n\n<p>I had some 100K\u03a9 surface mount resistors in 0603 format left over from another project. These were smaller than the 0804 format resistors used but they were relatively easy to change. Fortunately these fixed the problem.</p>\n\n<p>As an after thought a voltage divider was added to pin A0 to measure the battery voltage.</p>\n\n<p><img alt=\"Vero Board\" src=\"https://www.tunbury.org/images/foot-operated-timer-voltage-divider.jpg\"></p>\n\n<p>I wired the I2C bus from the Arduino to the DS3231 and the square wave output from the DS3231 to pin 2 on the Arduino. 
Pin 3 was connected to the push button. On the Arduino Nano only pins 2 and 3 can be used for interrupts. This configuration gave lots of options when it came to the code which wasn\u2019t actually written yet!</p>\n\n<p><img alt=\"Electronics\" src=\"https://www.tunbury.org/images/foot-operated-timer-electronics.jpg\"></p>\n\n<p>Assembling the rest of the box was straightforward although a bit fiddly.</p>\n\n<p><img alt=\"Finished project\" src=\"https://www.tunbury.org/images/foot-operated-timer-off.jpg\"></p>\n\n<p>The code is available on <a href=\"https://github.com/mtelvers/foot-timer\">GitHub</a></p>\n\n<p><img alt=\"Finished project running\" src=\"https://www.tunbury.org/images/foot-operated-timer.jpg\"></p>",
+20
mte/2023_08_08_3d-printed-train.json
+20
mte/2023_08_08_3d-printed-train.json
···+"summary": "Creating a new OO train body drawn from scratch in Fusion 360 to minic the original damaged version.",+"content": "<p>Creating a new OO train body drawn from scratch in Fusion 360 to minic\nthe original damaged version.</p>\n\n<h1>Early versions</h1>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/IMG_1919.jpg\">\n<img alt=\"\" src=\"https://www.tunbury.org/images/IMG_1918.jpg\"></p>\n\n<h1>Printed with tree support</h1>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/IMG_1917.jpg\"></p>\n\n<h1>Finished</h1>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/IMG_1920.jpg\"></p>",
+20
mte/2025_01_18_arduino-pwm-train-controller.json
+20
mte/2025_01_18_arduino-pwm-train-controller.json
···+"content": "<h1>Circuit</h1>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/train-controller-diagram.png\"></p>\n\n<h1>Case</h1>\n\n<p>3D printable STL files are available for download: <a href=\"https://www.tunbury.org/images/train-controller.stl\">STL files</a></p>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/train-controller-fusion-360.png\"></p>\n\n<h1>Arduino Code</h1>\n\n<div><div><pre><code>/*\n * Arduino Nano PWM Dual Train Controller\n * This sketch reads values from two potentiometers connected to A0 and A1\n * and uses these values to control the speed and direction of a motor via\n * an L298N motor driver. The motor speed is controlled using PWM signals\n * on pins D5 and D10, and the direction is controlled using digital signals\n * on pins D6, D7, D8, and D9.\n */\n\n// Pin definitions\nconst int potLeftPin = A0;\nconst int potRightPin = A1;\nconst int enaPin = 10;\nconst int in1Pin = 9;\nconst int in2Pin = 8;\nconst int in3Pin = 7;\nconst int in4Pin = 6;\nconst int enbPin = 5;\n\nvoid setup() {\n // Initialize serial communication\n Serial.begin(9600);\n\n // Set motor control pins as outputs\n pinMode(enbPin, OUTPUT);\n pinMode(enaPin, OUTPUT);\n pinMode(in1Pin, OUTPUT);\n pinMode(in2Pin, OUTPUT);\n pinMode(in3Pin, OUTPUT);\n pinMode(in4Pin, OUTPUT);\n}\n\nvoid loop() {\n // Read potentiometer values\n int potLeft = analogRead(potLeftPin);\n int potRight = analogRead(potRightPin);\n\n // Map potentiometer values to PWM range\n int pwmLeft = pow(potLeft - 512, 2) / 1024;\n int pwmRight = pow(potRight - 512, 2) / 1024;\n\n // Control motor speed and direction\n analogWrite(enaPin, pwmLeft);\n analogWrite(enbPin, pwmRight);\n\n // Set motor direction based on potentiometer values\n if (potLeft < 512) {\n digitalWrite(in1Pin, LOW);\n digitalWrite(in2Pin, HIGH);\n } else {\n digitalWrite(in1Pin, HIGH);\n digitalWrite(in2Pin, LOW);\n }\n\n if (potRight < 512) {\n digitalWrite(in3Pin, LOW);\n digitalWrite(in4Pin, HIGH);\n } else {\n digitalWrite(in3Pin, HIGH);\n digitalWrite(in4Pin, LOW);\n }\n\n // Print values to serial monitor for debugging\n Serial.print(\"potLeft: \");\n Serial.print(potLeft);\n Serial.print(\" PWMLeft: \");\n Serial.print(pwmLeft);\n Serial.print(\" potRight: \");\n Serial.print(potRight);\n Serial.print(\" PWMRight: \");\n Serial.println(pwmRight);\n\n // Small delay to stabilize readings\n delay(100);\n}\n</code></pre></div></div>",
+20
mte/2025_03_12_deekseek-r1-on-raspberry-pi.json
+20
mte/2025_03_12_deekseek-r1-on-raspberry-pi.json
···+"content": "<p>I\u2019ve heard a lot about Deepseek and wanted to try it for myself.</p>\n\n<p>Using a Raspberry Pi 5 with 8GB of RAM and an NVMe, I installed Ollama:</p>\n\n<div><div><pre><code>curl <span>-fsSL</span> https://ollama.com/install.sh | sh\n</code></pre></div></div>\n\n<p>Next, I downloaded the smallest Deepseek model</p>\n\n<div><div><pre><code>ollama pull deepseek-r1:1.5b\n</code></pre></div></div>\n\n<p>And ran it using:</p>\n\n<div><div><pre><code>ollama run deepseek-r1:1.5b\n</code></pre></div></div>\n\n<p>Things went downhill after that, as I couldn\u2019t get a complete response.\nThe Pi would lock up with no output on the display:</p>\n\n<div><div><pre><code>$ ollama run deepseek-r1:1.5b\n>>> what is the capital of england?\n<think>\nOkay, so I need to figure out whatclient_loop: send disconnect: Broken pipe\n</code></pre></div></div>\n\n<p>Switching from the Apple iPad charger I was using to an official Pi\npower supply resolved this problem.</p>\n\n<div><div><pre><code>$ ollama run deepseek-r1:1.5b\n>>> what is the capital of england?\n<think>\n\n</think>\n\nThe capital of England is London.\n\n>>>\n</code></pre></div></div>\n\n<p>However, it\u2019s not so good with a more niche question:</p>\n\n<div><div><pre><code>$ ollama run deepseek-r1:1.5b\n>>> what is OCaml?\n<think>\n\n</think>\n\nOCaml is a high-level, multi-paradigm programming language that combines\nfeatures from several other languages and frameworks. It was created by\nMartin Abadi, the founding father of Google, and has gained popularity\nin both academic research and industrial applications.\n\n...\n</code></pre></div></div>\n\n<p>How about an easier question?</p>\n\n<div><div><pre><code>$ ollama run deepseek-r1:1.5b\n>>> Who invented the C programming language?\n<think>\nOkay, so I need to figure out who invented the C programming\nlanguage. Hmm, I remember that C is a high-level programming language\nused for system programming and embedded systems because it's efficient\nand portable. But I'm not sure about its origin.\n\nFirst, I'll think about other languages that are often mentioned\nbefore C. For example, C was developed by Dennis Ritchie in the late\n1960s. He wrote the header files, which made the language more readable\nand maintainable. He probably worked for Bell Labs since that's where\nDennis is associated.\n\nI wonder if there were any other contributors. Maybe someone else helped\ndevelop parts of the language before Ritchie did everything. But I don't\nrecall any major alternatives to C being widely used before it became\ndominant in certain areas like operating systems and embedded devices.\n\nSo, putting it all together, Dennis Ritchie is credited with inventing\nC. He created a lot of header files that made the language structured\nand easier to use, which helped it become very popular for its efficiency\nand portability.\n</think>\n\nThe C programming language was invented by Dennis Ritchie, who worked at\nBell Labs in the late 1960s. His work led to the creation of significant\nheader files that enhanced readability and maintainability. Ritchie's\ncontributions made C a widely used language in areas like operating\nsystems and embedded systems.\n\n**Answer:**\n\nDennis Ritchie invented the C programming language. He worked at Bell Labs\n(now part of Tufts University) in the late 1960s, contributing to its\ndevelopment by creating essential header files that made the language\nmore readable and maintainable. 
His work led to a widely popularized\nlanguage used for system programming and embedded systems.\n\n**Step-by-Step Explanation:**\n\n1. **Identify the Problem:** The question is about the origin of C,\nan early high-level programming language.\n2. **Recall Other Languages Before C:** Mention languages like Pascal,\nAda, and Modula-2 that were around before C.\n3. **Consider Developers Known for C:** Dennis Ritchie stands out as a\nkey figure in C's development.\n4. **Details about Ritchie\u2019s Contributions:** He worked at Bell Labs\n(now Tufts University) in the 1960s, creating header files to improve\nC's structure and readability.\n5. **Link to C Language Development:** This led to C becoming prominent\nfor system programming due to its efficiency and portability.\n\n**Conclusion:**\nDennis Ritchie is credited with inventing C by developing header files\nthat enhanced the language\u2019s usability and structure.\n</code></pre></div></div>",
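<p>For repeat tests it is handy to run the model non-interactively; <code>ollama run</code> should also accept the prompt on the command line, which makes it easy to script and time:</p>

<div><div><pre><code>ollama list
time ollama run deepseek-r1:1.5b "Who invented the C programming language?"
</code></pre></div></div>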
+20
mte/2025_03_14_pi-day.json
+20
mte/2025_03_14_pi-day.json
···+"content": "<p>It\u2019s <a href=\"https://en.wikipedia.org/wiki/Pi_Day\">Pi Day</a> 2025</p>\n\n<p>Archimedes calculated the perimeter of inscribed regular polygons\nwithin a circle to approximate the value of \u03c0.</p>\n\n<p>A square inscribed in a unit circle can be divided into four right\ntriangles with two sides of unit length, corresponding to the radius of\nthe circle. The third side can be calculated by Pythagoras\u2019 theorem to\nbe \u221a2. The perimeter of the square would be 4\u221a2. Given, C=\u03c0d, we\ncan calculate \u03c0 from the circumference by dividing it by the diameter,\n2, giving 2\u221a2.</p>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/pi-archimedes-triangle.png\"></p>\n\n<p>CA, CD and CB are all the unit radius. AB is \u221a2 as calculated above. The\nangle ACB can be bisected with the line CD. EB is half of AB. Using\nPythagoras\u2019 theorem on the triangle BCE we can calculated CE. DE is then\n1 - CE, allowing us to use Pythagoras\u2019 theorem for a final time on BDE to\ncalculated BD. The improved approximation of the perimeter is now 8 x BD.</p>\n\n<p>We can iterate on this process using the following code:</p>\n\n<div><div><pre><code><span>let</span> <span>rec</span> <span>pi</span> <span>edge_squared</span> <span>sides</span> <span>=</span> <span>function</span>\n <span>|</span> <span>0</span> <span>-></span> <span>sides</span> <span>*.</span> <span>Float</span><span>.</span><span>sqrt</span><span>(</span><span>edge_squared</span><span>)</span> <span>/.</span> <span>2</span><span>.</span>\n <span>|</span> <span>n</span> <span>-></span>\n <span>let</span> <span>edge_squared</span> <span>=</span> <span>2</span><span>.</span> <span>-.</span> <span>2</span><span>.</span> <span>*.</span> <span>Float</span><span>.</span><span>sqrt</span> <span>(</span><span>1</span><span>.</span> <span>-.</span> <span>edge_squared</span> <span>/.</span> <span>4</span><span>.</span><span>)</span> <span>in</span>\n <span>let</span> <span>sides</span> <span>=</span> <span>sides</span> <span>*.</span> <span>2</span><span>.</span> <span>in</span>\n <span>pi</span> <span>edge_squared</span> <span>sides</span> <span>(</span><span>n</span> <span>-</span> <span>1</span><span>)</span>\n\n<span>let</span> <span>approximation</span> <span>=</span> <span>pi</span> <span>2</span><span>.</span> <span>4</span><span>.</span> <span>13</span>\n<span>let</span> <span>()</span> <span>=</span> <span>Printf</span><span>.</span><span>printf</span> <span>\"pi %.31f</span><span>\\n</span><span>\"</span> <span>approximation</span>\n</code></pre></div></div>\n\n<p>I found this method quite interesting. Usually, as the number of\niterations increases the approximation of \u03c0 becomes more accurate\nwith the delta between each step becoming smaller until the difference\nis effectively zero (given the limited precision of the floating\ncalculation). 
However, in this case, after 13 iterations the\napproximation becomes worse!</p>\n\n\n\n \n \n iteration\n approximation\n % error\n \n \n \n \n 0\n 2.8284271247461902909492437174777\n 9.968368\n \n \n 1\n 3.0614674589207178101446515938733\n 2.550464\n \n \n 2\n 3.1214451522580528575190328410827\n 0.641315\n \n \n 3\n 3.1365484905459406483885231864406\n 0.160561\n \n \n 4\n 3.1403311569547391890466769837076\n 0.040155\n \n \n 5\n 3.1412772509327568926096319046337\n 0.010040\n \n \n 6\n 3.1415138011441454679584239784162\n 0.002510\n \n \n 7\n 3.1415729403678827047485810908256\n 0.000627\n \n \n 8\n 3.1415877252799608854161306226160\n 0.000157\n \n \n 9\n 3.1415914215046352175875199463917\n 0.000039\n \n \n 10\n 3.1415923456110768086091411532834\n 0.000010\n \n \n 11\n 3.1415925765450043449789063743083\n 0.000002\n \n \n 12\n 3.1415926334632482408437681442592\n 0.000001\n \n \n 13\n 3.1415926548075892021927302266704\n -0.000000\n \n \n 14\n 3.1415926453212152935634549066890\n 0.000000\n \n \n 15\n 3.1415926073757196590463536267634\n 0.000001\n \n \n 16\n 3.1415929109396727447744979144773\n -0.000008\n \n \n 17\n 3.1415941251951911006301543238806\n -0.000047\n \n \n 18\n 3.1415965537048196054570325941313\n -0.000124\n \n \n 19\n 3.1415965537048196054570325941313\n -0.000124\n \n \n 20\n 3.1416742650217575061333263874985\n -0.002598\n \n \n 21\n 3.1418296818892015309643284126651\n -0.007545\n \n \n 22\n 3.1424512724941338071005247911671\n -0.027331\n \n \n 23\n 3.1424512724941338071005247911671\n -0.027331\n \n \n 24\n 3.1622776601683795227870632515987\n -0.658424\n \n \n 25\n 3.1622776601683795227870632515987\n -0.658424\n \n \n 26\n 3.4641016151377543863532082468737\n -10.265779\n \n \n 27\n 4.0000000000000000000000000000000\n -27.323954\n \n \n 28\n 0.0000000000000000000000000000000\n 100.000000\n \n \n\n\n<p>Using the <a href=\"https://opam.ocaml.org/packages/decimal/\">decimal</a> package\nwe can specify the floating point precision we want allowing us to\nget to 100 decimal places in 165 steps.</p>\n\n<div><div><pre><code><span>open</span> <span>Decimal</span>\n\n<span>let</span> <span>context</span> <span>=</span> <span>Context</span><span>.</span><span>make</span> <span>~</span><span>prec</span><span>:</span><span>200</span> <span>()</span>\n<span>let</span> <span>two</span> <span>=</span> <span>of_int</span> <span>2</span>\n<span>let</span> <span>four</span> <span>=</span> <span>of_int</span> <span>4</span>\n\n<span>let</span> <span>rec</span> <span>pi</span> <span>edge_squared</span> <span>sides</span> <span>n</span> <span>=</span>\n <span>match</span> <span>n</span> <span>with</span>\n <span>|</span> <span>0</span> <span>-></span> <span>mul</span> <span>~</span><span>context</span> <span>sides</span> <span>(</span><span>div</span> <span>~</span><span>context</span> <span>(</span><span>sqrt</span> <span>~</span><span>context</span> <span>edge_squared</span><span>)</span> <span>two</span><span>)</span>\n <span>|</span> <span>n</span> <span>-></span>\n <span>let</span> <span>edge_squared</span> <span>=</span>\n <span>sub</span> <span>~</span><span>context</span> <span>two</span>\n <span>(</span><span>mul</span> <span>~</span><span>context</span> <span>two</span>\n <span>(</span><span>sqrt</span> <span>~</span><span>context</span> <span>(</span><span>sub</span> <span>~</span><span>context</span> <span>one</span> <span>(</span><span>div</span> <span>~</span><span>context</span> <span>edge_squared</span> <span>four</span><span>))))</span>\n <span>in</span>\n <span>let</span> 
<span>sides</span> <span>=</span> <span>mul</span> <span>~</span><span>context</span> <span>sides</span> <span>two</span> <span>in</span>\n <span>pi</span> <span>edge_squared</span> <span>sides</span> <span>(</span><span>Int</span><span>.</span><span>pred</span> <span>n</span><span>)</span>\n\n<span>let</span> <span>()</span> <span>=</span> <span>pi</span> <span>two</span> <span>four</span> <span>165</span> <span>|></span> <span>to_string</span> <span>~</span><span>context</span> <span>|></span> <span>Printf</span><span>.</span><span>printf</span> <span>\"%s</span><span>\\n</span><span>\"</span>\n</code></pre></div></div>\n\n<p>This code is available on <a href=\"https://github.com/mtelvers/pi-archimedes\">GitHub</a></p>",
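<p>The collapse after iteration 13 is a floating point effect: as the edge shrinks, <code>Float.sqrt (1. -. edge_squared /. 4.)</code> approaches 1, so the update subtracts two nearly equal numbers and cancels most of the significant digits, eventually rounding the edge to zero. A minimal sketch (not part of the original post; <code>pi_stable</code> is an illustrative name) rearranges the update into an algebraically identical form that avoids the cancellation, so ordinary floats keep converging:</p>

<div><div><pre><code>(* A sketch, not the post's code: the same Archimedes recursion with the
   edge update rearranged to avoid subtracting nearly equal numbers. *)
let rec pi_stable edge_squared sides = function
  | 0 -> sides *. Float.sqrt edge_squared /. 2.
  | n ->
    (* equal to 2. -. 2. *. sqrt (1. -. e /. 4.), but without cancellation *)
    let edge_squared =
      edge_squared /. (2. +. 2. *. Float.sqrt (1. -. edge_squared /. 4.))
    in
    pi_stable edge_squared (sides *. 2.) (n - 1)

let () = Printf.printf "%.15f\n" (pi_stable 2. 4. 30)
</code></pre></div></div>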
+20
mte/2025_03_15_bluesky-pds.json
+20
mte/2025_03_15_bluesky-pds.json
···+"content": "<p>Today I have set up my own Bluesky (PDS) Personal Data Server.</p>\n\n<p>I followed the README at\n<a href=\"https://github.com/bluesky-social/pds\">https://github.com/bluesky-social/pds</a>\nusing an Ubuntu 22.04 VM. The basic steps are:</p>\n\n<ol>\n <li>Publish DNS records pointing to your machine.</li>\n <li>As root, run <a href=\"https://raw.githubusercontent.com/bluesky-social/pds/main/installer.sh\">install.sh</a>.</li>\n <li>Enter your email address and preferred handle.</li>\n</ol>\n\n<p>It wasn\u2019t entirely obvious how to set your handle to be the same\nas the domain name when you have something else already published\non the domain such as your web server.</p>\n\n<p><a href=\"https://github.com/bluesky-social/pds/issues/103\">Issue #103</a> shows how this should be achieved.</p>\n\n<ol>\n <li>Publish the DNS record for <code>pds.yourdomain.com</code>.</li>\n <li>Use <code>pds.yourdomain.com</code> during setup.</li>\n <li>At the final stage where a handle is created, use <code>tmphandle.pds.yourdomain.com</code></li>\n <li>Change the change to your preferred handle via the Bluesky app.</li>\n</ol>\n\n<p>Login using a custom server pds.yourdomain.com and the handle you created.</p>\n\n<p>Next go to Account > Handle and select \u2018I have my own domain\u2019. Enter\nthe domain name which should be the new handle that you want. In\nmy case, <code>mtelvers.tunbury.org</code>. Next, publish a DNS TXT record\nfor <code>_atproto.mtelvers.tunbury.org</code> and publish your did record\n<code>did=did:plc:5le6ofipuf6sdk6czluurgjc</code></p>\n\n<div><div><pre><code>Check service status : sudo systemctl status pds\nWatch service logs : sudo docker logs -f pds\nBackup service data : /pds\nPDS Admin command : pdsadmin\n\nTo see pdsadmin commands, run \"pdsadmin help\"\n</code></pre></div></div>",
+20
mte/2025_03_16_setup-tangled-with-bluesky.json
+20
mte/2025_03_16_setup-tangled-with-bluesky.json
···+"summary": "To setup this up, I\u2019m using a modified version of Anil\u2019s repo. My repo is here. Firstly, clone the repo and run gen-key.sh.",+"content": "<p>To setup this up, I\u2019m using a modified version of Anil\u2019s <a href=\"https://tangled.sh/@anil.recoil.org/knot-docker\">repo</a>. My repo is <a href=\"https://tangled.sh/@mtelvers.tunbury.org/knot-docker\">here</a>. Firstly, clone the repo and run <code>gen-key.sh</code>.</p>\n\n<p>Go to <a href=\"https://tangled.sh/login\">https://tangled.sh/login</a> and click the <a href=\"https://bsky.app/settings/app-passwords\">link</a> to generate an app password. Copy the created password and return to <a href=\"https://www.tunbury.org/2025/03/16/setup-tangled-with-bluesky/\">https://tangled.sh/login</a> and sign in using your handle and the newly created app password.</p>\n\n<p>Go to <a href=\"https://tangled.sh/knots\">https://tangled.sh/knots</a>, enter your knot hostname and click on generate key. Copy <code>knot.env.template</code> to <code>.env</code> and enter the key in <code>KNOT_SERVER_SECRET</code>. In the same file, also set the server name.</p>\n\n<p>The original <code>Dockerfile</code> didn\u2019t quite work for me as <code>useradd -D</code> (from alpine/busybox) leads to a disabled user which cannot sign in, even over SSH. Instead, I generate a random password for the <code>git</code> user. My diff looks like this:</p>\n\n<div><div><pre><code>- adduser -D -u 1000 -G git -h /home/git git && \\\n+ pw=\"$(head -c 20 /dev/urandom | base64 | head -c 10)\" \\\n+ printf \"$pw\\n$pw\\n\" | \\\n+ adduser -u 1000 -G git -h /home/git git && \\\n</code></pre></div></div>\n\n<p>Run <code>docker compose up -d</code> then check on <a href=\"https://tangled.sh/knots\">https://tangled.sh/knots</a>. Click on initialize and wait for the process to complete.</p>\n\n<p>Add a remote repo as normal:</p>\n\n<div><div><pre><code>git remote add knot git@git.tunbury.org:mtelvers.tunbury.org/pi-archimedes\n</code></pre></div></div>\n<p>Then push as you would to any other remote</p>\n<div><div><pre><code>git push knot\n</code></pre></div></div>",
+20
mte/2025_03_17_capnproto.json
+20
mte/2025_03_17_capnproto.json
···+"summary": "Cap\u2019n Proto has become a hot topic recently and while this is used for many OCaml-CI services, I spent some time creating a minimal application.",+"content": "<p>Cap\u2019n Proto has become a hot topic recently and while this is used for many OCaml-CI services, I spent some time creating a minimal application.</p>\n\n<p>Firstly create a schema with a single interface whch accepts a file name and returns the content.</p>\n\n<div><div><pre><code>interface Foo {\n get @0 (name :Text) -> (reply :Text);\n}\n</code></pre></div></div>\n\n<p>This schema can then be compiled into the bindings for your required language. e.g. <code>capnp compile -o ocaml:. schema.capnp</code></p>\n\n<p>In practice this need not be done by hand as we can use a <code>dune</code> rule to do this.</p>\n\n<div><div><pre><code>(rule\n (targets foo_api.ml foo_api.mli)\n (deps foo_api.capnp)\n (action (run capnp compile -o %{bin:capnpc-ocaml} %{deps})))\n</code></pre></div></div>\n\n<p>On the server side we now need to extend the automatically generate code to actually implement the interface. This code is largely boilerplate.</p>\n\n<div><div><pre><code><span>module</span> <span>Api</span> <span>=</span> <span>Foo_api</span><span>.</span><span>MakeRPC</span><span>(</span><span>Capnp_rpc</span><span>)</span>\n\n<span>open</span> <span>Capnp_rpc</span><span>.</span><span>Std</span>\n\n<span>let</span> <span>read_from_file</span> <span>filename</span> <span>=</span> <span>In_channel</span><span>.</span><span>with_open_text</span> <span>filename</span> <span>@@</span> <span>fun</span> <span>ic</span> <span>-></span> <span>In_channel</span><span>.</span><span>input_all</span> <span>ic</span>\n\n<span>let</span> <span>local</span> <span>=</span>\n <span>let</span> <span>module</span> <span>Foo</span> <span>=</span> <span>Api</span><span>.</span><span>Service</span><span>.</span><span>Foo</span> <span>in</span>\n <span>Foo</span><span>.</span><span>local</span> <span>@@</span> <span>object</span>\n <span>inherit</span> <span>Foo</span><span>.</span><span>service</span>\n\n <span>method</span> <span>get_impl</span> <span>params</span> <span>release_param_caps</span> <span>=</span>\n <span>let</span> <span>open</span> <span>Foo</span><span>.</span><span>Get</span> <span>in</span>\n <span>let</span> <span>name</span> <span>=</span> <span>Params</span><span>.</span><span>name_get</span> <span>params</span> <span>in</span>\n <span>release_param_caps</span> <span>()</span><span>;</span>\n <span>let</span> <span>response</span><span>,</span> <span>results</span> <span>=</span> <span>Service</span><span>.</span><span>Response</span><span>.</span><span>create</span> <span>Results</span><span>.</span><span>init_pointer</span> <span>in</span>\n <span>Results</span><span>.</span><span>reply_set</span> <span>results</span> <span>(</span><span>read_from_file</span> <span>name</span><span>);</span>\n <span>Service</span><span>.</span><span>return</span> <span>response</span>\n <span>end</span>\n</code></pre></div></div>\n\n<p>The server needs to generate the capability file needed to access the service and wait for incoming connections.</p>\n\n<div><div><pre><code><span>let</span> <span>cap_file</span> <span>=</span> <span>\"echo.cap\"</span>\n\n<span>let</span> <span>serve</span> <span>config</span> <span>=</span>\n <span>Switch</span><span>.</span><span>run</span> <span>@@</span> <span>fun</span> <span>sw</span> <span>-></span>\n <span>let</span> <span>service_id</span> <span>=</span> 
<span>Capnp_rpc_unix</span><span>.</span><span>Vat_config</span><span>.</span><span>derived_id</span> <span>config</span> <span>\"main\"</span> <span>in</span>\n <span>let</span> <span>restore</span> <span>=</span> <span>Restorer</span><span>.</span><span>single</span> <span>service_id</span> <span>(</span><span>Foo</span><span>.</span><span>local</span><span>)</span> <span>in</span>\n <span>let</span> <span>vat</span> <span>=</span> <span>Capnp_rpc_unix</span><span>.</span><span>serve</span> <span>~</span><span>sw</span> <span>~</span><span>restore</span> <span>config</span> <span>in</span>\n <span>match</span> <span>Capnp_rpc_unix</span><span>.</span><span>Cap_file</span><span>.</span><span>save_service</span> <span>vat</span> <span>service_id</span> <span>cap_file</span> <span>with</span>\n <span>|</span> <span>Error</span> <span>`Msg</span> <span>m</span> <span>-></span> <span>failwith</span> <span>m</span>\n <span>|</span> <span>Ok</span> <span>()</span> <span>-></span>\n <span>traceln</span> <span>\"Server running. Connect using %S.\"</span> <span>cap_file</span><span>;</span>\n <span>Fiber</span><span>.</span><span>await_cancel</span> <span>()</span>\n</code></pre></div></div>\n\n<p>The client application imports the capability file and calls the service <code>Foo.get</code>.</p>\n\n<div><div><pre><code><span>let</span> <span>run_client</span> <span>service</span> <span>=</span>\n <span>let</span> <span>x</span> <span>=</span> <span>Foo</span><span>.</span><span>get</span> <span>service</span> <span>\"client.ml\"</span> <span>in</span>\n <span>traceln</span> <span>\"%S\"</span> <span>x</span>\n\n<span>let</span> <span>connect</span> <span>net</span> <span>uri</span> <span>=</span>\n <span>Switch</span><span>.</span><span>run</span> <span>@@</span> <span>fun</span> <span>sw</span> <span>-></span>\n <span>let</span> <span>client_vat</span> <span>=</span> <span>Capnp_rpc_unix</span><span>.</span><span>client_only_vat</span> <span>~</span><span>sw</span> <span>net</span> <span>in</span>\n <span>let</span> <span>sr</span> <span>=</span> <span>Capnp_rpc_unix</span><span>.</span><span>Vat</span><span>.</span><span>import_exn</span> <span>client_vat</span> <span>uri</span> <span>in</span>\n <span>Capnp_rpc_unix</span><span>.</span><span>with_cap_exn</span> <span>sr</span> <span>run_client</span>\n</code></pre></div></div>\n\n<p>Where <code>Foo.get</code> is defined like this</p>\n\n<div><div><pre><code><span>module</span> <span>Foo</span> <span>=</span> <span>Api</span><span>.</span><span>Client</span><span>.</span><span>Foo</span>\n\n<span>let</span> <span>get</span> <span>t</span> <span>name</span> <span>=</span>\n <span>let</span> <span>open</span> <span>Foo</span><span>.</span><span>Get</span> <span>in</span>\n <span>let</span> <span>request</span><span>,</span> <span>params</span> <span>=</span> <span>Capability</span><span>.</span><span>Request</span><span>.</span><span>create</span> <span>Params</span><span>.</span><span>init_pointer</span> <span>in</span>\n <span>Params</span><span>.</span><span>name_set</span> <span>params</span> <span>name</span><span>;</span>\n <span>Capability</span><span>.</span><span>call_for_value_exn</span> <span>t</span> <span>method_id</span> <span>request</span> <span>|></span> <span>Results</span><span>.</span><span>reply_get</span>\n</code></pre></div></div>\n\n<p>Run the server application passing it parameters of where to save the private key and which interface/port to listen on.</p>\n\n<div><div><pre><code><span>$ </span>dune <span>exec</span> 
<span>--</span> ./server.exe <span>--capnp-secret-key-file</span> ./server.pem <span>--capnp-listen-address</span> tcp:127.0.0.1:7000\n+Server running. Connect using <span>\"echo.cap\"</span><span>.</span>\n</code></pre></div></div>\n\n<p>The <code>.cap</code> looks like this</p>\n\n<div><div><pre><code>capnp://sha-256:f5BAo2n_2gVxUdkyzYsIuitpA1YT_7xFg31FIdNKVls@127.0.0.1:7000/6v45oIvGQ6noMaLOh5GHAJnGJPWEO5A3Qkt0Egke4Ic\n</code></pre></div></div>\n\n<p>In another window, invoke the client.</p>\n\n<div><div><pre><code><span>$ </span>dune <span>exec</span> <span>--</span> ./client.exe ./echo.cap\n</code></pre></div></div>\n\n<p>The full code is available on <a href=\"https://github.com/mtelvers/capnp-minimum\">Github</a>.</p>",
+20
mte/2025_03_17_irmin.json
+20
mte/2025_03_17_irmin.json
···+"content": "<p>After Thomas\u2019 talk today I wanted to try <a href=\"https://irmin.org\">Irmin</a> for myself.</p>\n\n<p>In a new switch I installed Irmin via opam <code>opam install irmin-git</code> and then built the <a href=\"https://irmin.org/tutorial/getting-started/\">example code</a></p>\n\n<div><div><pre><code><span>open</span> <span>Lwt</span><span>.</span><span>Syntax</span>\n<span>module</span> <span>Git_store</span> <span>=</span> <span>Irmin_git_unix</span><span>.</span><span>FS</span><span>.</span><span>KV</span> <span>(</span><span>Irmin</span><span>.</span><span>Contents</span><span>.</span><span>String</span><span>)</span>\n<span>module</span> <span>Git_info</span> <span>=</span> <span>Irmin_unix</span><span>.</span><span>Info</span> <span>(</span><span>Git_store</span><span>.</span><span>Info</span><span>)</span>\n\n<span>let</span> <span>git_config</span> <span>=</span> <span>Irmin_git</span><span>.</span><span>config</span> <span>~</span><span>bare</span><span>:</span><span>true</span> <span>\"./db\"</span>\n<span>let</span> <span>info</span> <span>message</span> <span>=</span> <span>Git_info</span><span>.</span><span>v</span> <span>~</span><span>author</span><span>:</span><span>\"Example\"</span> <span>\"%s\"</span> <span>message</span>\n\n<span>let</span> <span>main_branch</span> <span>config</span> <span>=</span>\n <span>let</span><span>*</span> <span>repo</span> <span>=</span> <span>Git_store</span><span>.</span><span>Repo</span><span>.</span><span>v</span> <span>config</span> <span>in</span>\n <span>Git_store</span><span>.</span><span>main</span> <span>repo</span>\n\n<span>let</span> <span>main</span> <span>=</span>\n <span>let</span><span>*</span> <span>t</span> <span>=</span> <span>main_branch</span> <span>git_config</span> <span>in</span>\n <span>(* Set a/b/c to \"Hello, Irmin!\" *)</span>\n <span>let</span><span>*</span> <span>()</span> <span>=</span>\n <span>Git_store</span><span>.</span><span>set_exn</span> <span>t</span> <span>[</span> <span>\"a\"</span><span>;</span> <span>\"b\"</span><span>;</span> <span>\"c\"</span> <span>]</span> <span>\"Hello, Irmin!\"</span>\n <span>~</span><span>info</span><span>:</span><span>(</span><span>info</span> <span>\"my first commit\"</span><span>)</span>\n <span>in</span>\n <span>(* Get a/b/c *)</span>\n <span>let</span><span>+</span> <span>s</span> <span>=</span> <span>Git_store</span><span>.</span><span>get</span> <span>t</span> <span>[</span> <span>\"a\"</span><span>;</span> <span>\"b\"</span><span>;</span> <span>\"c\"</span> <span>]</span> <span>in</span>\n <span>assert</span> <span>(</span><span>s</span> <span>=</span> <span>\"Hello, Irmin!\"</span><span>)</span>\n\n<span>let</span> <span>()</span> <span>=</span> <span>Lwt_main</span><span>.</span><span>run</span> <span>main</span>\n</code></pre></div></div>\n\n<p>I\u2019m pretty excited about the possibilities.</p>",
+20
mte/2025_03_23_real-time-trains.json
+20
mte/2025_03_23_real-time-trains.json
···+"summary": "After the Heathrow substation electrical fire, I found myself in Manchester with a long train ride ahead. Checking on Real Time Trains for the schedule I noticed that they had an API. With time to spare, I registered for an account and downloaded the sample code from ocaml-cohttp.",+"content": "<p>After the Heathrow substation electrical fire, I found myself in Manchester with a long train ride ahead. Checking on <a href=\"https://www.realtimetrains.co.uk\">Real Time Trains</a> for the schedule I noticed that they had an API. With time to spare, I registered for an account and downloaded the sample code from <a href=\"https://github.com/mirage/ocaml-cohttp\">ocaml-cohttp</a>.</p>\n\n<p>The API account details uses HTTP basic authentication which is added via the HTTP header:</p>\n\n<div><div><pre><code> <span>let</span> <span>headers</span> <span>=</span> <span>Cohttp</span><span>.</span><span>Header</span><span>.</span><span>init</span> <span>()</span> <span>in</span>\n <span>let</span> <span>headers</span> <span>=</span>\n <span>Cohttp</span><span>.</span><span>Header</span><span>.</span><span>add_authorization</span> <span>headers</span> <span>(</span><span>`Basic</span> <span>(</span><span>user</span><span>,</span> <span>password</span><span>))</span>\n</code></pre></div></div>\n\n<p>The response from the API can be converted to JSON using <a href=\"https://github.com/ocaml-community/yojson\">Yojson</a>.</p>\n\n<div><div><pre><code><span>let</span> <span>json</span> <span>=</span>\n <span>Eio</span><span>.</span><span>Buf_read</span><span>.(</span><span>parse_exn</span> <span>take_all</span><span>)</span> <span>body</span> <span>~</span><span>max_size</span><span>:</span><span>max_int</span>\n <span>|></span> <span>Yojson</span><span>.</span><span>Safe</span><span>.</span><span>from_string</span>\n</code></pre></div></div>\n\n<p>The JSON field can be read using the <code>Util</code> functions. For example, <code>Yojson.Basic.Util.member \"services\" json</code> will read the <code>services</code> entry. Elements can be converted to lists with <code>Yojson.Basic.Util.to_list</code>. After a bit of hacking this turned out to be quite tedious to code.</p>\n\n<p>As an alternative, I decided to use <code>ppx_deriving_yojson.runtime</code>. I described the JSON blocks as OCaml types, e.g. <code>station</code> as below.</p>\n\n<div><div><pre><code><span>type</span> <span>station</span> <span>=</span> <span>{</span>\n <span>tiploc</span> <span>:</span> <span>string</span><span>;</span>\n <span>description</span> <span>:</span> <span>string</span><span>;</span>\n <span>workingTime</span> <span>:</span> <span>string</span><span>;</span>\n <span>publicTime</span> <span>:</span> <span>string</span><span>;</span>\n<span>}</span>\n<span>[</span><span>@@</span><span>deriving</span> <span>yojson</span><span>]</span>\n</code></pre></div></div>\n\n<p>The preprocessor automatically generates two functions:<code>station_of_json</code> and <code>station_to_json</code> which handle the conversion.</p>\n\n<p>The only negative on this approach is that RTT doesn\u2019t emit empty JSON fields, so they need to be flagged as possibly missing and a default value provided. 
For example, <code>realtimeArrivalNextDay</code> is not emitted unless the value is <code>true</code>.</p>\n\n<div><div><pre><code> <span>realtimeArrivalNextDay</span> <span>:</span> <span>(</span><span>bool</span><span>[</span><span>@</span><span>default</span> <span>false</span><span>]);</span>\n</code></pre></div></div>\n\n<p>Now once the JSON has been received we can just convert it to OCaml types very easily:</p>\n\n<div><div><pre><code> <span>match</span> <span>reply_of_yojson</span> <span>json</span> <span>with</span>\n <span>|</span> <span>Ok</span> <span>reply</span> <span>-></span>\n <span>(* Use reply.services *)</span>\n <span>|</span> <span>Error</span> <span>err</span> <span>-></span> <span>Printf</span><span>.</span><span>printf</span> <span>\"Error %s</span><span>\\n</span><span>\"</span> <span>err</span>\n</code></pre></div></div>\n\n<p>My work in progress code is available on <a href=\"https://github.com/mtelvers/ocaml-rtt\">GitHub</a></p>\n\n<div><div><pre><code>dune exec --release -- rtt --user USER --pass PASS --station RTR\nrtt: [DEBUG] received 3923 bytes of body\nrtt: [DEBUG] received 4096 bytes of body\nrtt: [DEBUG] received 4096 bytes of body\nrtt: [DEBUG] received 4096 bytes of body\nrtt: [DEBUG] received 1236 bytes of body\nrtt: [DEBUG] end of inbound body\n2025-03-23 2132 W16178 1C69 1 Ramsgate St Pancras International\n2025-03-23 2132 W25888 9P59 2 Plumstead Rainham (Kent)\n2025-03-23 2136 J00119 1U28 2 London Victoria Ramsgate\n2025-03-23 2144 W25927 9P86 1 Rainham (Kent) Plumstead\n2025-03-23 2157 W16899 1C66 2 St Pancras International Ramsgate\n2025-03-23 2202 W25894 9P61 2 Plumstead Rainham (Kent)\n2025-03-23 2210 J26398 1U80 1 Ramsgate London Victoria\n2025-03-23 2214 W25916 9P70 1 Rainham (Kent) Plumstead\n2025-03-23 2232 W16910 1C73 1 Ramsgate St Pancras International\n2025-03-23 2232 W25900 9P63 2 Plumstead Rainham (Kent)\n2025-03-23 2236 J00121 1U30 2 London Victoria Ramsgate\n2025-03-23 2244 W25277 9A92 1 Rainham (Kent) Dartford\n2025-03-23 2257 W16450 1F70 2 St Pancras International Faversham\n2025-03-23 2302 W25906 9P65 2 Plumstead Rainham (Kent)\n2025-03-23 2314 W25283 9A94 1 Rainham (Kent) Dartford\n2025-03-23 2318 J00155 1U82 1 Ramsgate London Victoria\n2025-03-23 2332 W25912 9P67 2 Plumstead Gillingham (Kent)\n2025-03-23 2336 J00123 1U32 2 London Victoria Ramsgate\n2025-03-23 2344 W25289 9A96 1 Rainham (Kent) Dartford\n2025-03-23 2357 W16475 1F74 2 St Pancras International Faversham\n2025-03-23 0002 W25915 9P69 2 Plumstead Gillingham (Kent)\n2025-03-23 0041 J26381 1Z34 2 London Victoria Faversham\n</code></pre></div></div>",
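<p>For contrast with the ppx approach, here is a small self-contained sketch of the manual <code>Yojson.Safe.Util</code> style mentioned above; the JSON snippet and field values are invented for illustration and are not the real RTT response shape:</p>

<div><div><pre><code>(* Illustration only: hand-rolled JSON access with Yojson.Safe.Util,
   the approach that became tedious compared with ppx_deriving_yojson. *)
let () =
  let json =
    Yojson.Safe.from_string
      {|{"services":[{"tiploc":"RTR","description":"an example"}]}|}
  in
  let open Yojson.Safe.Util in
  json |> member "services" |> to_list
  |> List.iter (fun svc ->
         Printf.printf "%s: %s\n"
           (svc |> member "tiploc" |> to_string)
           (svc |> member "description" |> to_string))
</code></pre></div></div>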
+20
mte/2025_03_24_recent-ocaml-version.json
+20
mte/2025_03_24_recent-ocaml-version.json
···+"summary": "Following my post on discuss.ocaml.org, I have created a new release of ocurrent/ocaml-version that moves the minimum version of OCaml, considered as recent, from 4.02 to 4.08.",+"content": "<p>Following my <a href=\"https://discuss.ocaml.org/t/docker-base-images-and-ocaml-ci-support-for-ocaml-4-08/16229\">post on discuss.ocaml.org</a>, I have created a new release of <a href=\"https://github.com/ocurrent/ocaml-version\">ocurrent/ocaml-version</a> that moves the minimum version of OCaml, considered as <em>recent</em>, from 4.02 to 4.08.</p>\n\n<div><div><pre><code><span>let</span> <span>recent</span> <span>=</span> <span>[</span> <span>v4_08</span><span>;</span> <span>v4_09</span><span>;</span> <span>v4_10</span><span>;</span> <span>v4_11</span><span>;</span> <span>v4_12</span><span>;</span> <span>v4_13</span><span>;</span> <span>v4_14</span><span>;</span> <span>v5_0</span><span>;</span> <span>v5_1</span><span>;</span> <span>v5_2</span><span>;</span> <span>v5_3</span> <span>]</span>\n</code></pre></div></div>\n\n<p>This may feel like a mundane change, but <a href=\"https://github.com/ocurrent/ocaml-ci\">OCaml-CI</a>, <a href=\"https://github.com/ocurrent/opam-repo-ci\">opam-repo-ci</a>, <a href=\"https://github.com/ocurrent/docker-base-images\">Docker base image builder</a> among other things, use this to determine the set of versions of OCaml to test against. Therefore, as these services are updated, testing on the old releases will be removed.</p>",
+20
mte/2025_03_25_topological-sort.json
+20
mte/2025_03_25_topological-sort.json
···+"summary": "Given a list of packages and their dependencies, what order should those packages be installed in?",+"content": "<p>Given a list of packages and their dependencies, what order should those packages be installed in?</p>\n\n<p>The above graph gives a simple example of the dependencies of the package <code>dune</code> nicely ordered right to left.</p>\n\n<p>We might choose to model this in OCaml using a map with the package name as the key and a set of the dependent packages:</p>\n\n<div><div><pre><code><span>module</span> <span>PackageSet</span> <span>=</span> <span>Set</span><span>.</span><span>Make</span> <span>(</span><span>String</span><span>);;</span>\n<span>module</span> <span>PackageMap</span> <span>=</span> <span>Map</span><span>.</span><span>Make</span> <span>(</span><span>String</span><span>);;</span>\n</code></pre></div></div>\n\n<p>Thus, the <code>dune</code> example could be defined like this.</p>\n\n<div><div><pre><code><span>let</span> <span>dune</span> <span>=</span> <span>PackageMap</span><span>.(</span><span>empty</span> <span>|></span>\n <span>add</span> <span>\"ocaml\"</span> <span>(</span><span>PackageSet</span><span>.(</span><span>empty</span> <span>|></span> <span>add</span> <span>\"ocaml-config\"</span> <span>|></span> <span>add</span> <span>\"ocaml-variants\"</span><span>))</span> <span>|></span>\n <span>add</span> <span>\"ocaml-config\"</span> <span>(</span><span>PackageSet</span><span>.(</span><span>empty</span> <span>|></span> <span>add</span> <span>\"ocaml-variants\"</span><span>))</span> <span>|></span>\n <span>add</span> <span>\"dune\"</span> <span>(</span><span>PackageSet</span><span>.(</span><span>empty</span> <span>|></span> <span>add</span> <span>\"ocaml\"</span> <span>|></span> <span>add</span> <span>\"base-unix.base\"</span> <span>|></span> <span>add</span> <span>\"base-threads.base\"</span><span>))</span> <span>|></span>\n <span>add</span> <span>\"ocaml-variants\"</span> <span>(</span><span>PackageSet</span><span>.</span><span>empty</span><span>)</span> <span>|></span>\n <span>add</span> <span>\"base-unix.base\"</span> <span>(</span><span>PackageSet</span><span>.</span><span>empty</span><span>)</span> <span>|></span>\n <span>add</span> <span>\"base-threads.base\"</span> <span>(</span><span>PackageSet</span><span>.</span><span>empty</span><span>)</span>\n <span>);;</span>\n</code></pre></div></div>\n\n<p>We can create a topological sort by first choosing any package with an empty set of dependencies. This package should then be removed from the map of packages and also removed as a dependency from any of the sets. 
This can be written concisely in OCaml</p>\n\n<div><div><pre><code><span>let</span> <span>rec</span> <span>topological_sort</span> <span>pkgs</span> <span>=</span>\n <span>match</span> <span>PackageMap</span><span>.</span><span>is_empty</span> <span>pkgs</span> <span>with</span>\n <span>|</span> <span>true</span> <span>-></span> <span>[]</span>\n <span>|</span> <span>false</span> <span>-></span>\n <span>let</span> <span>installable</span> <span>=</span> <span>PackageMap</span><span>.</span><span>filter</span> <span>(</span><span>fun</span> <span>_</span> <span>deps</span> <span>-></span> <span>PackageSet</span><span>.</span><span>is_empty</span> <span>deps</span><span>)</span> <span>pkgs</span> <span>in</span>\n <span>let</span> <span>()</span> <span>=</span> <span>assert</span> <span>(</span><span>not</span> <span>(</span><span>PackageMap</span><span>.</span><span>is_empty</span> <span>installable</span><span>))</span> <span>in</span>\n <span>let</span> <span>i</span> <span>=</span> <span>PackageMap</span><span>.</span><span>choose</span> <span>installable</span> <span>|></span> <span>fst</span> <span>in</span>\n <span>let</span> <span>pkgs</span> <span>=</span> <span>PackageMap</span><span>.</span><span>remove</span> <span>i</span> <span>pkgs</span> <span>|></span> <span>PackageMap</span><span>.</span><span>map</span> <span>(</span><span>fun</span> <span>deps</span> <span>-></span> <span>PackageSet</span><span>.</span><span>remove</span> <span>i</span> <span>deps</span><span>)</span> <span>in</span>\n <span>i</span> <span>::</span> <span>topological_sort</span> <span>pkgs</span>\n</code></pre></div></div>\n\n<p>This gives us the correct installation order:</p>\n\n<div><div><pre><code># topological_sort dune;;\n- : PackageMap.key list =\n[\"base-threads.base\"; \"base-unix.base\"; \"ocaml-variants\"; \"ocaml-config\"; \"ocaml\"; \"dune\"]\n</code></pre></div></div>",
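<p>One property worth noting: the <code>assert</code> guards against dependency cycles. With a cyclic input no package ever has an empty dependency set, so the sort stops with <code>Assert_failure</code> rather than looping forever. A tiny sketch with hypothetical packages:</p>

<div><div><pre><code>(* Hypothetical example: a cycle between "a" and "b" means the filter for
   installable packages is always empty, so the assert above fires. *)
let cyclic =
  PackageMap.(empty
    |> add "a" (PackageSet.singleton "b")
    |> add "b" (PackageSet.singleton "a"))

(* topological_sort cyclic;; raises Assert_failure *)
</code></pre></div></div>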
+20
mte/2025_03_26_freebsd-14.2.json
+20
mte/2025_03_26_freebsd-14.2.json
···+"content": "<p>CI workers <code>spring</code> and <code>summer</code> run FreeBSD and need to be updated.</p>\n\n<p>Check the current version of FreeBSD which we have with <code>uname -r</code>.</p>\n\n<div><div><pre><code>FreeBSD summer 14.1-RELEASE-p5 FreeBSD 14.1-RELEASE-p5 GENERIC amd64\n</code></pre></div></div>\n\n<p>Run <code>freebsd-update fetch</code> to download the latest versions of the system components, particularly the <code>freebsd-update</code> utility. It even reported that it really is time to upgrade!</p>\n\n<div><div><pre><code><span># freebsd-update fetch</span>\n...\nWARNING: FreeBSD 14.1-RELEASE-p5 is approaching its End-of-Life date.\nIt is strongly recommended that you upgrade to a newer\nrelease within the next 5 days.\n</code></pre></div></div>\n\n<p>Install these updates.</p>\n\n<div><div><pre><code>freebsd-update <span>install</span>\n</code></pre></div></div>\n\n<p>Now use <code>freebsd-update</code> to fetch the 14.2-RELEASE and install it.</p>\n\n<div><div><pre><code><span># freebsd-update upgrade -r 14.2-RELEASE</span>\n...\n<span>#\u00a0freebsd-update install</span>\nsrc component not installed, skipped\nInstalling updates...\nKernel updates have been installed. Please reboot and run\n<span>'freebsd-update [options] install'</span> again to finish installing updates.\n</code></pre></div></div>\n\n<p>Reboot the system using <code>reboot</code> and then finish installing updates.</p>\n\n<div><div><pre><code><span># freebsd-update install</span>\nsrc component not installed, skipped\nInstalling updates...\nRestarting sshd after upgrade\nPerforming sanity check on sshd configuration.\nStopping sshd.\nWaiting <span>for </span>PIDS: 707.\nPerforming sanity check on sshd configuration.\nStarting sshd.\nScanning /usr/share/certs/untrusted <span>for </span>certificates...\nScanning /usr/share/certs/trusted <span>for </span>certificates...\nScanning /usr/local/share/certs <span>for </span>certificates...\n <span>done</span><span>.</span>\n</code></pre></div></div>\n\n<p>Now use <code>pkg</code> to upgrade any applications.</p>\n\n<div><div><pre><code><span># pkg upgrade</span>\nUpdating FreeBSD repository catalogue...\nFetching data.pkg: 100% 7 MiB 7.5MB/s 00:01 \nProcessing entries: 100%\nFreeBSD repository update completed. 35885 packages processed.\nAll repositories are up to date.\nChecking <span>for </span>upgrades <span>(</span>28 candidates<span>)</span>: 100%\nProcessing candidates <span>(</span>28 candidates<span>)</span>: 100%\nThe following 28 package<span>(</span>s<span>)</span> will be affected <span>(</span>of 0 checked<span>)</span>:\n\nInstalled packages to be UPGRADED:\n\tcurl: 8.10.1 -> 8.11.1_1\n...\n\txxd: 9.1.0764 -> 9.1.1199\n\nNumber of packages to be upgraded: 28\n\nThe process will require 3 MiB more space.\n77 MiB to be downloaded.\n\nProceed with this action? 
<span>[</span>y/N]: y\n</code></pre></div></div>\n\n<p>Finally, reboot the system and check <code>uname -a</code>.</p>\n\n<div><div><pre><code><span># uname -a</span>\nFreeBSD spring 14.2-RELEASE-p1 FreeBSD 14.2-RELEASE-p1 GENERIC amd64\n</code></pre></div></div>\n\n<p>To update the FreeBSD base images used by the CI services, I applied <a href=\"https://github.com/ocurrent/freebsd-infra/pull/13\">PR#13</a> to <a href=\"https://github.com/ocurrent/freebsd-infra\">ocurrent/freebsd-infra</a>.</p>\n\n<p>This was followed up by <a href=\"https://github.com/ocurrent/ocaml-ci/pull/1007\">PR#1007</a> on ocurrent/ocaml-ci and <a href=\"https://github.com/ocurrent/opam-repo-ci/pull/427\">PR#427</a> to ocurrent/opam-repo-ci.</p>",
+20
mte/2025_03_27_dell-poweredge-r640.json
+20
mte/2025_03_27_dell-poweredge-r640.json
···+"summary": "We have received our first batch of 7.68TB Kingston SSD drives for deployment in some Dell PowerEdge R640 servers, which will be used to create a large storage pool.",+"content": "<p>We have received our first batch of 7.68TB Kingston SSD drives for deployment in some Dell PowerEdge R640 servers, which will be used to create a large storage pool.</p>\n\n<p>The first job was to mount each of the drives in a caddy.</p>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/kingston-with-caddy.png\"></p>\n\n<p>And then install them in the server.</p>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/kingston-in-slot.png\"></p>\n\n<p>These R640 servers are equipped with the Dell PERC H740P RAID controller. They support either hardware RAID 0,1,5,10,50 etc or Enhanced HBA mode.</p>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/r640-enhanced-hba.png\"></p>\n\n<p>In eHBA mode, the disks operate in a passthrough mode, presenting the raw disks to the OS, however each disk needs to be specifically selected in an additional step after enabling eHBA mode.</p>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/r640-jbod.png\"></p>\n\n<p>In RAID mode, one or more virtual disks need to be created to present the disks to the OS. Preconfigured profiles are available to complete this step easily.</p>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/r640-raid5.png\"></p>\n\n<p>We will run these with a ZFS file system, so need to decide on whether we want to use the hardware RAID features or follow the advice on Wikipedia on the <a href=\"https://en.wikipedia.org/wiki/ZFS#Avoidance_of_hardware_RAID_controllers\">Avoidance of hardware RAID controllers</a>. Online opinion is divided. My summary is that hardware RAID will be easier to manage when a disk fails, but ZFS on the raw disks will have some integrity advantages.</p>",
+20
mte/2025_03_30_box-diff.json
+20
mte/2025_03_30_box-diff.json
···+"summary": "Box has an unlimited storage model but has an upload limit of 1TB per month. I have been uploading various data silos but would now like to verify that the data is all present. Box has an extensive API, but I only need the list items in folder call.",+"content": "<p>Box has an unlimited storage model but has an upload limit of 1TB per month. I have been uploading various data silos but would now like to verify that the data is all present. Box has an extensive <a href=\"https://developer.box.com/reference/\">API</a>, but I only need the <a href=\"https://developer.box.com/reference/get-folders-id-items/\">list items in folder</a> call.</p>\n\n<p>The list-items call assumes that you have a folder ID which you would like to query. The root of the tree is always ID 0. To check for the presence of file <code>foo</code> in a folder tree <code>a/b/c/foo</code>, we need to call the API with folder ID 0. This returns a list of entries in that folder. e.g.</p>\n\n<div><div><pre><code><span>{</span><span>\n </span><span>\"entries\"</span><span>:</span><span> </span><span>[</span><span>\n </span><span>{</span><span>\n </span><span>\"id\"</span><span>:</span><span> </span><span>\"12345\"</span><span>,</span><span>\n </span><span>\"type\"</span><span>:</span><span> </span><span>\"folder\"</span><span>,</span><span>\n </span><span>\"name\"</span><span>:</span><span> </span><span>\"a\"</span><span>\n </span><span>}</span><span>\n </span><span>]</span><span>\n</span><span>}</span><span>\n</span></code></pre></div></div>\n\n<p>The API must now be called again with the new ID number to get the contents of folder <code>a</code>. This is repeated until we finally have the entries for folder <code>c</code> which would contain the file itself. I have used a <code>Hashtbl</code> to cache the results of each call.</p>\n\n<div><div><pre><code><span>{</span><span>\n </span><span>\"entries\"</span><span>:</span><span> </span><span>[</span><span>\n </span><span>{</span><span>\n </span><span>\"id\"</span><span>:</span><span> </span><span>\"78923434\"</span><span>,</span><span>\n </span><span>\"type\"</span><span>:</span><span> </span><span>\"file\"</span><span>,</span><span>\n </span><span>\"name\"</span><span>:</span><span> </span><span>\"foo\"</span><span>\n </span><span>}</span><span>\n </span><span>]</span><span>\n</span><span>}</span><span>\n</span></code></pre></div></div>\n\n<p>Each call defaults to returning at most 100 entries. This can be increased to a maximum of 1000 by passing <code>?limit=1000</code> to the GET request. For more results, Box offers two pagination systems: <code>offset</code> and <code>marker</code>. Offset allows you to pass a starting item number along with the call, but this is limited to 10,000 entries.</p>\n\n<blockquote>\n <p>Queries with offset parameter value exceeding 10000 will be rejected with a 400 response.</p>\n</blockquote>\n\n<p>To deal with folders of any size, we should use the marker system. For this, we pass <code>?usemarker=true</code> to the first GET request, which causes the API to return <code>next_marker</code> and <code>prev_marker</code> as required as additional JSON properties. Subsequent calls would use <code>?usemarker=true&marker=XXX</code>. The end is detected by the absence of the <code>next_marker</code> when no more entries are available.</p>\n\n<p>The project can be found on GitHub in <a href=\"https://github.com/mtelvers/ocaml-box-diff\">mtelvers/ocaml-box-diff</a>.</p>",
+20
mte/2025_03_31_opam-post-deps.json
+20
mte/2025_03_31_opam-post-deps.json
···+"summary": "Previously, I discussed the installation order for a simple directed acyclic graph without any cycles. However, opam packages include post dependencies. Rather than package A depending upon B where B would be installed first, post dependencies require X to be installed after Y. The post dependencies only occur in a small number of core OCaml packages. They are quite often empty and exist to direct the solver. Up until now, I had been using a base layer with an opam switch containing the base compiler and, therefore, did not need to deal with any post dependencies.",+"content": "<p>Previously, I discussed the installation order for a simple directed acyclic graph without any cycles. However, <code>opam</code> packages include <em>post</em> dependencies. Rather than package A depending upon B where B would be installed first, <em>post</em> dependencies require X to be installed after Y. The <em>post</em> dependencies only occur in a small number of core OCaml packages. They are quite often empty and exist to direct the solver. Up until now, I had been using a base layer with an opam switch containing the base compiler and, therefore, did not need to deal with any <em>post</em> dependencies.</p>\n\n<p>Here is the graph of <a href=\"https://www.tunbury.org/images/0install.2.18-with-post-with-colour.pdf\">0install</a> with <em>post</em> dependencies coloured in red.</p>\n\n<p>Removing the <em>post</em> dependencies gives an unsatisfying graph with orphaned dependencies. <a href=\"https://www.tunbury.org/images/0install.2.18-without-post.pdf\">0install without post</a>. Note <code>base-nnp.base</code> and <code>base-effects.base</code>. However, this graph can be used to produce a linear installation order. The orphaned packages can be removed with a recursive search.</p>\n\n<p>When opam wants to decide the installation order, it uses OCamlgraph\u2019s topological sort capability.</p>\n\n<blockquote>\n <p>This functor provides functions which allow iterating over a graph in topological order. Cycles in graphs are allowed. Specification is the following: If vertex [x] is visited before vertex [y] then either there is a path from [x] to [y], or there is no path from [y] to [x]. In the particular case of a DAG, this simplifies to: if there is an edge from [x] to [y], then [x] is visited before [y].</p>\n</blockquote>\n\n<p>The description of <code>fold</code> is particularly interesting as the order for cycles is unspecified.</p>\n\n<blockquote>\n <p>[fold action g seed] allows iterating over the graph [g] in topological order. [action node accu] is called repeatedly, where [node] is the node being visited, and [accu] is the result of the [action]\u2019s previous invocation, if any, and [seed] otherwise. If [g] contains cycles, the order is unspecified inside the cycles and every node in the cycles will be presented exactly once</p>\n</blockquote>\n\n<p>In my testing, the installation order matches the order used by opam within the variation allowed above.</p>\n\n<p>Layers can be built up using the intersection of packages installed so far and the required dependencies.</p>",
+20
mte/2025_04_01_go-docker.json
+20
mte/2025_04_01_go-docker.json
···+"summary": "For some time, we have had issues on Ubuntu Noble when extracting tar files within Docker containers. See ocaml/infrastructure#121. This is only an issue on exotic architectures like RISCV and PPC64LE.",+"content": "<p>For some time, we have had issues on Ubuntu Noble when extracting\ntar files within Docker containers. See\n<a href=\"https://github.com/ocaml/infrastructure/issues/121\">ocaml/infrastructure#121</a>.\nThis is only an issue on exotic architectures like RISCV and PPC64LE.</p>\n\n<div><div><pre><code><span># docker run --rm -it ubuntu:noble</span>\nroot@cf3491db4abd:/# <span>cd\n</span>root@cf3491db4abd:~# <span>mkdir </span>foo\nroot@cf3491db4abd:~# <span>tar</span> <span>-cf</span> bar.tar foo\nroot@cf3491db4abd:~# <span>rmdir </span>foo\nroot@cf3491db4abd:~# <span>tar</span> <span>-xf</span> bar.tar\n<span>tar</span>: foo: Cannot change mode to rwxr-xr-x: Operation not permitted\n<span>tar</span>: Exiting with failure status due to previous errors\n</code></pre></div></div>\n\n<p>The combination of Docker version and <code>libseccomp2</code> version prevents\nthe container from running the <code>fchmodat2</code> system call. There is a\nbug report on Ubuntu\u2019s bug tracker for the issue.</p>\n\n<p>I have been working around this by building Docker from scratch.</p>\n\n<div><div><pre><code>apt <span>install </span>golang\ngit clone https://github.com/moby/moby\n<span>cd </span>moby\n<span>AUTO_GOPATH</span><span>=</span>1 ./hack/make.sh binary\n<span>mv </span>bundles/binary-daemon/<span>*</span> /usr/bin/\nservice docker restart\n</code></pre></div></div>\n\n<p>When provisioning some new RISCV machines, I have once again hit this\nissue, but now the version of Go installed by <code>apt</code> on Ubuntu Noble is\ntoo old to build Docker!</p>\n\n<div><div><pre><code>go: vendor.mod requires go >= 1.23.0 (running go 1.22.2; GOTOOLCHAIN=local)\n</code></pre></div></div>\n\n<p>As this needs to be repeated multiple times, it makes sense\nto wrap the installation steps into an Ansible Playbook.\n<a href=\"https://gist.github.com/mtelvers/ced9d981b9137c491c95780390ce802c\">golang+docker.yml</a></p>",
+20
mte/2025_04_02_ubuntu-with-zfs-root.json
+20
mte/2025_04_02_ubuntu-with-zfs-root.json
···+"summary": "The installation of Ubuntu on ZFS contains about 50 steps of detailed configuration. I have 10 servers to install, so I would like to script this process as much as possible.",+"content": "<p>The installation of <a href=\"https://openzfs.github.io/openzfs-docs/Getting%20Started/Ubuntu/Ubuntu%2022.04%20Root%20on%20ZFS.html\">Ubuntu on ZFS</a>\ncontains about 50 steps of detailed configuration. I have 10 servers to install, so I would like to script this process as much as possible.</p>\n\n<p>To test my script, I have created a new VM on VMware ESXi with 10 x 16GB\ndisks, 16GB RAM, 4 vCPU. In the advanced options, I have set the boot to\nEFI and set <code>disk.EnableUUID = \"TRUE\"</code> in the <code>.vmx</code> file. Doing this\nensures that <code>/dev/disk</code> aliases are created in the guest.</p>\n\n<p>Boot Ubuntu 24.04 from the Live CD and install SSH.</p>\n\n<div><div><pre><code><span>sudo</span> <span>-i</span>\napt update\napt <span>install </span>openssh-server <span>-y</span>\n</code></pre></div></div>\n\n<p>Use <code>wget</code> to download https://github.com/mtelvers.keys into <code>~/.ssh/authorized_keys</code>.</p>\n\n<div><div><pre><code>wget https://github.com/mtelvers.keys <span>-O</span> ~/.ssh/authorized_keys\n</code></pre></div></div>\n\n<p>In your Ansible <code>hosts</code> file, add your new machine and its IP address</p>\n\n<div><div><pre><code>your.fqdn ansible_host=<ip>\n</code></pre></div></div>\n\n<p>Run the playbook with</p>\n\n<div><div><pre><code>ansible-playbook <span>-i</span> hosts <span>--limit</span> your.fqdn ubuntu-zfs.yml\n</code></pre></div></div>\n\n<p>The playbook is available as a GitHub gist <a href=\"https://gist.github.com/mtelvers/2cbeb5e35f43f5e461aa0c14c4a0a6b8\">zfs-ubuntu.yml</a>.</p>",
+20
mte/2025_04_03_kingston-drives.json
+20
mte/2025_04_03_kingston-drives.json
···+"summary": "We have received the second batch of 40 x 7.68TB Kingston SSD drives, bringing the total to 50 drives.",+"content": "<p>We have received the second batch of 40 x 7.68TB Kingston SSD drives, bringing the total to 50 drives.</p>\n\n<p>We now have 5 fully populated Dell PowerEdge R640 with a total raw capacity of 384TB.</p>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/kingston-forty-with-caddies.png\"></p>",
+20
mte/2025_04_04_opam-repo-ci.json
+20
mte/2025_04_04_opam-repo-ci.json
···+"summary": "It\u2019s Tuesday morning, and virtually all opam repo ci jobs are failing with timeouts. This comes at a critical time as these are the first jobs following the update of ocurrent/ocaml-version noted on 24th March.",+"content": "<p>It\u2019s Tuesday morning, and virtually all opam repo ci jobs are failing with timeouts. This comes at a critical time as these are the first jobs following the update of <a href=\"https://github.com/ocurrent/ocaml-version\">ocurrent/ocaml-version</a> <a href=\"https://www.tunbury.org/recent-ocaml-version/\">noted</a> on 24th March.</p>\n\n<p>The <a href=\"https://opam.ci.ocaml.org/github/ocaml/opam-repository\">opam repo ci</a> tests all PRs on <a href=\"https://github.com/ocaml/opam-repository\">opam-repository</a>. The pipeline downloads Docker images, which contain the root filesystem for various Linux distributions, architectures, and OCaml versions, which are used as the base environment to run the tests. These base images are created by the <a href=\"https://images.ci.ocaml.org\">base image builder</a>. <a href=\"https://github.com/ocurrent/docker-base-images/pull/317\">PR#317</a> update these base images in three ways:</p>\n\n<ul>\n <li>Images for OCaml < 4.08 were removed.</li>\n <li>The <code>opam-repository-archive</code> overlay was removed as this contained the < 4.08 opam packages.</li>\n <li>The <code>ocaml-patches-overlay</code> overlay was removed as this was only needed to build OCaml < 4.08 on GCC 14.</li>\n</ul>\n\n<p>Given these changes, I immediately assumed some element of these was the culprit.</p>\n\n<p>Here\u2019s an example of a failure as reported in the log.</p>\n\n<div><div><pre><code>2025-04-01 07:27.45 ---> using \"9dd47386dd0565c83eac2e9d589d75bdd268a7f34f3c854d1db189e7a2e5f77b\" from cache\n\n/: (user (uid 1000) (gid 1000))\n\n/: (workdir /home/opam)\n\n/home/opam: (run (shell \"sudo ln -f /usr/bin/opam-dev /usr/bin/opam\"))\n2025-04-01 07:27.45 ---> using \"132d861be153666fd67b2e16b21c4de16e15e26f8d7d42f3bcddf0360ad147be\" from cache\n\n/home/opam: (run (network host)\n (shell \"opam init --reinit --config .opamrc-sandbox -ni\"))\nConfiguring from /home/opam/.opamrc-sandbox, then /home/opam/.opamrc, and finally from built-in defaults.\nChecking for available remotes: rsync and local, git.\n - you won't be able to use mercurial repositories unless you install the hg command on your system.\n - you won't be able to use darcs repositories unless you install the darcs command on your system.\n\nThis development version of opam requires an update to the layout of /home/opam/.opam from version 2.0 to version 2.2, which can't be reverted.\nYou may want to back it up before going further.\n\nContinue? [Y/n] y\n[NOTE] The 'jobs' option was reset, its value was 39 and its new value will vary according to the current number of cores on your machine. 
You can restore the fixed value using:\n opam option jobs=39 --global\nFormat upgrade done.\n\n<><> Updating repositories ><><><><><><><><><><><><><><><><><><><><><><><><><><>\n2025-04-01 09:27.34: Cancelling: Timeout (120.0 minutes)\nJob cancelled\n2025-04-01 09:27.40: Timeout (120.0 minutes)\n</code></pre></div></div>\n\n<p>With nearly all jobs taking 2 hours to run, the cluster was understandably backlogged!</p>\n\n<p>The issue could be reproduced with this Dockerfile:</p>\n\n<div><div><pre><code>cd $(mktemp -d)\ngit clone --recursive \"https://github.com/ocaml/opam-repository.git\" && cd \"opam-repository\" && git fetch origin \"refs/pull/27696/head\" && git reset --hard 46b8cc5a\ngit fetch origin master\ngit merge --no-edit 4d8fa0fb8fce3b6c8b06f29ebcfa844c292d4f3e\ncat > ../Dockerfile <<'END-OF-DOCKERFILE'\nFROM ocaml/opam:debian-12-ocaml-4.09@sha256:13bd7f0979922adb13049eecc387d65d7846a3058f7dd6509738933e88bc8d4a\nUSER 1000:1000\nWORKDIR /home/opam\nRUN sudo ln -f /usr/bin/opam-dev /usr/bin/opam\nRUN opam init --reinit -ni\nRUN opam option solver=builtin-0install && opam config report\nENV OPAMDOWNLOADJOBS=\"1\"\nENV OPAMERRLOGLEN=\"0\"\nENV OPAMPRECISETRACKING=\"1\"\nENV CI=\"true\"\nENV OPAM_REPO_CI=\"true\"\nRUN rm -rf opam-repository/\nCOPY --chown=1000:1000 . opam-repository/\nRUN opam repository set-url --strict default opam-repository/\nRUN opam update --depexts || true\nRUN opam pin add -k version -yn chrome-trace.3.18.0~alpha0 3.18.0~alpha0\nRUN opam reinstall chrome-trace.3.18.0~alpha0; \\\n res=$?; \\\n test \"$res\" != 31 && exit \"$res\"; \\\n export OPAMCLI=2.0; \\\n build_dir=$(opam var prefix)/.opam-switch/build; \\\n failed=$(ls \"$build_dir\"); \\\n partial_fails=\"\"; \\\n for pkg in $failed; do \\\n if opam show -f x-ci-accept-failures: \"$pkg\" | grep -qF \"\\\"debian-12\\\"\"; then \\\n echo \"A package failed and has been disabled for CI using the 'x-ci-accept-failures' field.\"; \\\n fi; \\\n test \"$pkg\" != 'chrome-trace.3.18.0~alpha0' && partial_fails=\"$partial_fails $pkg\"; \\\n done; \\\n test \"${partial_fails}\" != \"\" && echo \"opam-repo-ci detected dependencies failing: ${partial_fails}\"; \\\n exit 1\n\nEND-OF-DOCKERFILE\ndocker build -f ../Dockerfile .\n</code></pre></div></div>\n\n<p>It was interesting to note which jobs still work. For example, builds on macOS and FreeBSD ran normally. This makes sense as these architectures don\u2019t use the Docker base images. Looking further, opam repo ci attempts builds on opam 2.0, 2.1, 2.2, and 2.3 on Debian. These builds succeeded. Interesting. All the other builds use the latest version of opam built from the head of the master branch.</p>\n\n<p>Taking the failing Dockerfile above and replacing <code>sudo ln -f /usr/bin/opam-dev /usr/bin/opam</code> with <code>sudo ln -f /usr/bin/opam-2.3 /usr/bin/opam</code> immediately fixed the issue!</p>\n\n<p>I pushed commit <a href=\"https://github.com/ocurrent/opam-repo-ci/commit/7174953145735a54ecf668c7387e57b3f2d2a411\">7174953</a> to force opam repo ci to use opam 2.3 and opened <a href=\"https://github.com/ocaml/opam/issues/6448\">issue#6448</a> on ocaml/opam. The working theory is that some change associated with <a href=\"https://github.com/ocaml/opam/pull/5892\">PR#5892</a>, which replace GNU patch with the OCaml patch library is the root cause.</p>\n\n<p>Musing on this issue with David, the idea of using the latest tag rather than head commit seemed like a good compromise. 
This allowed us to specifically test pre-release versions of opam when they were tagged, without being at the cutting edge and risking impact to a key service.</p>\n\n<p>We need the latest tag by version number, not by date, as we wouldn\u2019t want to revert to testing on, for example, 2.1.7 if something caused a new release of the 2.1 series. The result was a function which runs <code>git tag --format %(objectname) %(refname:strip=2)</code> and semantically sorts the version numbers using <code>OpamVersion.compare</code>. See <a href=\"https://github.com/ocurrent/docker-base-images/pull/318\">PR#318</a>.</p>",
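<p>A sketch of that selection (not the code in PR#318): parse the <code>git tag --format '%(objectname) %(refname:strip=2)'</code> output and keep the commit whose tag compares highest with <code>OpamVersion.compare</code>, assuming the tag names are plain version numbers and using <code>OpamVersion.of_string</code> from opam-core to parse them:</p>

<div><div><pre><code>(* Illustrative sketch: pick the commit of the semantically newest tag from
   lines of the form "objectname tag". *)
let latest_tag lines =
  List.filter_map
    (fun line ->
      match String.split_on_char ' ' line with
      | [ commit; tag ] -> Some (commit, OpamVersion.of_string tag)
      | _ -> None)
    lines
  |> List.fold_left
       (fun best (commit, v) ->
         match best with
         | Some (_, bv) when OpamVersion.compare bv v >= 0 -> best
         | _ -> Some (commit, v))
       None
</code></pre></div></div>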
+20
mte/2025_04_07_ocaml-claude-box.json
+20
mte/2025_04_07_ocaml-claude-box.json
···+"summary": "Over the weekend, I decided to extend my Box tool to incorporate file upload. There is a straightforward POST API for this with a curl one-liner given in the Box documentation. Easy.",+"content": "<p>Over the weekend, I decided to extend my <a href=\"https://box.com\">Box</a> <a href=\"https://github.com/mtelvers/ocaml-box-diff\">tool</a> to incorporate file upload. There is a straightforward POST API for this with a <code>curl</code> one-liner given in the Box <a href=\"https://developer.box.com/reference/post-files-content/\">documentation</a>. Easy.</p>\n\n<p>The documentation for <a href=\"https://mirage.github.io/ocaml-cohttp/cohttp-eio/Cohttp_eio/Client/index.html\">Cohttp-eio.Client</a> only gives the function signature for <code>post</code>, but it looked pretty similar to <code>get</code>, which I had already been working with. The <a href=\"https://github.com/mirage/ocaml-cohttp\">README</a> for Cohttp gave me pause when I read this comment about multipart forms.</p>\n\n<blockquote>\n <p>Multipart form data is not supported out of the box but is provided by external libraries</p>\n</blockquote>\n\n<p>Of the three options given, the second option looked abandoned, while the third said it didn\u2019t support streaming, so I went with the first one <a href=\"https://github.com/dinosaure/multipart_form\">dionsaure/multipart_form</a>.</p>\n\n<p>The landing page included an example encoder. A couple of external functions are mentioned, and I found example code for these in <a href=\"https://github.com/dinosaure/multipart_form/blob/main/test/test.ml\">test/test.ml</a>. This built, but didn\u2019t work against Box. I ran <code>nc -l 127.0.0.1 6789</code> and set that as the API endpoint for both the <code>curl</code> and my application. This showed I was missing the <code>Content-Type</code> header in the part boundary. It should be <code>application/octet-stream</code>.</p>\n\n<p>There is a <code>~header</code> parameter to <code>part</code>, and I hoped for a <code>Header.add</code> like the <code>Cohttp</code>, but sadly not. See the <a href=\"https://ocaml.org/p/multipart_form/latest/doc/Multipart_form/Header/index.html\">documentation</a>. There is <code>Header.content_type</code>, but that returns the content type. How do you make it? <code>Header.of_list</code> requires a <code>Field.field list</code>.</p>\n\n<p>In a bit of frustration, I decided to ask Claude. I\u2019ve not tried it before, but I\u2019ve seen some impressive demonstrations. My first lesson here was to be specific. Claude is not a mind reader. After a few questions, I got to this:</p>\n\n<div><div><pre><code><span>Field</span><span>.(</span><span>make</span> <span>Content_type</span><span>.</span><span>name</span> <span>(</span><span>Content_type</span><span>.</span><span>v</span> <span>`Application</span> <span>`Octet_stream</span><span>));</span>\n</code></pre></div></div>\n\n<p>I can see why this was suggested as <code>Content_disposition.v</code> exists, but <code>Content_type.v</code> does not, nor does <code>Field.make</code>. Claude quickly obliged with a new version when I pointed this out but added the <code>Content_type</code> to the HTTP header rather than the boundary header. This went back and forth for a while, with Claude repeatedly suggesting functions which did not exist. 
I gave up.</p>\n\n<p>On OCaml.org, the <a href=\"https://ocaml.org/p/multipart_form/latest\">multipart-form</a> documentation includes a <em>Used by</em> section that listed <code>dream</code> as the only (external) application which used the library. From the source, I could see <code>Field.Field (field_name, Field.Content_type, v)</code>, which looked good.</p>\n\n<p>There is a function <code>Content_type.of_string</code>. I used <code>:MerlinLocate</code> to find the source, which turned out to be an Angstrom parser which returns a <code>Content_type.t</code>. This led me to <code>Content_type.make</code>, and ultimately, I was able to write these two lines:</p>\n\n<div><div><pre><code><span>let</span> <span>v</span> <span>=</span> <span>Content_type</span><span>.</span><span>make</span> <span>`Application</span> <span>(</span><span>`Iana_token</span> <span>\"octet-stream\"</span><span>)</span> <span>Content_type</span><span>.</span><span>Parameters</span><span>.</span><span>empty</span>\n<span>let</span> <span>p0</span> <span>=</span> <span>part</span> <span>~</span><span>header</span><span>:</span><span>(</span><span>Header</span><span>.</span><span>of_list</span> <span>[</span> <span>Field</span> <span>(</span><span>Field_name</span><span>.</span><span>content_type</span><span>,</span> <span>Content_type</span><span>,</span> <span>v</span><span>)</span> <span>])</span> <span>...</span>\n</code></pre></div></div>\n\n<p>As a relatively new adopter of OCaml as my language of choice, the most significant challenge I face is documentation, particularly when I find a library on opam which I want to use. I find this an interesting contrast to the others in the community, where it is often cited that tooling is the most significant barrier to adoption. In my opinion, the time taken to set up a build environment is dwarfed by the time spent in that environment iterating code.</p>\n\n<p>I would like to take this opportunity to thank all contributors to opam repository for their time and effort in making packages available. This post mentions specific packages but only to illustrate my point.</p>",
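<p>The <code>nc</code> capture trick mentioned above is worth spelling out, since it is what exposed the missing part header. A hedged sketch follows: the token, file name and <code>attributes</code> JSON are placeholders, and the flags mirror the Box <em>post-files-content</em> example. Run the listener, point <code>curl</code> at it, repeat with the OCaml client on the same port, and diff the two captures.</p>

<div><div><pre><code># capture what curl sends; curl will hang waiting for a reply, so give it a timeout
nc -l 127.0.0.1 6789 > curl-request.txt &
curl --max-time 2 -X POST "http://127.0.0.1:6789/api/2.0/files/content" \
     -H "authorization: Bearer $TOKEN" \
     -F attributes='{"name":"test.txt","parent":{"id":"0"}}' \
     -F file=@test.txt
# then repeat with the OCaml client pointed at 127.0.0.1:6789 and diff the two captures
</code></pre></div></div>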
+20
mte/2025_04_10_dell-r640-installation.json
···+"summary": "Today we have racked the five 14th generation Dell R640 servers and a Dell N4032 switch.",+"content": "<p>Today we have racked the five 14th generation Dell R640 servers and a Dell N4032 switch.</p>\n\n<p>When inspecting the rack rails, I noticed that some of the left-hand rails had an extra tab on them while the others did not. For the first server, I used a rail with a tab only to discover that the tab prohibited the server from being pushed in all the way. The tabs were easily removed but the server needed to be removed from the rack first.</p>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/dell-r640-rail.jpg\"></p>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/dell-r640-rail-removal.jpg\"></p>\n\n<p>First server installed</p>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/dell-r640-first-one.jpg\"></p>\n\n<p>The last server on the rails</p>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/dell-r640-last-one.jpg\"></p>\n\n<p>Front view</p>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/dell-r640-front-view.jpg\"></p>\n\n<p>Rear view</p>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/dell-r640-rear-view.jpg\"></p>\n\n<p>Cabling</p>\n\n<ul>\n <li>Yellow CAT5 for iDRAC ports</li>\n <li>Red CAT6 for 10GBase-T</li>\n</ul>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/dell-r640-cabled.jpg\"></p>\n\n<p>The initial iDRAC configuration was carried out using a crash cart.</p>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/dell-r640-idrac-config.jpg\"></p>\n\n<p>The servers are called:</p>\n\n<ul>\n <li>myrina</li>\n <li>thalestris</li>\n <li>lampedo</li>\n <li>otrera</li>\n <li>antiope</li>\n</ul>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/dell-r640-final.jpg\"></p>\n\n<p>We had some difficulty with the 40G uplink from the switch and we could only get the link to come up by splitting it into 4 x 10G channels, as follows.</p>\n\n<div><div><pre><code>console>enable\nconsole#configure\nconsole(config)#interface Fo1/1/1\nconsole(config-if-Fo1/1/1)#hardware profile portmode 4x10g\n</code></pre></div></div>\n\n<p>Then rebooting with <code>do reload</code>. The 4 x 10G uplinks has been configured as an LACP port channel (Po1).</p>\n\n<h1>R640 Configuration</h1>\n\n<p>Each server has:</p>\n\n<ul>\n <li>2 x Intel Xeon Gold 6244 3.6G 8C / 16T</li>\n <li>8 x 16GB DIMM</li>\n <li>10 x Kingston 7.68TB SSD</li>\n</ul>\n\n<p><a href=\"https://www.dell.com/support/manuals/en-uk/poweredge-r640/per640_ism_pub/general-memory-module-installation-guidelines?guid=guid-acbc0f13-dedb-492b-a0b0-18303ded565a&lang=en-us\">Dell R640 has 24 DIMM slots</a></p>",
+20
mte/2025_04_11_dell-r640-ubuntu.json
···+"summary": "I could have scripted this via Ansible, but there would always be a manual element, such as configuring the H740P controller and booting from the network to get to the point where you can SSH to the machine. Therefore, I decided to just document the steps required.",+"content": "<p>I could have scripted this via Ansible, but there would always be a manual element, such as configuring the H740P controller and booting from the network to get to the point where you can SSH to the machine. Therefore, I decided to just document the steps required.</p>\n\n<p>After powering the system on, press F2 to open setup and follow this path through the menu</p>\n\n<div><div><pre><code>Device Configuration > Integrated RAID Controller H740P > Configure > Clear Configuration\n</code></pre></div></div>\n\n<p>then</p>\n\n<div><div><pre><code>View Server Profile > Controller Management > Advanced Controller Management > Manage Controller Mode\n</code></pre></div></div>\n\n<p>Choose <code>Switch to Enhanced HBA Controller Mode</code>, then Confirm and reset the system.</p>\n\n<p>Boot to the Ubuntu installer. I used <code>netboot.xyz</code> running in a Docker container.</p>\n\n<p>I will use a software RAID set configured by <code>mdadm</code> for the Ubuntu root drive. In this configuration, the EFI partition needs special attention as EFI does not understand software RAID. GRUB can be configured to create and update multiple copies of the EFI partition. For consistency, I will create an EFI partition on all the drives.</p>\n\n<p>These commands will create the following partitions:</p>\n\n<div><div><pre><code><span>for </span>a <span>in </span>sd<span>{</span>a..j<span>}</span> <span>;</span> <span>do </span>sgdisk <span>-n1</span>:1M:+512M <span>-t1</span>:EF00 /dev/<span>$a</span> <span>;</span> <span>done\nfor </span>a <span>in </span>sd<span>{</span>a..j<span>}</span> <span>;</span> <span>do </span>sgdisk <span>-n2</span>:0:+16G <span>-t2</span>:FD00 /dev/<span>$a</span> <span>;</span> <span>done\nfor </span>a <span>in </span>sd<span>{</span>a..j<span>}</span> <span>;</span> <span>do </span>sgdisk <span>-n3</span>:0:0 <span>-t3</span>:BF00 /dev/<span>$a</span> <span>;</span> <span>done</span>\n</code></pre></div></div>\n\n<p>Next, format the EFI drives with a DOS filesystem and create the RAID device:</p>\n\n<div><div><pre><code><span>for </span>a <span>in </span>sd<span>{</span>a..j<span>}</span> <span>;</span> <span>do </span>mkdosfs <span>-F</span> 32 <span>-s</span> 1 <span>-n</span> EFI /dev/<span>${</span><span>a</span><span>}</span>1 <span>;</span> <span>done\n</span>mdadm <span>--create</span> /dev/md0 <span>--metadata</span><span>=</span>1.2 <span>--level</span><span>=</span>raid5 <span>--raid-devices</span><span>=</span>10 /dev/sd[a-j]2\n</code></pre></div></div>\n\n<p>Check the partition tables with <code>sgdisk -p /dev/sda</code>, and the soft RAID setup with <code>cat /proc/mdstat</code>.</p>\n\n<p>Install Ubuntu via the setup program selecting the software RAID as the root volume and the first drive as the boot drive.</p>\n\n<p>After the system reboots, delete the current EFI entries from <code>/etc/fstab</code>:</p>\n\n<div><div><pre><code>umount /boot/efi\n<span>sed</span> <span>-i</span> <span>'/\\/efi/d'</span> /etc/fstab\n</code></pre></div></div>\n\n<p>Then add the entries for <code>/dev/sda1</code> and <code>/dev/sdb1</code>.</p>\n\n<div><div><pre><code><span>echo</span> /dev/disk/by-uuid/<span>$(</span>blkid <span>-s</span> UUID <span>-o</span> value 
/dev/sda1<span>)</span> /boot/efi vfat defaults 0 0 <span>>></span> /etc/fstab\n<span>mkdir</span> <span>-p</span> /boot/efi-alt\n<span>echo</span> /dev/disk/by-uuid/<span>$(</span>blkid <span>-s</span> UUID <span>-o</span> value /dev/sdb1<span>)</span> /boot/efi-alt vfat defaults 0 0 <span>>></span> /etc/fstab\nsystemctl daemon-reload\nmount <span>-a</span>\n</code></pre></div></div>\n\n<p>Run <code>dpkg-reconfigure grub-efi-amd64</code> to configure GRUB. Accept all of the defaults and select <code>/dev/sda1</code> and <code>/dev/sdb1</code> as the boot drives. Reboot the system.</p>\n\n<p>After the reboot, install the ZFS utils.</p>\n\n<div><div><pre><code>apt <span>install </span>zfsutils-linux\n</code></pre></div></div>\n\n<p>Create a ZFS <em>tank</em> using the <em>by-id</em> values.</p>\n\n<div><div><pre><code>zpool create <span>\\</span>\n <span>-o</span> <span>ashift</span><span>=</span>12 <span>\\</span>\n <span>-o</span> <span>autotrim</span><span>=</span>on <span>\\</span>\n <span>-O</span> <span>acltype</span><span>=</span>posixacl <span>-O</span> <span>xattr</span><span>=</span>sa <span>-O</span> <span>dnodesize</span><span>=</span>auto <span>\\</span>\n <span>-O</span> <span>normalization</span><span>=</span>formD <span>\\</span>\n <span>-O</span> <span>relatime</span><span>=</span>on <span>\\</span>\n tank raidz /dev/disk/by-id/wwn-<span>*</span><span>-part3</span>\n</code></pre></div></div>\n\n<p>Check it is available:</p>\n\n<div><div><pre><code><span># zfs list</span>\nNAME USED AVAIL REFER MOUNTPOINT\ntank 789K 61.8T 171K /tank\n</code></pre></div></div>",
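<p>A few quick read-only checks after the final reboot confirm that each layer came up as intended; these assume the device and pool names used above.</p>

<div><div><pre><code>cat /proc/mdstat        # md0 should be an active raid5 across sd[a-j]2
efibootmgr -v           # one boot entry per EFI partition selected in dpkg-reconfigure
zpool status tank       # raidz health across the *-part3 devices
</code></pre></div></div>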
+20
mte/2025_04_12_box-diff.json
···+"summary": "Over the weekend, I extended mtelvers/ocaml-box-diff to include the ability to upload files over 50MB. This is a more complex API which requires a call to https://upload.box.com/api/2.0/files/upload_sessions by posting JSON containing the name of the file, the folder ID and the file size. Box replies with various session endpoints which give the URIs to use to upload the parts and to commit the file. Box also specifies the size of each part.",+"content": "<p>Over the weekend, I extended <a href=\"https://github.com/mtelvers/ocaml-box-diff\">mtelvers/ocaml-box-diff</a> to include the ability to upload files over 50MB. This is a more complex API which requires a call to <a href=\"https://upload.box.com/api/2.0/files/upload_sessions\">https://upload.box.com/api/2.0/files/upload_sessions</a> by posting JSON containing the name of the file, the folder ID and the file size. Box replies with various <em>session endpoints</em> which give the URIs to use to upload the parts and to commit the file. Box also specifies the size of each part.</p>\n\n<p>Each part is uploaded with an HTTP PUT of the binary data, with header fields giving the byte range within the overall file along with the SHA for this chunk. Box replies with a part identifier. Once all the parts have been uploaded, an HTTP POST is required to the commit URI, passing a JSON array of all the parts as well as the overall SHA for the file.</p>\n\n<p>I was pleased to be able to reuse <code>stream_of_file</code>, which was written for the small file upload. Additionally, I was able to keep a running total SHA for the data uploaded so far using <code>Sha1.update_string ctx chunk</code>, meaning that I did not need to recompute the overall file SHA at the end.</p>",
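<p>For anyone following along without the OCaml tool, this is roughly what a single part upload looks like as a <code>curl</code> sketch, based on my reading of the Box chunked-upload documentation. The session ID, token, part file and sizes are placeholders, and the real URIs should be taken from the session endpoints Box returns.</p>

<div><div><pre><code>SHA=$(openssl dgst -sha1 -binary part-000 | base64)   # Box expects the part SHA-1, base64 encoded
curl -X PUT "https://upload.box.com/api/2.0/files/upload_sessions/$SESSION" \
     -H "authorization: Bearer $TOKEN" \
     -H "digest: sha=$SHA" \
     -H "content-range: bytes 0-8388607/104857600" \
     -H "content-type: application/octet-stream" \
     --data-binary @part-000
</code></pre></div></div>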
+20
mte/2025_04_13_gnu-parallel.json
···+"summary": "If you haven\u2019t used it before, or perhaps it has been so long that it has been swapped out to disk, let me commend GNU\u2019s Parallel to you.",+"content": "<p>If you haven\u2019t used it before, or perhaps it has been so long that it has been swapped out to disk, let me commend GNU\u2019s <a href=\"https://www.gnu.org/software/parallel/parallel.html\">Parallel</a> to you.</p>\n\n<p>Parallel executes shell commands in parallel! A trivial example would be <code>parallel echo ::: A B C</code>, which runs <code>echo A</code>, <code>echo B</code> and <code>echo C</code>. <code>{}</code> can be used as a placeholder for the parameter in cases where it isn\u2019t simply appended to the command line.</p>\n\n<p>Multiple parameters can be read from an input file using four colons, <code>parallel echo :::: params_file</code>. This is particularly useful as it correctly deals with parameters/file names with spaces. For example, create a tab-delimited list of source and destination paths in <code>paths.tsv</code> and then run:</p>\n\n<div><div><pre><code>parallel <span>--jobs</span> 8 <span>--colsep</span> <span>'\\t'</span> <span>--progress</span> rsync <span>-avh</span> <span>{</span>1<span>}</span> <span>{</span>2<span>}</span> :::: paths.tsv\n</code></pre></div></div>",
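<p>Two small habits make this safer: build <code>paths.tsv</code> with <code>printf</code> so embedded spaces survive, and preview the generated commands with <code>--dry-run</code> before letting eight <code>rsync</code> processes loose. The paths below are made-up examples.</p>

<div><div><pre><code>printf '%s\t%s\n' '/data/old projects/' '/backup/old projects/' >> paths.tsv
parallel --dry-run --colsep '\t' rsync -avh {1} {2} :::: paths.tsv
</code></pre></div></div>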
+20
mte/2025_04_14_slurm-workload-manager.json
···+"summary": "Sadiq mentioned slurm as a possible way to better schedule the group\u2019s compute resources. Many resources are available showing how to create batch jobs for Slurm clusters but far fewer on how to set up a cluster. This is a quick walkthrough of the basic steps to set up a two-node compute cluster on Ubuntu 24.04. Note that slurmd and slurmctld can run on the same machine.",+"content": "<p>Sadiq mentioned <code>slurm</code> as a possible way to better schedule the group\u2019s compute resources. Many resources are available showing how to create batch jobs for Slurm clusters but far fewer on how to set up a cluster. This is a quick walkthrough of the basic steps to set up a two-node compute cluster on Ubuntu 24.04. Note that <code>slurmd</code> and <code>slurmctld</code> can run on the same machine.</p>\n\n<p>Create three VMs: <code>node1</code>, <code>node2</code> and <code>head</code>.</p>\n\n<p>On <code>head</code>, install these components.</p>\n\n<div><div><pre><code>apt <span>install </span>munge slurmd slurmctld\n</code></pre></div></div>\n\n<p>On <code>node1</code> and <code>node2</code> install.</p>\n\n<div><div><pre><code>apt <span>install </span>munge slurmd\n</code></pre></div></div>\n\n<p>Copy <code>/etc/munge/munge.key</code> from <code>head</code> to the same location on <code>node1</code> and <code>node2</code>. Then restart <code>munge</code> on the other nodes with <code>service munge restart</code>.</p>\n\n<p>You should now be able to <code>munge -n | unmunge</code> without error. This should also work via SSH. i.e. <code>ssh head munge -n | ssh node1 unmunge</code></p>\n\n<p>If you don\u2019t have DNS, add <code>node1</code> and <code>node2</code> to the <code>/etc/hosts</code> file on <code>head</code> and add <code>head</code> to the <code>/etc/hosts</code> on <code>node1</code> and <code>node2</code>.</p>\n\n<p>On <code>head</code>, create the daemon spool directory:</p>\n\n<div><div><pre><code><span>mkdir</span> /var/spool/slurmctld\n<span>chown</span> <span>-R</span> slurm:slurm /var/spool/slurmctld/\n<span>chmod </span>775 /var/spool/slurmctld/\n</code></pre></div></div>\n\n<p>Create <code>/etc/slurm/slurm.conf</code>, as below. Update the compute node section by running <code>slurmd -C</code> on each node to generate the configuration line. This file should be propagated to all the machines. 
The configuration file can be created using this <a href=\"https://slurm.schedmd.com/configurator.html\">tool</a>.</p>\n\n<div><div><pre><code>ClusterName=cluster\nSlurmctldHost=head\nProctrackType=proctrack/linuxproc\nReturnToService=1\nSlurmctldPidFile=/var/run/slurmctld.pid\nSlurmctldPort=6817\nSlurmdPidFile=/var/run/slurmd.pid\nSlurmdPort=6818\nSlurmdSpoolDir=/var/spool/slurmd\nSlurmUser=slurm\nStateSaveLocation=/var/spool/slurmctld\nTaskPlugin=task/affinity,task/cgroup\n\n# TIMERS\nInactiveLimit=0\nKillWait=30\nMinJobAge=300\nSlurmctldTimeout=120\nSlurmdTimeout=300\nWaittime=0\n\n# SCHEDULING\nSchedulerType=sched/backfill\nSelectType=select/cons_tres\n\n# LOGGING AND ACCOUNTING\nJobCompType=jobcomp/none\nJobAcctGatherFrequency=30\nSlurmctldDebug=info\nSlurmctldLogFile=/var/log/slurmctld.log\nSlurmdDebug=info\nSlurmdLogFile=/var/log/slurmd.log\n\n# COMPUTE NODES\nNodeName=node1 CPUs=1 Boards=1 SocketsPerBoard=1 CoresPerSocket=1 ThreadsPerCore=1 RealMemory=1963\nNodeName=node2 CPUs=1 Boards=1 SocketsPerBoard=1 CoresPerSocket=1 ThreadsPerCore=1 RealMemory=1963\nPartitionName=debug Nodes=ALL Default=YES MaxTime=INFINITE State=UP\n</code></pre></div></div>\n\n<p>On <code>head</code>, start the control daemon.</p>\n\n<div><div><pre><code>service slurmctld start\n</code></pre></div></div>\n\n<p>And on the nodes, start the slurm daemon.</p>\n\n<div><div><pre><code>service slurmd start\n</code></pre></div></div>\n\n<p>From <code>head</code>, you can now run a command simultaneously on both nodes.</p>\n\n<div><div><pre><code><span># srun -N2 -l /bin/hostname</span>\n0: node1\n1: node2\n</code></pre></div></div>\n\n<p>The optional <code>Gres</code> parameter on <code>NodeName</code> allows nodes to be configured with extra resources such as GPUs.</p>\n\n<p>Typical configurations use an NFS server to make /home available on all the nodes. Note that users only need to be created on the head node and don\u2019t need SSH access to the compute nodes.</p>",
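<p>A minimal batch job is a quick way to confirm the scheduler is accepting and placing work; the job name and output pattern here are arbitrary.</p>

<div><div><pre><code>cat > hello.sbatch <<'EOF'
#!/bin/bash
#SBATCH --job-name=hello
#SBATCH --output=%x-%j.out
srun hostname
EOF
sbatch hello.sbatch
squeue    # pending/running jobs
sinfo     # node and partition state
</code></pre></div></div>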
+20
mte/2025_04_16_ubuntu-cloud-init.json
···+"summary": "Testing cloud-init is painful on real (server) hardware, as the faster the server, the longer it seems to take to complete POST. Therefore, I highly recommend testing with a virtual machine before moving to real hardware.",+"content": "<p>Testing cloud-init is painful on real (server) hardware, as the faster the server, the longer it seems to take to complete POST. Therefore, I highly recommend testing with a virtual machine before moving to real hardware.</p>\n\n<p>I have set up a QEMU machine to simulate the Dell R640 machines with 10 x 8T disks. I\u2019ll need to set up and tear this machine down several times for testing, so I have wrapped the setup commands into a <code>Makefile</code>. QCOW2 is a thin format, so you don\u2019t actually need 80T of disk space to do this!</p>\n\n<p>The Dell machines use EFI, so I have used EFI on the QEMU machine. Note the <code>OVMF</code> lines in the configuration. Ensure that you emulate a hard disk controller, which is supported by the EFI BIOS. For example, <code>-device megasas,id=scsi0</code> won\u2019t boot as the EFI BIOS can\u2019t see the drives. I have enabled VNC access, but I primarily used the serial console to interact with the machine.</p>\n\n<div><div><pre><code>machine: disk0.qcow2 disk1.qcow2 disk2.qcow2 disk3.qcow2 disk4.qcow2 disk5.qcow2 disk6.qcow2 disk7.qcow2 disk8.qcow2 disk9.qcow2 OVMF_VARS.fd\n\tqemu-system-x86_64 -m 8G -smp 4 -machine accel=kvm,type=pc -cpu host -display none -vnc :0 \\\n\t\t-drive if=pflash,format=raw,readonly=on,file=/usr/share/OVMF/OVMF_CODE.fd \\\n\t\t-drive if=pflash,format=raw,file=OVMF_VARS.fd \\\n\t\t-serial stdio \\\n\t\t-device virtio-scsi-pci,id=scsi0 \\\n\t\t-device scsi-hd,drive=drive0,bus=scsi0.0,channel=0,scsi-id=0,lun=0 \\\n\t\t-drive file=disk0.qcow2,if=none,id=drive0 \\\n\t\t-device scsi-hd,drive=drive1,bus=scsi0.0,channel=0,scsi-id=1,lun=0 \\\n\t\t-drive file=disk1.qcow2,if=none,id=drive1 \\\n\t\t-device scsi-hd,drive=drive2,bus=scsi0.0,channel=0,scsi-id=2,lun=0 \\\n\t\t-drive file=disk2.qcow2,if=none,id=drive2 \\\n\t\t-device scsi-hd,drive=drive3,bus=scsi0.0,channel=0,scsi-id=3,lun=0 \\\n\t\t-drive file=disk3.qcow2,if=none,id=drive3 \\\n\t\t-device scsi-hd,drive=drive4,bus=scsi0.0,channel=0,scsi-id=4,lun=0 \\\n\t\t-drive file=disk4.qcow2,if=none,id=drive4 \\\n\t\t-device scsi-hd,drive=drive5,bus=scsi0.0,channel=0,scsi-id=5,lun=0 \\\n\t\t-drive file=disk5.qcow2,if=none,id=drive5 \\\n\t\t-device scsi-hd,drive=drive6,bus=scsi0.0,channel=0,scsi-id=6,lun=0 \\\n\t\t-drive file=disk6.qcow2,if=none,id=drive6 \\\n\t\t-device scsi-hd,drive=drive7,bus=scsi0.0,channel=0,scsi-id=7,lun=0 \\\n\t\t-drive file=disk7.qcow2,if=none,id=drive7 \\\n\t\t-device scsi-hd,drive=drive8,bus=scsi0.0,channel=0,scsi-id=8,lun=0 \\\n\t\t-drive file=disk8.qcow2,if=none,id=drive8 \\\n\t\t-device scsi-hd,drive=drive9,bus=scsi0.0,channel=0,scsi-id=9,lun=0 \\\n\t\t-drive file=disk9.qcow2,if=none,id=drive9 \\\n\t\t-net nic,model=virtio-net-pci,macaddr=02:00:00:00:00:01 \\\n\t\t-net bridge,br=br0\n\ndisk%.qcow2:\n\tqemu-img create -f qcow2 $@ 8T\n\nOVMF_VARS.fd:\n\tcp /usr/share/OVMF/OVMF_VARS.fd OVMF_VARS.fd\n\nclean:\n\trm *.qcow2 OVMF_VARS.fd\n</code></pre></div></div>\n\n<p>We are using <a href=\"https://netboot.xyz\">netboot.xyz</a> to network boot the machine via PXE. The easiest way to use netboot.xyz is to use it within the prebuilt Docker container. This can be set up using a <code>docker-compose.yml</code> file. 
Start the container with <code>docker compose up -d</code>.</p>\n\n<div><div><pre><code>version: \"2.1\"\nservices:\n netbootxyz:\n image: ghcr.io/netbootxyz/netbootxyz\n container_name: netbootxyz\n environment:\n - NGINX_PORT=80 # optional\n - WEB_APP_PORT=3000 # optional\n volumes:\n - /netbootxyz/config:/config # optional\n - /netbootxyz/assets:/assets # optional\n ports:\n - 3000:3000 # optional, destination should match ${WEB_APP_PORT} variable above.\n - 69:69/udp\n - 8080:80 # optional, destination should match ${NGINX_PORT} variable above.\n restart: unless-stopped\n</code></pre></div></div>\n\n<p>We have a Ubiquiti EdgeMax providing DHCP services. The DHCP options should point new clients to the Docker container.</p>\n\n<div><div><pre><code>set service dhcp-serverbootfile-server doc.caelum.ci.dev\nset service dhcp-server global-parameters \"class &quot;BIOS-x86&quot; { match if option arch = 00:00; filename &quot;netboot.xyz.kpxe&quot;; }\"\nset service dhcp-server global-parameters \"class &quot;UEFI-x64&quot; { match if option arch = 00:09; filename &quot;netboot.xyz.efi&quot;; }\"\nset service dhcp-server global-parameters \"class &quot;UEFI-bytecode&quot; { match if option arch = 00:07; filename &quot;netboot.xyz.efi&quot;; }\"\n</code></pre></div></div>\n\n<p>I also recommend staging the Ubuntu installation ISO, <code>vmlinuz</code>, and <code>initrd</code> locally, as this will speed up the machine\u2019s boot time. The files needed are:</p>\n\n<ul>\n <li>https://releases.ubuntu.com/24.04.2/ubuntu-24.04.2-live-server-amd64.iso</li>\n <li>https://github.com/netbootxyz/ubuntu-squash/releases/download/24.04.2-dac09526/vmlinuz</li>\n <li>https://github.com/netbootxyz/ubuntu-squash/releases/download/24.04.2-dac09526/initrd</li>\n</ul>\n\n<p>Create a <code>user-data</code> file containing the following cloud-init configuration. In this case, it primarily includes the storage configuration. The goal here is to configure each disk identically, with a tiny EFI partition, an MD RAID partition and a rest given over to the ZFS datastore. Additionally, create empty files <code>meta-data</code> and <code>vendor-data</code>. None of the files have an extension. 
The encrypted password is <code>ubuntu</code>.</p>\n\n<div><div><pre><code>#cloud-config\nautoinstall:\n version: 1\n storage:\n config:\n - { ptable: gpt, path: /dev/sda, preserve: false, name: '', grub_device: false, id: disk-sda, type: disk }\n - { ptable: gpt, path: /dev/sdb, wipe: superblock-recursive, preserve: false, name: '', grub_device: false, id: disk-sdb, type: disk }\n - { ptable: gpt, path: /dev/sdc, wipe: superblock-recursive, preserve: false, name: '', grub_device: false, id: disk-sdc, type: disk }\n - { ptable: gpt, path: /dev/sdd, wipe: superblock-recursive, preserve: false, name: '', grub_device: false, id: disk-sdd, type: disk }\n - { ptable: gpt, path: /dev/sde, wipe: superblock-recursive, preserve: false, name: '', grub_device: false, id: disk-sde, type: disk }\n - { ptable: gpt, path: /dev/sdf, wipe: superblock-recursive, preserve: false, name: '', grub_device: false, id: disk-sdf, type: disk }\n - { ptable: gpt, path: /dev/sdg, wipe: superblock-recursive, preserve: false, name: '', grub_device: false, id: disk-sdg, type: disk }\n - { ptable: gpt, path: /dev/sdh, wipe: superblock-recursive, preserve: false, name: '', grub_device: false, id: disk-sdh, type: disk }\n - { ptable: gpt, path: /dev/sdi, wipe: superblock-recursive, preserve: false, name: '', grub_device: false, id: disk-sdi, type: disk }\n - { ptable: gpt, path: /dev/sdj, wipe: superblock-recursive, preserve: false, name: '', grub_device: false, id: disk-sdj, type: disk }\n - { device: disk-sda, size: 512M, wipe: superblock, flag: boot, number: 1, preserve: false, grub_device: true, offset: 1048576, id: efi-0, type: partition }\n - { device: disk-sdb, size: 512M, wipe: superblock, flag: boot, number: 1, preserve: false, grub_device: true, offset: 1048576, id: efi-1, type: partition }\n - { device: disk-sdc, size: 512M, wipe: superblock, flag: boot, number: 1, preserve: false, grub_device: false, offset: 1048576, id: efi-2, type: partition }\n - { device: disk-sdd, size: 512M, wipe: superblock, flag: boot, number: 1, preserve: false, grub_device: false, offset: 1048576, id: efi-3, type: partition }\n - { device: disk-sde, size: 512M, wipe: superblock, flag: boot, number: 1, preserve: false, grub_device: false, offset: 1048576, id: efi-4, type: partition }\n - { device: disk-sdf, size: 512M, wipe: superblock, flag: boot, number: 1, preserve: false, grub_device: false, offset: 1048576, id: efi-5, type: partition }\n - { device: disk-sdg, size: 512M, wipe: superblock, flag: boot, number: 1, preserve: false, grub_device: false, offset: 1048576, id: efi-6, type: partition }\n - { device: disk-sdh, size: 512M, wipe: superblock, flag: boot, number: 1, preserve: false, grub_device: false, offset: 1048576, id: efi-7, type: partition }\n - { device: disk-sdi, size: 512M, wipe: superblock, flag: boot, number: 1, preserve: false, grub_device: false, offset: 1048576, id: efi-8, type: partition }\n - { device: disk-sdj, size: 512M, wipe: superblock, flag: boot, number: 1, preserve: false, grub_device: false, offset: 1048576, id: efi-9, type: partition }\n - { device: disk-sda, size: 16G, wipe: superblock, number: 2, preserve: false, grub_device: false, id: md-0, type: partition }\n - { device: disk-sdb, size: 16G, wipe: superblock, number: 2, preserve: false, grub_device: false, id: md-1, type: partition }\n - { device: disk-sdc, size: 16G, wipe: superblock, number: 2, preserve: false, grub_device: false, id: md-2, type: partition }\n - { device: disk-sdd, size: 16G, wipe: superblock, number: 2, preserve: false, 
grub_device: false, id: md-3, type: partition }\n - { device: disk-sde, size: 16G, wipe: superblock, number: 2, preserve: false, grub_device: false, id: md-4, type: partition }\n - { device: disk-sdf, size: 16G, wipe: superblock, number: 2, preserve: false, grub_device: false, id: md-5, type: partition }\n - { device: disk-sdg, size: 16G, wipe: superblock, number: 2, preserve: false, grub_device: false, id: md-6, type: partition }\n - { device: disk-sdh, size: 16G, wipe: superblock, number: 2, preserve: false, grub_device: false, id: md-7, type: partition }\n - { device: disk-sdi, size: 16G, wipe: superblock, number: 2, preserve: false, grub_device: false, id: md-8, type: partition }\n - { device: disk-sdj, size: 16G, wipe: superblock, number: 2, preserve: false, grub_device: false, id: md-9, type: partition }\n - { device: disk-sda, size: -1, wipe: superblock, number: 3, preserve: false, grub_device: false, id: zfs-0, type: partition }\n - { device: disk-sdb, size: -1, wipe: superblock, number: 3, preserve: false, grub_device: false, id: zfs-1, type: partition }\n - { device: disk-sdc, size: -1, wipe: superblock, number: 3, preserve: false, grub_device: false, id: zfs-2, type: partition }\n - { device: disk-sdd, size: -1, wipe: superblock, number: 3, preserve: false, grub_device: false, id: zfs-3, type: partition }\n - { device: disk-sde, size: -1, wipe: superblock, number: 3, preserve: false, grub_device: false, id: zfs-4, type: partition }\n - { device: disk-sdf, size: -1, wipe: superblock, number: 3, preserve: false, grub_device: false, id: zfs-5, type: partition }\n - { device: disk-sdg, size: -1, wipe: superblock, number: 3, preserve: false, grub_device: false, id: zfs-6, type: partition }\n - { device: disk-sdh, size: -1, wipe: superblock, number: 3, preserve: false, grub_device: false, id: zfs-7, type: partition }\n - { device: disk-sdi, size: -1, wipe: superblock, number: 3, preserve: false, grub_device: false, id: zfs-8, type: partition }\n - { device: disk-sdj, size: -1, wipe: superblock, number: 3, preserve: false, grub_device: false, id: zfs-9, type: partition }\n - { name: md0, raidlevel: raid5, devices: [ md-0, md-1, md-2, md-3, md-4, md-5, md-6, md-7, md-8, md-9 ], spare_devices: [], preserve: false, wipe: superblock, id: raid-0, type: raid }\n - { fstype: fat32, volume: efi-0, preserve: false, id: efi-dos-0, type: format }\n - { fstype: fat32, volume: efi-1, preserve: false, id: efi-dos-1, type: format }\n - { fstype: ext4, volume: raid-0, preserve: false, id: root-ext4, type: format }\n - { path: /, device: root-ext4, id: mount-2, type: mount }\n - { path: /boot/efi, device: efi-dos-0, id: mount-0, type: mount }\n - { path: /boot/efi-alt, device: efi-dos-1, id: mount-1, type: mount }\n identity:\n hostname: ubuntu-server\n password: \"$6$exDY1mhS4KUYCE/2$zmn9ToZwTKLhCw.b4/b.ZRTIZM30JZ4QrOQ2aOXJ8yk96xpcCof0kxKwuX1kqLG/ygbJ1f8wxED22bTL4F46P0\"\n username: ubuntu\n ssh:\n install-server: yes\n authorized-keys:\n - ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA7UrJmBFWR3c7jVzpoyg4dJjON9c7t9bT9acfrj6G7i\n allow-pw: no\n packages:\n - zfsutils-linux\n user-data:\n disable_root: false\n</code></pre></div></div>\n\n<p>The binaries and configuration files should be stored in the assets folder used by 
netbootxyz.</p>\n\n<div><div><pre><code>/netbootxyz/assets/r640/initrd\n/netbootxyz/assets/r640/meta-data\n/netbootxyz/assets/r640/ubuntu-24.04.2-live-server-amd64.iso\n/netbootxyz/assets/r640/user-data\n/netbootxyz/assets/r640/vendor-data\n/netbootxyz/assets/r640/vmlinuz\n</code></pre></div></div>\n\n<p>The kernel command line used for iPXE needs to include <code>autoinstall</code> and <code>ds=nocloud;s=http://your_server</code>. We could modify one of the existing <code>ipxe</code> scripts to do this, but it is more flexible to create <code>/netbootxyz/config/menus/MAC-020000000001.ipxe</code> where <code>020000000001</code> represents the MAC address <code>02:00:00:00:00:01</code> and should be updated to reflect the actual server\u2019s MAC address.</p>\n\n<div><div><pre><code>#!ipxe\n\n# Set a timeout (in milliseconds) for automatic selection\nset timeout 30000\n\n# Define a title for the menu\n:start\nmenu Boot Menu\nitem --key 1 local Boot from local hdd\nitem --key 2 ubuntu Autoinstall Ubuntu Noble\nitem --key r reboot Reboot system\nitem --key x exit Exit to iPXE shell\nchoose --timeout ${timeout} --default local option && goto ${option}\n\n# boot local system\n:local\necho Booting from local disks ...\nexit 1\n\n# Ubuntu boot configuration\n:ubuntu\nimgfree\necho Autoinstall Ubuntu Noble...\nset base-url http://doc.caelum.ci.dev:8080/r640\nkernel ${base-url}/vmlinuz\ninitrd ${base-url}/initrd\nimgargs vmlinuz root=/dev/ram0 ramdisk_size=3500000 cloud-config-url=/dev/null ip=dhcp url=${base-url}/ubuntu-24.04.2-live-server-amd64.iso initrd=initrd.magic console=ttyS0,115200n8 autoinstall ds=nocloud;s=${base-url}\nboot || goto failed\n\n# Error handling\n:failed\necho Boot failed, waiting 5 seconds...\nsleep 5\ngoto start\n\n# Reboot option\n:reboot\nreboot\n\n# Exit to shell\n:exit\necho Exiting to iPXE shell...\nexit\n</code></pre></div></div>\n\n<p>With this setup, we can now boot a machine from the network and automatically install Ubuntu with our chosen disk configuration.</p>",
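<p>Before tying up a machine with another PXE boot cycle, it is worth sanity-checking <code>user-data</code> locally. This is only a partial check for an autoinstall file, since the <code>autoinstall:</code> section is properly validated by the installer itself, but it catches YAML mistakes cheaply.</p>

<div><div><pre><code>python3 -c 'import yaml; yaml.safe_load(open("user-data"))'   # at minimum, confirm it parses as YAML
cloud-init schema --config-file user-data --annotate          # schema check on recent cloud-init versions
</code></pre></div></div>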
+20
mte/2025_04_19_gluster.json
···+"summary": "Gluster is a free and open-source software network filesystem. It has been a few years since I last looked at the project, and I was interested in taking another look. Some features, like automatic tiering of hot/cold data, have been removed, and the developers now recommend dm-cache with LVM instead.",+"content": "<p>Gluster is a free and open-source software network filesystem. It has been a few years since I last looked at the project, and I was interested in taking another look. Some features, like automatic tiering of hot/cold data, have been removed, and the developers now recommend <code>dm-cache</code> with LVM instead.</p>\n\n<p>I am going to use four QEMU VMs on which I have installed Ubuntu via PXE boot. For easy repetition, I have wrapped my <code>qemu-system-x86_64</code> commands into a <code>Makefile</code>.</p>\n\n<div><div><pre><code>machine: disk0.qcow2 disk1.qcow2 OVMF_VARS.fd\n qemu-system-x86_64 -m 8G -smp 4 -machine accel=kvm,type=pc -cpu host -display none -vnc :11 \\\n -drive if=pflash,format=raw,readonly=on,file=/usr/share/OVMF/OVMF_CODE.fd \\\n -drive if=pflash,format=raw,file=OVMF_VARS.fd \\\n -serial stdio \\\n -device virtio-scsi-pci,id=scsi0 \\\n -device scsi-hd,drive=drive0,bus=scsi0.0,channel=0,scsi-id=0,lun=0 \\\n -drive file=disk0.qcow2,if=none,id=drive0 \\\n -device scsi-hd,drive=drive1,bus=scsi0.0,channel=0,scsi-id=1,lun=0 \\\n -drive file=disk1.qcow2,if=none,id=drive1 \\\n -net nic,model=virtio-net-pci,macaddr=02:00:00:00:00:11 \\\n -net bridge,br=br0\n\ndisk%.qcow2:\n qemu-img create -f qcow2 $@ 1T\n\nOVMF_VARS.fd:\n cp /usr/share/OVMF/OVMF_VARS.fd OVMF_VARS.fd\n\nclean:\n rm -f *.qcow2 OVMF_VARS.fd\n</code></pre></div></div>\n\n<p>Gluster works on any file system that supports extended attributes <em>xattr</em>, which includes <code>ext[2-4]</code>. However, XFS is typically used as it performs well with parallel read/write operations and large files. I have used 512-byte inodes, <code>-i size=512</code>, which is recommended as this creates extra space for the extended attributes.</p>\n\n<div><div><pre><code>mkfs.xfs <span>-i</span> <span>size</span><span>=</span>512 /dev/sdb\n<span>mkdir</span> <span>-p</span> /gluster/sdb\n<span>echo</span> <span>\"/dev/sdb /gluster/sdb xfs defaults 0 0\"</span> <span>>></span> /etc/fstab\nmount <span>-a</span>\n</code></pre></div></div>\n\n<p>With the filesystem prepared, install and start Gluster. Gluster stores its settings in <code>/var/lib/glusterd</code>, so if you need to reset your installation, stop the gluster daemon and remove that directory.</p>\n\n<div><div><pre><code>apt <span>install </span>glusterfs-server\nsystemctl <span>enable </span>glusterd\nsystemctl start glusterd\n</code></pre></div></div>\n\n<p>From one node, probe all the other nodes. You can do this by IP address or by hostname.</p>\n\n<div><div><pre><code>gluster peer probe node222\ngluster peer probe node200\ngluster peer probe node152\n</code></pre></div></div>\n\n<p><code>gluster pool list</code> should now list all the nodes. <code>localhost</code> indicates your current host.</p>\n\n<div><div><pre><code>UUID Hostname State\n8d2a1ef0-4c23-4355-9faa-8f3387054d41 node222 Connected\n4078f192-b2bb-4c74-a588-35d5475dedc7 node200 Connected\n5b2fc21b-b0ab-401e-9848-3973121bfec7 node152 Connected\nd5878850-0d40-4394-8dd8-b9b0d4266632 localhost Connected\n</code></pre></div></div>\n\n<p>Now we need to add a volume. A Gluster volume can be distributed, replicated or dispersed. 
It is possible to have mix distributed with the other two types, giving a distributed replicated volume or a distributed dispersed volume. Briefly, distributed splits the data across the nodes without redundancy but gives a performance advantage. Replicated creates 2 or more copies of the data. Dispersed uses erasure coding, which can be considered as RAID5 over nodes.</p>\n\n<p>Once a volume has been created, it needs to be started. The commands to create and start the volume only need to be executed on one of the nodes.</p>\n\n<div><div><pre><code>gluster volume create vol1 disperse 4 transport tcp node<span>{</span>200,222,223,152<span>}</span>:/gluster/sdb/vol1\ngluster volume start vol1\n</code></pre></div></div>\n\n<p>On each node, or on a remote machine, you can now mount the Gluster volume. Here I have mounted it to <code>/mnt</code> from the node itself. All writes to <code>/mnt</code> will be dispersed to the other nodes.</p>\n\n<div><div><pre><code>echo \"localhost:/vol1 /mnt glusterfs defaults 0 0\" >> /etc/fstab\nmount -a\n</code></pre></div></div>\n\n<p>The volume can be inspected with <code>gluster volume info</code>.</p>\n\n<div><div><pre><code>Volume Name: vol1\nType: Disperse\nVolume ID: 31e165b2-da96-40b2-bc09-e4607a02d14b\nStatus: Started\nSnapshot Count: 0\nNumber of Bricks: 1 x (3 + 1) = 4\nTransport-type: tcp\nBricks:\nBrick1: node200:/gluster/sdb/vol1\nBrick2: node222:/gluster/sdb/vol1\nBrick3: node223:/gluster/sdb/vol1\nBrick4: node152:/gluster/sdb/vol1\nOptions Reconfigured:\nnetwork.ping-timeout: 4\nstorage.fips-mode-rchecksum: on\ntransport.address-family: inet\nnfs.disable: on\n</code></pre></div></div>\n\n<p>In initial testing, any file operation on the mounted volume appeared to hang when a node went down. This is because Gluster has a default timeout of 42 seconds. This command will set a lower value:</p>\n\n<div><div><pre><code>gluster volume set vol1 network.ping-timeout 4\n</code></pre></div></div>\n\n<p>The video below shows the four VMs running. One is writing random data to <code>/mnt/random</code>. The other machines are running <code>ls -phil /mnt</code> so we can watch the file growing. <code>node222</code> is killed, and after the 4-second pause, the other nodes continue. When the node is rebooted, it automatically recovers.</p>\n\n\n\n<blockquote>\n <p>While I used 4 nodes, this works equally well with 3 nodes.</p>\n</blockquote>",
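<p>Two commands are useful when watching a node drop out and return, as in the video: volume status shows which bricks are online, and heal info lists entries still being reconstructed after the brick comes back.</p>

<div><div><pre><code>gluster volume status vol1
gluster volume heal vol1 info
</code></pre></div></div>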
+20
mte/2025_04_21_clock-winder-repair.json
···+"summary": "The galvanised steel wire rope on one of my clock winders has snapped. This is a 3mm rope, so it would have a rating of greater than 500 kg. I am quite surprised that it snapped, as the load on this wire rope is much lower than that of others in use in the same system.",+"content": "<p>The galvanised steel wire rope on one of my clock winders has snapped. This is a 3mm rope, so it would have a rating of greater than 500 kg. I am quite surprised that it snapped, as the load on this wire rope is much lower than that of others in use in the same system.</p>\n\n<p>I suspect that the failure is due to the pulley. There is a significant gap between the frame and the pulley wheel where the wire may get jammed. (Right-hand picture). My initial thought was to 3d print a spacer washer, but instead, I was able to squash the entire assembly, removing all the play while still allowing the pulley to rotate. (Left-hand picture).</p>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/aylesford-pulley.jpg\"></p>\n\n<p>When the clock is being wound, either by hand or via the clock winder, the tension is removed from the drive wheel, resulting in a reduced impulse on the escapement. In early versions of the winder, I had ignored the counterweight by tying it out of the way, but this caused the clock to lose almost 10 minutes per day. The counterweight is an ingeniously simple workaround which keeps tension on the drive wheel by pulling on one of the gear teeth. This particular part of the clock winder lifts the counterweight before the winder lifts the weight.</p>",
+20
mte/2025_04_21_ubuntu-dm-cache.json
···+"summary": "dm-cache has been part of the mainline Linux kernel for over a decade, making it possible for faster SSD and NVMe drives to be used as a cache within a logical volume. This technology brief from Dell gives a good overview of dm-cache and the performance benefits. Skip to the graph on page 25, noting the logarithmic scale.",+"content": "<p><a href=\"https://en.wikipedia.org/wiki/Dm-cache\">dm-cache</a> has been part of the mainline Linux kernel for over a decade, making it possible for faster SSD and NVMe drives to be used as a cache within a logical volume. <a href=\"https://videos.cdn.redhat.com/summit2015/presentations/17856_getting-the-most-out-of-your-nvme-ssd.pdf\">This technology brief from Dell</a> gives a good overview of <code>dm-cache</code> and the performance benefits. Skip to the graph on page 25, noting the logarithmic scale.</p>\n\n<p>Given a system with a small SATADOM module, <code>/dev/sdd</code>, an SSD drive <code>/dev/sdc</code> and a couple of large-capacity spinning disks, <code>/dev/sd[ab]</code>, can we use cloud-init to configure RAID1 on the capacity disks with the SSD being used as a cache?</p>\n\n<p>Unfortunately, the <code>storage:</code> / <code>config:</code> nodes are not very flexible when it comes to even modest complexity. For example, given an LVM volume group consisting of multiple disk types, it isn\u2019t possible to create a logical volume on a specific disk as <code>devices:</code> is not a parameter to <code>lvm_partition</code>. It is also not possible to specify <code>raid: raid1</code>.</p>\n\n<p>I have taken the approach of creating two volume groups, <code>vg_raid</code> and <code>vg_cache</code>, on disks <code>/dev/sd[ab]</code> and <code>/dev/sdc</code>, respectively, thereby forcing the use of the correct devices. On the <code>vg_raid</code> group, I have created a single logical volume without RAID. On <code>vg_cache</code>, I have created the two cache volumes, <code>lv-cache</code> and <code>lv-cache-meta</code>.</p>\n\n<p>The <code>lv-cache</code> and <code>lv-cache-meta</code> should be sized in the ratio 1000:1.</p>\n\n<p>As the final step of the installation, I used <code>late-commands</code> to configure the system as I want it. These implement RAID1 for the root logical volume, deactivate the two cache volumes as a necessary step before merging <code>vg_raid</code> and <code>vg_cache</code>, create the cache pool from the cache volumes, and finally enable the cache. The cache pool can be either <em>writethrough</em> or <em>writeback</em>, with the default being <em>writethrough</em>. In this mode, data is written to both the cache and the original volume, so a failure in the cache device doesn\u2019t result in any data loss. 
<em>Writeback</em> has better performance as writes initially only go to the cache volume and are only written to the original volume later.</p>\n\n<div><div><pre><code>lvconvert -y --type raid1 -m 1 /dev/vg_raid/lv_data\nlvchange -an vg_cache/lv_cache\nlvchange -an vg_cache/lv_cache_meta\nvgmerge vg_raid vg_cache\nlvconvert -y --type cache-pool --poolmetadata vg_raid/lv_cache_meta vg_raid/lv_cache\nlvconvert -y --type cache --cachemode writethrough --cachepool vg_raid/lv_cache vg_raid/lv_data\n</code></pre></div></div>\n\n<p>I have placed <code>/boot</code> and <code>/boot/EFI</code> on the SATADOM so that the system can be booted.</p>\n\n<p>My full configuration given below.</p>\n\n<div><div><pre><code>#cloud-config\nautoinstall:\n version: 1\n storage:\n config:\n # Define the physical disks\n - { id: disk-sda, type: disk, ptable: gpt, path: /dev/sda, preserve: false }\n - { id: disk-sdb, type: disk, ptable: gpt, path: /dev/sdb, preserve: false }\n - { id: disk-sdc, type: disk, ptable: gpt, path: /dev/sdc, preserve: false }\n - { id: disk-sdd, type: disk, ptable: gpt, path: /dev/sdd, preserve: false }\n\n # Define the partitions\n - { id: efi-part, type: partition, device: disk-sdd, size: 512M, wipe: superblock, flag: boot, number: 1, preserve: false, grub_device: true, offset: 1048576}\n - { id: boot-part, type: partition, device: disk-sdd, size: 1G, wipe: superblock, number: 2, preserve: false, grub_device: false }\n\n # Create volume groups\n - { id: vg-raid, type: lvm_volgroup, name: vg_raid, devices: [disk-sda, disk-sdb] }\n - { id: vg-cache, type: lvm_volgroup, name: vg_cache, devices: [disk-sdc] }\n\n # Create logical volume which will be for RAID\n - { id: lv-data, type: lvm_partition, volgroup: vg-raid, name: lv_data, size: 1000G, preserve: false}\n\n # Create cache metadata logical volume on SSD VG (ratio 1000:1 with cache data)\n - { id: lv-cache-meta, type: lvm_partition, volgroup: vg-cache, name: lv_cache_meta, size: 1G, preserve: false }\n\n # Create cache data logical volume on SSD VG\n - { id: lv-cache, type: lvm_partition, volgroup: vg-cache, name: lv_cache, size: 1000G, preserve: false }\n\n # Format the volumes\n - { id: root-fs, type: format, fstype: ext4, volume: lv-data, preserve: false }\n - { id: efi-fs, type: format, fstype: fat32, volume: efi-part, preserve: false }\n - { id: boot-fs, type: format, fstype: ext4, volume: boot-part, preserve: false }\n\n # Mount the volumes\n - { id: mount-1, type: mount, path: /, device: root-fs }\n - { id: mount-2, type: mount, path: /boot, device: boot-fs }\n - { id: mount-3, type: mount, path: /boot/efi, device: efi-fs }\n identity:\n hostname: unnamed-server\n password: \"$6$exDY1mhS4KUYCE/2$zmn9ToZwTKLhCw.b4/b.ZRTIZM30JZ4QrOQ2aOXJ8yk96xpcCof0kxKwuX1kqLG/ygbJ1f8wxED22bTL4F46P0\"\n username: mte24\n ssh:\n install-server: yes\n authorized-keys:\n - ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA7UrJmBFWR3c7jVzpoyg4dJjON9c7t9bT9acfrj6G7i mark.elvers@tunbury.org\n allow-pw: no\n packages:\n - lvm2\n - thin-provisioning-tools\n user-data:\n disable_root: false\n late-commands:\n - lvconvert -y --type raid1 -m 1 /dev/vg_raid/lv_data\n - lvchange -an vg_cache/lv_cache\n - lvchange -an vg_cache/lv_cache_meta\n - vgmerge vg_raid vg_cache\n - lvconvert -y --type cache-pool --poolmetadata vg_raid/lv_cache_meta vg_raid/lv_cache\n - lvconvert -y --type cache --cachemode writethrough --cachepool vg_raid/lv_cache vg_raid/lv_data\n</code></pre></div></div>",
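<p>After the first boot it is worth confirming that the cache actually attached; a couple of read-only checks, assuming the volume group and logical volume names used in the configuration above.</p>

<div><div><pre><code>lvs -a -o +segtype,devices vg_raid    # lv_data should report segtype "cache"; -a also shows the hidden pool and origin volumes
dmsetup status vg_raid-lv_data        # raw dm-cache counters: block sizes, used/total blocks, read/write hits and misses
</code></pre></div></div>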
+20
mte/2025_04_22_ocaml-fedora-gcc.json
···+"summary": "Late last week, @MisterDA added Fedora 42 support to the Docker base image builder. The new base images attempted to build over the weekend, but there have been a few issues!",+"content": "<p>Late last week, @MisterDA added Fedora 42 support to the <a href=\"https://images.ci.ocaml.org\">Docker base image builder</a>. The new base images attempted to build over the weekend, but there have been a few issues!</p>\n\n<p>The code I had previously added to force Fedora 41 to use the DNF version 5 syntax was specifically for version 41. For reference, the old syntax was <code>yum groupinstall -y 'C Development Tools and Libraries\u2019</code>, and the new syntax is <code>yum group install -y 'c-development'</code>. Note the extra space.</p>\n\n<div><div><pre><code><span>let</span> <span>c_devtools_libs</span> <span>:</span> <span>(</span><span>t</span><span>,</span> <span>unit</span><span>,</span> <span>string</span><span>,</span> <span>t</span><span>)</span> <span>format4</span> <span>=</span>\n <span>match</span> <span>d</span> <span>with</span>\n <span>|</span> <span>`Fedora</span> <span>`V41</span> <span>-></span> <span>{</span><span>|</span><span>\"c-development\"</span><span>|</span><span>}</span>\n <span>|</span> <span>`Fedora</span> <span>_</span> <span>-></span> <span>{</span><span>|</span><span>\"C Development Tools and Libraries\"</span><span>|</span><span>}</span>\n <span>|</span> <span>_</span> <span>-></span> <span>{</span><span>|</span><span>\"Development Tools\u201d|}\n...\nlet dnf_version = match d with `Fedora `V41 -> 5 | _ -> 3\n</span></code></pre></div></div>\n\n<p>To unburden ourselves of this maintenance in future releases, I have inverted the logic so unmatched versions will use the new syntax.</p>\n\n<div><div><pre><code><span>let</span> <span>(</span><span>dnf_version</span><span>,</span> <span>c_devtools_libs</span><span>)</span> <span>:</span> <span>int</span> <span>*</span> <span>(</span><span>t</span><span>,</span> <span>unit</span><span>,</span> <span>string</span><span>,</span> <span>t</span><span>)</span> <span>format4</span> <span>=</span>\n <span>match</span> <span>d</span> <span>with</span>\n <span>|</span> <span>`Fedora</span>\n <span>(</span> <span>`V21</span> <span>|</span> <span>`V22</span> <span>|</span> <span>`V23</span> <span>|</span> <span>`V24</span> <span>|</span> <span>`V25</span> <span>|</span> <span>`V26</span> <span>|</span> <span>`V27</span> <span>|</span> <span>`V28</span> <span>|</span> <span>`V29</span>\n <span>|</span> <span>`V30</span> <span>|</span> <span>`V31</span> <span>|</span> <span>`V32</span> <span>|</span> <span>`V33</span> <span>|</span> <span>`V34</span> <span>|</span> <span>`V35</span> <span>|</span> <span>`V36</span> <span>|</span> <span>`V37</span> <span>|</span> <span>`V38</span>\n <span>|</span> <span>`V39</span> <span>|</span> <span>`V40</span> <span>)</span> <span>-></span>\n <span>(</span><span>3</span><span>,</span> <span>{</span><span>|</span><span>\"C Development Tools and Libraries\"</span><span>|</span><span>})</span>\n <span>|</span> <span>`Fedora</span> <span>_</span> <span>-></span> <span>(</span><span>5</span><span>,</span> <span>{</span><span>|</span><span>\"c-development\"</span><span>|</span><span>})</span>\n <span>|</span> <span>_</span> <span>-></span> <span>(</span><span>3</span><span>,</span> <span>{</span><span>|</span><span>\"Development Tools\"</span><span>|</span><span>})</span>\n</code></pre></div></div>\n\n<p>Fedora 42 also removed <code>awk</code>, so it now needs to be specifically 
included as a dependency. However, this code is shared with Oracle Linux, which does not have a package called <code>awk</code>. Fortunately, both have a package called <code>gawk</code>!</p>\n\n<p>The next issue is that Fedora 42 is the first of the distributions we build base images for that has moved to GCC 15, specifically GCC 15.0.1. This breaks all versions of OCaml < 4.14.</p>\n\n<p>The change is that the code below, which previously gave no information about the number or type of parameters. (see <code>runtime/caml/prims.h</code>)</p>\n\n<div><div><pre><code><span>typedef</span> <span>value</span> <span>(</span><span>*</span><span>c_primitive</span><span>)();</span>\n</code></pre></div></div>\n\n<p>Now means that there are no parameters, aka:</p>\n\n<div><div><pre><code><span>typedef</span> <span>value</span> <span>(</span><span>*</span><span>c_primitive</span><span>)(</span><span>void</span><span>);</span>\n</code></pre></div></div>\n\n<p>This is caused by a change of the default compilter language version. See <a href=\"https://gcc.gnu.org/gcc-15/changes.html\">GCC change log</a></p>\n\n<blockquote>\n <p>C23 by default: GCC 15 changes the default language version for C compilation from <code>-std=gnu17</code> to <code>-std=gnu23</code>. If your code relies on older versions of the C standard, you will need to either add <code>-std=</code> to your build flags, or port your code; see the porting notes.</p>\n</blockquote>\n\n<p>Also see the <a href=\"https://gcc.gnu.org/gcc-15/porting_to.html#c23\">porting notes</a>, and <a href=\"https://gcc.gnu.org/bugzilla/show_bug.cgi?id=118112\">this bug report</a>.</p>\n\n<p>This is <em>not</em> an immediate problem as OCaml-CI and opam-repo-ci only test against OCaml 4.14.2 and 5.3.0 on Fedora. I have opened <a href=\"https://github.com/ocurrent/docker-base-images/issues/320\">issue#320</a> to track this problem.</p>",
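<p>For anyone hitting this locally, the new default dialect is easy to confirm, and forcing the previous one is a possible stop-gap when building an older OCaml tree by hand; treat the <code>CC</code> override as a workaround sketch rather than the fix the base images will ship.</p>

<div><div><pre><code>echo | gcc -E -dM - | grep __STDC_VERSION__   # 202311L once -std=gnu23 is the default
CC='gcc -std=gnu17' ./configure && make       # build an older OCaml tree with the pre-GCC-15 dialect
</code></pre></div></div>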
+20
mte/2025_04_23_blade-allocation.json
···+"summary": "Equinix has stopped commercial sales of Metal and will sunset the service at the end of June 2026. Equinix have long been a supporter of OCaml and has provided free credits to use on their Metal platform. These credits are coming to an end at the end of this month, meaning that we need to move some of our services away from Equinix. We have two new four-node blade servers, which will become the new home for these services. The blades have dual 10C/20T processors with either 192GB or 256GB of RAM and a combination of SSD and spinning disk.",+"content": "<p>Equinix has stopped commercial sales of Metal and will sunset the service at the end of June 2026. Equinix have long been a supporter of OCaml and has provided free credits to use on their Metal platform. These credits are coming to an end at the end of this month, meaning that we need to move some of our services away from Equinix. We have two new four-node blade servers, which will become the new home for these services. The blades have dual 10C/20T processors with either 192GB or 256GB of RAM and a combination of SSD and spinning disk.</p>\n\n<p>192GB, 20C/40T with 1.1TB SSD, 2 x 6T disks</p>\n<ul>\n <li>rosemary: FreeBSD CI Worker (releasing spring & summer)</li>\n <li>oregano: OpenBSD CI Worker (releasing bremusa)</li>\n <li>basil: docs-ci (new implementation, eventually replacing eumache)</li>\n <li>mint: spare</li>\n</ul>\n\n<p>256GB, 20C/40T with 1.5TB SSD, 2 x 8T disks</p>\n<ul>\n <li>thyme: Equinix c2-2 (registry.ci.dev)</li>\n <li>chives: Equinix c2-4 (opam-repo-ci) + Equinix c2-3 (OCaml-ci) + Equinix c2-1 (preview.dune.dev)</li>\n</ul>\n\n<p>256GB, 20C/40T with 1.1TB SSD, 2 x 6T disks</p>\n<ul>\n <li>dill: spare</li>\n <li>sage: spare</li>\n</ul>\n\n<p>VMs currently running on hopi can be redeployed to chives, allowing hopi to be redeployed.</p>\n\n<p>Machines which can then be recycled are:</p>\n<ul>\n <li>sleepy (4C)</li>\n <li>grumpy (4C)</li>\n <li>doc (4C)</li>\n <li>spring (8T)</li>\n <li>tigger</li>\n <li>armyofdockerness</li>\n</ul>",
+20
mte/2025_04_24_infra-map.json
···+"summary": "Yesterday, we were talking about extending the current infrastructure database to incorporate other information to provide prompts to return machines to the pool of resources after they have completed their current role/loan, etc. There is also a wider requirement to bring these services back to Cambridge from Equinix/Scaleway, which will be the subject of a follow-up post. However, the idea of extending the database made me think that it would be amusing to overlay the machine\u2019s positions onto Google Maps.",+"content": "<p>Yesterday, we were talking about extending the current infrastructure database to incorporate other information to provide prompts to return machines to the pool of resources after they have completed their current role/loan, etc. There is also a wider requirement to bring these services back to Cambridge from Equinix/Scaleway, which will be the subject of a follow-up post. However, the idea of extending the database made me think that it would be amusing to overlay the machine\u2019s positions onto Google Maps.</p>\n\n<p>I added positioning data in the Jekyll Collection <code>_machines\\*.md</code> for each machine. e.g. <a href=\"https://raw.githubusercontent.com/ocaml/infrastructure/refs/heads/master/_machines/ainia.md\">ainia.md</a></p>\n\n<div><div><pre><code>---\nname: ainia\n...\nlatitude: 52.2109\nlongitude: 0.0917\n---\n</code></pre></div></div>\n\n<p>Then Jekyll\u2019s Liquid templating engine can create a JavaScript array for us</p>\n\n<div><div><pre><code>\n <span>// Define machines data array from Jekyll collection</span>\n <span>const</span> <span>machinesData</span> <span>=</span> <span>[</span>\n <span>{</span><span>%</span> <span>for</span> <span>machine</span> <span>in</span> <span>site</span><span>.</span><span>machines</span> <span>%</span><span>}</span>\n <span>{</span><span>%</span> <span>if</span> <span>machine</span><span>.</span><span>latitude</span> <span>and</span> <span>machine</span><span>.</span><span>longitude</span> <span>%</span><span>}</span>\n <span>{</span>\n <span>name</span><span>:</span> <span>\"</span><span>{{ machine.name }}</span><span>\"</span><span>,</span>\n <span>lat</span><span>:</span> <span>{{</span> <span>machine</span><span>.</span><span>latitude</span> <span>}},</span>\n <span>lng</span><span>:</span> <span>{{</span> <span>machine</span><span>.</span><span>longitude</span> <span>}},</span>\n <span>{</span><span>%</span> <span>if</span> <span>machine</span><span>.</span><span>description</span> <span>%</span><span>}</span>\n <span>description</span><span>:</span> <span>\"</span><span>{{ machine.description | escape }}</span><span>\"</span><span>,</span>\n <span>{</span><span>%</span> <span>endif</span> <span>%</span><span>}</span>\n <span>// Add any other properties you need</span>\n <span>},</span>\n <span>{</span><span>%</span> <span>endif</span> <span>%</span><span>}</span>\n <span>{</span><span>%</span> <span>endfor</span> <span>%</span><span>}</span>\n <span>];</span>\n\n</code></pre></div></div>\n\n<p>This array can be converted into an array of map markers. Google have an API for clustering the markers into a count of machines. I added a random offset to each location to avoid all the markers piling up on a single spot.</p>\n\n<p>The interactive map can be seen at <a href=\"https://infra.ocaml.org/machines.html\">machines.html</a></p>",
+20
mte/2025_04_25_blade-reallocation.json
···+"summary": "We have changed our mind about using dm-cache in the SSD/RAID1 configuration. The current thinking is that the mechanical drives would be better served as extra capacity for our distributed ZFS infrastructure, where we intend to have two copies of all data, and these disks represent ~100TB of storage.",+"content": "<p>We have changed our mind about using <code>dm-cache</code> in the SSD/RAID1 configuration. The current thinking is that the mechanical drives would be better served as extra capacity for our distributed ZFS infrastructure, where we intend to have two copies of all data, and these disks represent ~100TB of storage.</p>\n\n<p>As mentioned previously, we have a deadline of Wednesday, 30th April, to move the workloads from the Equinix machines or incur hosting fees.</p>\n\n<p>I also noted that the SSD capacity is 1.7TB in all cases. The new distribution is:</p>\n\n<ul>\n <li>rosemary: FreeBSD CI Worker (releasing spring & summer)</li>\n <li>oregano: OpenBSD CI Worker (releasing bremusa)</li>\n <li>basil: Equinix c2-2 (registry.ci.dev)</li>\n <li>mint: @mte24 workstation</li>\n <li>thyme: spare</li>\n <li>chives: Equinix c2-4 (opam-repo-ci) + Equinix c2-3 (OCaml-ci) + Equinix c2-1 (preview.dune.dev)</li>\n <li>dill: spare</li>\n <li>sage: docs-ci (new implementation, eventually replacing eumache)</li>\n</ul>",
+20
mte/2025_04_25_bluesky-ssh-authentication.json
···+"summary": "If you have sign up to tangled.sh you will have published your SSH public key on the Bluesky ATproto network. Have a browse to your Bluesky ID, or mine. Look under sh.tangled.publicKey.",+"content": "<p>If you have sign up to <a href=\"https://tangled.sh\">tangled.sh</a> you will have published your SSH public key on the Bluesky ATproto network. Have a browse to your Bluesky ID, or <a href=\"https://www.atproto-browser.dev/at/did:plc:476rmswt6ji7uoxyiwjna3ti\">mine</a>. Look under <code>sh.tangled.publicKey</code>.</p>\n\n<p><a href=\"https://github.com/mtelvers/bluesky-ssh-key-extractor.git\">BlueSky ATproto SSH Public Key Extractor</a> extracts this public key information and outputs one public key at a time. The format is suitable to use with the <code>AuthorizedKeysCommand</code> parameter in your <code>/etc/sshd/ssh_config</code> file.</p>\n\n<p>Build the project:</p>\n\n<div><div><pre><code>opam <span>install</span> <span>.</span> <span>-deps-only</span>\ndune build\n</code></pre></div></div>\n\n<p>Install the binary by copying it to the local system. Setting the ownership and permissions is essential.</p>\n\n<div><div><pre><code><span>cp </span>_build/install/default/bin/bluesky-ssh-key-extractor /usr/local/bin\n<span>chmod </span>755 /usr/local/bin/bluesky-ssh-key-extractor\n<span>chown </span>root:root /usr/local/bin/bluesky-ssh-key-extractor\n</code></pre></div></div>\n\n<p>Test the command is working:</p>\n\n<div><div><pre><code><span>$ </span>bluesky-ssh-key-extractor mtelvers.tunbury.org\nssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA7UrJmBFWR3c7jVzpoyg4dJjON9c7t9bT9acfrj6G7i mark.elvers@tunbury.org\n</code></pre></div></div>\n\n<p>If that works, then edit your <code>/etc/sshd/ssh_config</code>:-</p>\n\n<div><div><pre><code>AuthorizedKeysCommand /usr/local/bin/bluesky-ssh-key-extractor your_bluesky_handle\nAuthorizedKeysCommandUser nobody\n</code></pre></div></div>\n\n<p>Now you should be able to SSH to the machine using your published key</p>\n\n<div><div><pre><code>ssh root@your_host\n</code></pre></div></div>\n\n<blockquote>\n <p>Note, this program was intended as a proof of concept rather than something you\u2019d actually use.</p>\n</blockquote>\n\n<p>If you have a 1:1 mapping, between Bluesky accounts and system usernames, you might get away with:</p>\n\n<div><div><pre><code>AuthorizedKeysCommand /usr/local/bin/bluesky-ssh-key-extractor %u.bsky.social\nAuthorizedKeysCommandUser nobody\n</code></pre></div></div>",
+20
mte/2025_04_26_bluesky-ssh-authentication-2.json
···+"summary": "Addressing the glaring omissions from yesterday\u2019s proof of concept, such as the fact that you could sign in as any user, you couldn\u2019t revoke access, all hosts had the same users, and there was no mapping between Bluesky handles and POSIX users, I have updated mtelvers/bluesky-ssh-key-extractor and newly published mtelvers/bluesky-collection.",+"content": "<p>Addressing the glaring omissions from yesterday\u2019s proof of concept, such as the fact that you could sign in as any user, you couldn\u2019t revoke access, all hosts had the same users, and there was no mapping between Bluesky handles and POSIX users, I have updated <a href=\"https://github.com/mtelvers/bluesky-ssh-key-extractor\">mtelvers/bluesky-ssh-key-extractor</a> and newly published <a href=\"https://github.com/mtelvers/bluesky-collection.git\">mtelvers/bluesky-collection</a>.</p>\n\n<p>The tool creates ATProto collections using <code>app.bsky.graph.list</code> and populates them with <code>app.bsky.graph.listitem</code> records.</p>\n\n<p>Each list should be named with a friendly identifier such as the FQDN of the host being secured. List entries have a <code>subject_did</code>, which is the DID of the user you are giving access to, and a <code>displayName</code>, which is used as the POSIX username on the system you are connecting to.</p>\n\n<p>A typical usage would be creating a collection and adding records. Here I have made a collection called <code>rosemary.caelum.ci.dev</code> and then added to users <code>anil.recoil.org</code> and <code>mtelvers.tunbury.org</code> with POSIX usernames of <code>avsm2</code> and <code>mte24</code> respectively. Check my <a href=\"https://www.atproto-browser.dev/at/did:plc:476rmswt6ji7uoxyiwjna3ti\">Bluesky record</a>)</p>\n\n<div><div><pre><code>bluesky_collection create --handle mtelvers.tunbury.org --password *** --collection rosemary.caelum.ci.dev\nbluesky_collection add --handle mtelvers.tunbury.org --password *** --collection rosemary.caelum.ci.dev --user-handle anil.recoil.org --user-id avsm2\nbluesky_collection add --handle mtelvers.tunbury.org --password *** --collection rosemary.caelum.ci.dev --user-handle mtelvers.tunbury.org --user-id mte24\n</code></pre></div></div>\n\n<p>When authenticating using SSHD, the companion tool <a href=\"https://github.com/mtelvers/bluesky-ssh-key-extractor\">mtelvers/bluesky-ssh-key-extractor</a> would have command line parameters of the Bluesky user account holding the collection, collection name (aka the hostname), and the POSIX username (provided by SSHD). The authenticator queries the Bluesky network to find the collection matching the FQDN, then finds the list entries comparing them to the POSIX user given. If there is a match, the <code>subject_did</code> is used to look up the associated <code>sh.tangled.publicKey</code>.The authenticator requires no password to access Bluesky, as all the records are public.</p>",
+20
mte/2025_04_27_ocaml-ci.json
···+"summary": "As noted on Thursday, the various OCaml services will need to be moved away from Equinix. Below are my notes on moving OCaml-CI.",+"content": "<p>As noted on Thursday, the various OCaml services will need to be moved away from Equinix. Below are my notes on moving OCaml-CI.</p>\n\n<p>Generate an SSH key on the new server <code>chives</code> using <code>ssh-keygen -t ed25519</code>. Copy the public key to <code>c2-3.equinix.ci.dev</code> and save it under <code>~/.ssh/authorized_keys</code>.</p>\n\n<p>Use <code>rsync</code> to mirror the Docker volumes. <code>-z</code> did improve performance as there appears to be a rate limiter somewhere in the path.</p>\n\n<div><div><pre><code>rsync <span>-azvh</span> <span>--progress</span> c2-3.equinix.ci.dev:/var/lib/docker/volumes/ /var/lib/docker/volumes/\n</code></pre></div></div>\n\n<p>After completing the copy, I waited for a quiet moment, and then scaled all of the Docker services to 0. I prefer to scale the services rather than remove them, as the recovery is much easier.</p>\n\n<div><div><pre><code>docker service scale <span>infra_grafana</span><span>=</span>0\ndocker service scale <span>infra_prometheus</span><span>=</span>0\ndocker service scale ocaml-ci_ci<span>=</span>0\ndocker service scale ocaml-ci_gitlab<span>=</span>0\ndocker service scale ocaml-ci_web<span>=</span>0\n</code></pre></div></div>\n\n<p>For the final copy, I used <code>--checksum</code> and also added <code>--delete</code>, as the Prometheus database creates segment files that are periodically merged into the main database.</p>\n\n<div><div><pre><code>rsync <span>-azvh</span> <span>--checksum</span> <span>--delete</span> <span>--progress</span> c2-3.equinix.ci.dev:/var/lib/docker/volumes/ /var/lib/docker/volumes/\n</code></pre></div></div>\n\n<p>The machine configuration is held in an Ansible Playbook, which includes the Docker stack for Grafana and Prometheus. It can be easily applied to the new machine:</p>\n\n<div><div><pre><code>ansible-playbook <span>-e</span> @secrets/ocaml.ci.dev.yml <span>--vault-password-file</span> secrets/vault-password ocaml.ci.dev.yml\n</code></pre></div></div>\n\n<p>OCaml-CI\u2019s Docker stack is held on GitHub <a href=\"https://github.com/ocurrent/ocaml-ci\">ocurrent/ocaml-ci</a> and can be deployed with:</p>\n\n<div><div><pre><code>make deploy-stack\n</code></pre></div></div>",
+20
mte/2025_04_29_distributed-zfs-storage.json
···+"summary": "Following Anil\u2019s note, we will design and implement a distributed storage archive system for ZFS volumes and associated metadata. Metadata here refers to key information about the dataset itself:",+"content": "<p>Following Anil\u2019s <a href=\"https://anil.recoil.org/notes/syncoid-sanoid-zfs\">note</a>, we will design and implement a distributed storage archive system for ZFS volumes and associated metadata. <em>Metadata</em> here refers to key information about the dataset itself:</p>\n\n<ul>\n <li>A summary of what the dataset is</li>\n <li>Data retention requirement (both legal and desirable)</li>\n <li>Time/effort/cost required to reproduce the data</li>\n <li>Legal framework under which the data is available, restrictions on the distribution of the data, etc.</li>\n</ul>\n\n<p>And also refers to the more <em>systems</em> style meanings such as:</p>\n\n<ul>\n <li>Size of the dataset</li>\n <li>List of machines/ZFS pools where the data is stored</li>\n <li>Number and distribution of copies required</li>\n <li>Snapshot and replication frequency/policy</li>\n</ul>\n\n<p>These data will be stored in a JSON/YAML or other structured file format.</p>\n\n<p>The system would have a database of machines and their associated storage (disks/zpools/etc) and location. Each item of storage would have a \u2018failure domain\u2019 to logically group resources for redundancy. This would allow copies of a dataset to be placed in different domains to meet the redundancy requirements. For example, given that we are committed to holding two distinct copies of the data, would we use RAIDZ on the local disks or just a dynamic stripe, RAID0, to maximise capacity?</p>\n\n<p>While under development, the system will output recommended actions - shell commands - to perform the snapshot and replication steps necessary to meet the replication and redundancy policies. Ultimately, these commands could be executed automatically.</p>\n\n<p>Utilising ZFS encryption, the remote pools can be stored as an encrypted filesystem without the encryption keys.</p>\n\n<p>When the data is being processed, it will be staged locally on the worker\u2019s NVMe drive for performance, and the resultant dataset <em>may</em> be uploaded with a new dataset of metadata.</p>",
+20
mte/2025_04_29_equinix-moves.json
···+"summary": "The moves of registry.ci.dev, opam-repo-ci, and get.dune.build have followed the template of OCaml-CI. Notable differences have been that I have hosted get.dune.build in a VM, as the services required very little disk space or CPU/RAM. For opam-repo-ci, the rsync was pretty slow, so I tried running multiple instances using GNU parallel with marginal gains.",+"content": "<p>The moves of registry.ci.dev, opam-repo-ci, and get.dune.build have followed the template of <a href=\"https://www.tunbury.org/ocaml-ci/\">OCaml-CI</a>. Notable differences have been that I have hosted <code>get.dune.build</code> in a VM, as the services required very little disk space or CPU/RAM. For opam-repo-ci, the <code>rsync</code> was pretty slow, so I tried running multiple instances using GNU parallel with marginal gains.</p>\n\n<div><div><pre><code><span>cd</span> /var/lib/docker/volumes2/opam-repo-ci_data/_data/var/job\n<span>ls</span> <span>-d</span> <span>*</span> | parallel <span>-j</span> 5 rsync <span>-azh</span> c2-4.equinix.ci.dev:/var/lib/docker/volumes/opam-repo-ci_data/_data/var/job/<span>{}</span>/ <span>{}</span>/\n</code></pre></div></div>\n\n<p>The Ansible configuration script for OCaml-CI is misnamed as it configures the machine and deploys infrastructure: Caddy, Grafana, Prometheus and Docker secrets, but not the Docker stack. The Docker stack for OCaml-CI is deployed by <code>make deploy-stack</code> from <a href=\"https://github.com/ocurrent/ocaml-ci\">ocurrent/ocaml-ci</a>. Conversely, opam-repo-ci <em>is</em> deployed from the Ansible playbook, but there is a <code>Makefile</code> and an outdated <code>stack.yml</code> in <a href=\"https://github.com/ocurrent/opam-repo-ci\">ocurrent/opam-repo-ci</a>.</p>\n\n<p>As part of the migration away from Equinix, these services have been merged into a single large machine <code>chives.caelum.ci.dev</code>. With this change, I have moved the Docker stack configuration for opam-repo-ci back to the repository <a href=\"https://github.com/ocurrent/opam-repo-ci/pull/428\">PR#428</a> and merged and renamed the machine configuration <a href=\"https://github.com/mtelvers/ansible/pull/44\">PR#44</a>.</p>\n\n<p>We want to thank Equinix for supporting OCaml over the years.</p>",
+20
mte/2025_04_29_raptor-talos-ii.json
···+"summary": "We have two Raptor Computing Talos II POWER9 machines. One of these has had issues for some time and cannot run for more than 20 minutes before locking up completely. Over the last few days, our second machine has exhibited similar issues and needs to be power-cycled every ~24 hours. I spent some time today trying to diagnose the issue with the first machine, removing the motherboard as recommended by Raptor support, to see if the issue still exists with nothing else connected. Sadly, it does. I noted that a firmware update is available, which would move from v2.00 to v2.10.",+"content": "<p>We have two Raptor Computing Talos II POWER9 machines. One of these has had issues for some time and cannot run for more than 20 minutes before locking up completely. Over the last few days, our second machine has exhibited similar issues and needs to be power-cycled every ~24 hours. I spent some time today trying to diagnose the issue with the first machine, removing the motherboard as recommended by Raptor support, to see if the issue still exists with nothing else connected. Sadly, it does. I noted that a firmware update is available, which would move from v2.00 to v2.10.</p>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/raptor-computing.jpeg\"></p>",
+20
mte/2025_05_01_removing-mdadm.json
···+"summary": "Cloud providers automatically configure their machines as they expect you to use them. For example, a machine with 4 x 8T disks might come configured with an mdadm RAID5 array spanning the disks. This may be what most people want, but we don\u2019t want this configuration, as we want to see the bare disks. Given you have only a serial console (over SSH) and no access to the cloud-init environment, how do you boot the machine in a different configuration?",+"content": "<p>Cloud providers automatically configure their machines as they expect you to use them. For example, a machine with 4 x 8T disks might come configured with an mdadm RAID5 array spanning the disks. This may be what most people want, but we don\u2019t want this configuration, as we want to see the bare disks. Given you have only a serial console (over SSH) and no access to the cloud-init environment, how do you boot the machine in a different configuration?</p>\n\n<p>Example configuration:</p>\n\n<div><div><pre><code>$ lsblk\nNAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS\nfd0 2:0 1 4K 0 disk\nsda 8:0 0 4G 0 disk\n\u251c\u2500sda1 8:1 0 512M 0 part /boot/efi\n\u2514\u2500sda2 8:2 0 3.5G 0 part\n \u2514\u2500md0 9:0 0 10.5G 0 raid5 /\nsdb 8:16 0 4G 0 disk\n\u2514\u2500sdb1 8:17 0 4G 0 part\n \u2514\u2500md0 9:0 0 10.5G 0 raid5 /\nsdc 8:32 0 4G 0 disk\n\u2514\u2500sdc1 8:33 0 4G 0 part\n \u2514\u2500md0 9:0 0 10.5G 0 raid5 /\nsdd 8:48 0 4G 0 disk\n\u2514\u2500sdd1 8:49 0 4G 0 part\n \u2514\u2500md0 9:0 0 10.5G 0 raid5 /\n</code></pre></div></div>\n\n<p>My initial approach was to create a tmpfs root filesystem and then use <code>pivot_root</code> to switch it. This worked except <code>/dev/md0</code> was still busy, so I could not unmount it.</p>\n\n<p>It occurred to me that I could remove one of the partitions from the RAID5 set and use that as the new root disk. <code>mdadm --fail /dev/md0 /dev/sda2</code>, followed by <code>mdadm --remove /dev/md0 /dev/sda2</code> frees up a disk. <code>debootstrap</code> can then be used to install Ubuntu on the partition. As we have a working system, we can preserve the key configuration settings such as <code>/etc/hostname</code>, <code>/etc/netplan</code>, <code>/etc/fstab</code> etc by just copying them from <code>/etc</code> to <code>/mnt/etc</code>. Unfortunately, Ansible\u2019s copy module does not preserve ownership. Therefore, I used <code>rsync</code> instead. <code>/etc/fstab</code> must be edited to reflect the new root partition.</p>\n\n<p>Lastly, run <code>grub-install</code> using <code>chroot</code> to the new environment and reboot.</p>\n\n<div><div><pre><code># lsblk\nNAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS\nfd0 2:0 1 4K 0 disk\nsda 8:0 0 4G 0 disk\n\u251c\u2500sda1 8:1 0 512M 0 part /boot/efi\n\u2514\u2500sda2 8:2 0 3.5G 0 part /\nsdb 8:16 0 4G 0 disk\n\u2514\u2500sdb1 8:17 0 4G 0 part\nsdc 8:32 0 4G 0 disk\n\u2514\u2500sdc1 8:33 0 4G 0 part\nsdd 8:48 0 4G 0 disk\n\u2514\u2500sdd1 8:49 0 4G 0 part\n</code></pre></div></div>\n\n<p>The redundant RAID5 partitions can be removed with <code>wipefs -af /dev/sd[b-d]</code></p>\n\n<p>I have wrapped all the steps in an Ansible <a href=\"https://gist.github.com/mtelvers/1fe3571830d982eb8adbcf5a513edb2c\">playbook</a>, which is available as a GitHub gist.</p>\n\n<h1>Addendum</h1>\n\n<p>I had tested this in QEMU with EFI under the assumption that a newly provisioned cloud machine would use EFI. 
However, when I ran the script against the machine, I found it used a legacy bootloader, and it was even more complicated than I had envisioned, as there were three separate MDADM arrays in place:</p>\n\n<div><div><pre><code># cat /proc/mdstat \nPersonalities : [raid1] [raid6] [raid5] [raid4] [raid0] [raid10] \nmd2 : active raid5 sdb4[0] sdd4[2] sda4[4] sdc4[1]\n 34252403712 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]\n bitmap: 2/86 pages [8KB], 65536KB chunk\n\nmd1 : active raid5 sdd3[1] sda3[2] sdc3[0] sdb3[4]\n 61381632 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]\n \nmd0 : active raid1 sdd2[1] sda2[2] sdb2[3] sdc2[0]\n 523264 blocks super 1.2 [4/4] [UUUU]\n \nunused devices: <none>\n</code></pre></div></div>\n\n<p>With <code>lsblk</code> showing four disks each configured as below:</p>\n\n<div><div><pre><code>NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS\nsda 8:0 0 10.9T 0 disk \n\u251c\u2500sda1 8:1 0 1M 0 part \n\u251c\u2500sda2 8:2 0 512M 0 part \n\u2502 \u2514\u2500md0 9:0 0 511M 0 raid1 \n\u2502 \u2514\u2500md0p1 259:0 0 506M 0 part /boot\n\u251c\u2500sda3 8:3 0 19.5G 0 part \n\u2502 \u2514\u2500md1 9:1 0 58.5G 0 raid5 \n\u2502 \u2514\u2500md1p1 259:1 0 58.5G 0 part /\n\u251c\u2500sda4 8:4 0 10.6T 0 part \n\u2502 \u2514\u2500md2 9:2 0 31.9T 0 raid5 \n\u2502 \u2514\u2500md2p1 259:2 0 31.9T 0 part /data\n\u2514\u2500sda5 8:5 0 512M 0 part [SWAP]\n</code></pre></div></div>\n\n<p>The boot device is a RAID1 mirror (four copies), so removing one of these copies is no issue. There is also a 1MB BIOS boot partition first to give some space for GRUB. The root device was RAID5 as I had anticipated.</p>\n\n<p>The playbook could be adapted: double up on the <code>mdadm</code> commands to break two arrays, update two entries in <code>/etc/fstab</code> and use <code>grub-pc</code> rather than <code>grub-efi-amd64</code>. The updated playbook is <a href=\"https://gist.github.com/mtelvers/ba3b7a5974b50422e2c2e594bed0bdb2\">here</a>.</p>\n\n<p>For testing, I installed Ubuntu using this <a href=\"https://gist.github.com/mtelvers/d2d333bf5c9bd94cb905488667f0cae1\">script</a> to simulate the VM.</p>\n\n<p>Improvements could be made, as <code>/boot</code> could be merged into <code>/</code> as there is no reason to separate them when not using EFI. There never <em>needed</em> to be a <code>/boot</code> as GRUB2 will boot a RAID5 MDADM.</p>\n\n<p>The system is a pretty minimal installation of Ubuntu, a more typical set of tools could be installed with:</p>\n\n<div><div><pre><code>apt install ubuntu-standard\n</code></pre></div></div>",
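<p>Stripped of the Ansible wrapping, the manual procedure amounts to a handful of commands. This is a condensed sketch of the simple single-array case, assuming <code>/dev/sda2</code> is the member being repurposed, ext4 for the new root, and a legacy-BIOS bootloader (an EFI system would need the ESP mounted and <code>grub-efi-amd64</code> instead); the release name is only an example:</p>

<div><div><pre><code># take one member out of the RAID5 array to use as the new root
mdadm --fail /dev/md0 /dev/sda2
mdadm --remove /dev/md0 /dev/sda2

# install a minimal Ubuntu onto the freed partition
mkfs.ext4 /dev/sda2
mount /dev/sda2 /mnt
debootstrap noble /mnt        # release name as appropriate

# preserve key settings, keeping ownership intact, then point fstab at the new root
rsync -a /etc/hostname /etc/netplan /etc/fstab /mnt/etc/
vi /mnt/etc/fstab

# reinstall the bootloader from within the new root and reboot
# (install linux-image-generic and grub-pc inside the chroot if debootstrap did not)
for fs in dev proc sys; do mount --bind /$fs /mnt/$fs; done
chroot /mnt grub-install /dev/sda
chroot /mnt update-grub
reboot
</code></pre></div></div>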
+20
mte/2025_05_02_zfs-send-streams.json
···+"summary": "We often say that ZFS is an excellent replicated file system, but not the best local filesystem. This led me to think that if we run zfs send on one machine, we might want to write that out as a different filesystem. Is that even possible?",+"content": "<p>We often say that ZFS is an excellent replicated file system, but not the best <em>local</em> filesystem. This led me to think that if we run <code>zfs send</code> on one machine, we might want to write that out as a different filesystem. Is that even possible?</p>\n\n<p>What is in a ZFS stream?</p>\n\n<div><div><pre><code>fallocate <span>-l</span> 10G temp.zfs\nzpool create tank <span>`</span><span>pwd</span><span>`</span>/temp.zfs \nzfs create tank/home\n<span>cp </span>README.md /tank/home\nzfs snapshot tank/home@send\nzfs send tank/home@send | hexdump\n</code></pre></div></div>\n\n<p>I spent a little time writing an OCaml application to parse the record structure before realising that there already was a tool to do this: <code>zstreamdump</code>. Using the <code>-d</code> flag shows the contents; you can see your file in the dumped output.</p>\n\n<div><div><pre><code>zfs send tank/home@send | zstreamdump <span>-d</span>\n</code></pre></div></div>\n\n<p>However, this is <em>not</em> like a <code>tar</code> file. It is not a list of file names and their content. It is a list of block changes. ZFS is a tree structure with a snapshot and a volume being tree roots. The leaves of the tree may be unchanged between two snapshots. <code>zfs send</code> operates at the block level below the file system layer.</p>\n\n<p>To emphasise this point, consider a <code>ZVOL</code> formatted as XFS. The structure of the send stream is the same: a record of block changes.</p>\n\n<div><div><pre><code>zfs create <span>-V</span> 1G tank/vol\nmkfs.xfs /dev/zvol/tank/vol\nzfs snapshot tank/vol@send\nzfs send tank/vol@send | zstreamdump <span>-d</span>\n</code></pre></div></div>\n\n<p>ZVOLs are interesting as they give you a snapshot capability on a file system that doesn\u2019t have one. However, some performance metrics I saw posted online showed disappointing results compared with creating a file and using a loopback device. Furthermore, the snapshot would only be in a crash-consistent state as it would be unaware of the underlying snapshot. XFS does have <code>xfsdump</code> and <code>xfsrestore</code>, but they are pretty basic tools.</p>\n\n<p>[1] See also <a href=\"https://openzfs.org/wiki/Documentation/ZfsSend\">ZfsSend Documentation</a></p>",
+20
mte/2025_05_05_ventoy.json
···+"summary": "I need to install a chunky Windows application (90GB download, +250 GB install), but all my Windows VMs are pretty small, so I decided to use a spare Dell OptiPlex 7090. It had Windows 10 installed, but it was pretty messy from the previous use, so I decided to install Windows 11. I had a Windows 11 ISO on hand, so I wrote that to a USB memory stick using the Raspberry Pi Imaging tool (effectively dd in this use case). The machine booted without issue, but the installation failed, citing \u201cA media driver your computer needs is missing\u201d. This error looked familiar: a mass storage driver was missing. I often see this in QEMU or similar situations, and it\u2019s also common on server hardware. However, pressing Shift-F10 and opening diskpart showed all my storage.",+"content": "<p>I need to install a chunky Windows application (90GB download, +250 GB install), but all my Windows VMs are pretty small, so I decided to use a spare Dell OptiPlex 7090. It had Windows 10 installed, but it was pretty messy from the previous use, so I decided to install Windows 11. I had a Windows 11 ISO on hand, so I wrote that to a USB memory stick using the Raspberry Pi Imaging tool (effectively <code>dd</code> in this use case). The machine booted without issue, but the installation failed, citing \u201cA media driver your computer needs is missing\u201d. This error looked familiar: a mass storage driver was missing. I often see this in QEMU or similar situations, and it\u2019s also common on server hardware. However, pressing Shift-F10 and opening <code>diskpart</code> showed all my storage.</p>\n\n<p>It\u2019s been a while since I installed Windows on real hardware. Mostly, I use QEMU and an ISO and an <code>autounattend.xml</code> or PXE boot with Windows Deployment Services and Microsoft Deployment Toolkit. It seems that some time ago, the ISO files that Microsoft publish started to contain files that were larger than the standard allows, and thus, the <code>dd</code> approach to creating an image no longer works.</p>\n\n<p>Microsoft produces a USB creation tool, but I couldn\u2019t see how to tell it to use the ISO file that I already had! This happily led me to <a href=\"https://www.ventoy.net/en/index.html\">Ventoy</a>. The tool installs a small bootloader (~30M) on the memory stick and formats the rest as an exFAT partition. Copy your ISO file(s) to the exFAT partition, and boot the machine from the memory stick. You are then presented with a simple menu allowing you to boot from any of the ISO files. I couldn\u2019t help myself, I had to try to see if another OS would work too!</p>",
+20
mte/2025_05_06_freebsd-uefi.json
···+"summary": "I had assumed that booting FreeBSD over the network using iPXE would be pretty simple. There is even a freebsd.ipxe file included with Netboot.xyz. However, I quickly realised that most of the Internet wisdom on this process centred around legacy BIOS rather than UEFI. When booting with UEFI, the Netboot.xyz menu omits the FreeBSD option as it only supports legacy BIOS. Even in legacy mode, it uses memdisk from the Syslinux project rather than a FreeBSD loader.",+"content": "<p>I had assumed that booting FreeBSD over the network using iPXE would be pretty simple. There is even a <code>freebsd.ipxe</code> file included with Netboot.xyz. However, I quickly realised that most of the Internet wisdom on this process centred around legacy BIOS rather than UEFI. When booting with UEFI, the Netboot.xyz menu omits the FreeBSD option as it only supports legacy BIOS. Even in legacy mode, it uses <code>memdisk</code> from the Syslinux project rather than a FreeBSD loader.</p>\n\n<p>FreeBSD expects to use <code>loader.efi</code> to boot and to mount the root directory over NFS based upon the DHCP scope option <code>root-path</code>. I didn\u2019t want to provide an NFS server just for this process, but even when I gave in and set one up, it still didn\u2019t work. I\u2019m pleased that, in the final configuration, I didn\u2019t need an NFS server.</p>\n\n<p>Much of the frustration around doing this came from setting the <code>root-path</code> option. FreeBSD\u2019s <code>loader.efi</code> sends its own DHCP request to the DHCP server, ignoring the options <code>set root-path</code> or <code>set dhcp.root-path</code> configured in iPXE.</p>\n\n<p>Many <code>dhcpd.conf</code> snippets suggest a block similar to below, but usually with the comment that it doesn\u2019t work. Most authors proceed by setting <code>root-path</code> for the entire scope.</p>\n\n<div><div><pre><code>if exists user-class and option user-class = \"FreeBSD\" {\n option root-path \"your-path\";\n}\n</code></pre></div></div>\n\n<p>I used <code>dhcpdump -i br0</code> to examine the DHCP packets. This showed an ASCII BEL character (0x07) before <code>FreeBSD</code> in the <code>user-class</code> string.</p>\n\n<div><div><pre><code> TIME: 2025-05-07 08:51:03.811\n IP: 0.0.0.0 (2:0:0:0:0:22) > 255.255.255.255 (ff:ff:ff:ff:ff:ff)\n OP: 1 (BOOTPREQUEST)\n HTYPE: 1 (Ethernet)\n HLEN: 6\n HOPS: 0\n XID: 00000001\n SECS: 0\n FLAGS: 0\nCIADDR: 0.0.0.0\nYIADDR: 0.0.0.0\nSIADDR: 0.0.0.0\nGIADDR: 0.0.0.0\nCHADDR: 02:00:00:00:00:22:00:00:00:00:00:00:00:00:00:00\n SNAME: .\n FNAME: .\nOPTION: 53 ( 1) DHCP message type 3 (DHCPREQUEST)\nOPTION: 50 ( 4) Request IP address x.y.z.250\nOPTION: 54 ( 4) Server identifier x.y.z.1\nOPTION: 51 ( 4) IP address leasetime 300 (5m)\nOPTION: 60 ( 9) Vendor class identifier PXEClient\nOPTION: 77 ( 8) User-class Identification 0746726565425344 .FreeBSD\nOPTION: 55 ( 7) Parameter Request List 17 (Root path)\n\t\t\t\t\t 12 (Host name)\n\t\t\t\t\t 16 (Swap server)\n\t\t\t\t\t 3 (Routers)\n\t\t\t\t\t 1 (Subnet mask)\n\t\t\t\t\t 26 (Interface MTU)\n\t\t\t\t\t 54 (Server identifier)\n</code></pre></div></div>\n\n<p>There is a <code>substring</code> command, so I was able to set the <code>root-path</code> like this successfully:</p>\n\n<div><div><pre><code>if exists user-class and substring ( option user-class, 1, 7 ) = \"FreeBSD\" {\n option root-path \"your-path\";\n}\n</code></pre></div></div>\n\n<p>The situation is further complicated as we are using a Ubiquiti Edge router. 
This requires the command to be encoded as a <code>subnet-parameters</code>, which is injected into <code>/opt/vyatta/etc/dhcpd.conf</code>.</p>\n\n<div><div><pre><code>set service dhcp-server shared-network-name lab subnet x.y.z.0/24 subnet-parameters 'if exists user-class and substring( option user-class, 1, 7 ) = &quot;FreeBSD&quot; { option root-path &quot;tftp://x.y.z.240/freebsd14&quot;;}'\n</code></pre></div></div>\n\n<p>The FreeBSD 14.2 installation <a href=\"https://download.freebsd.org/releases/amd64/amd64/ISO-IMAGES/14.2/FreeBSD-14.2-RELEASE-amd64-disc1.iso\">ISO</a> contains the required <code>boot/loader.efi</code>, but we cannot use the extracted ISO as a root file system.</p>\n\n<p>Stage <code>loader.efi</code> on a TFTP server; in my case, the TFTP root is <code>/netbootxyz/config/menus</code>. The IPXE file only needs to contain the <code>chain</code> command.</p>\n\n<div><div><pre><code>#!ipxe\nchain loader.efi\n</code></pre></div></div>\n\n<p>Download <a href=\"https://mfsbsd.vx.sk/files/iso/14/amd64/mfsbsd-14.2-RELEASE-amd64.iso\">mfsBSD</a>, and extract the contents to a subfolder on the TFTP server. I went with <code>freebsd14</code>. This ISO contains the kernel, <code>loader.conf</code> and a minimal root file system, <code>mfsroot.gz</code>.</p>\n\n<p>With the content of the mfsBSD ISO staged on the TFTP server and the modification to the DHCP scope options, the machine will boot into FreeBSD. Sign in with <code>root</code>/<code>mfsroot</code> and invoke <code>bsdinstall</code>.</p>\n\n<p>On real hardware, rather than QEMU, I found that I needed to explicitly set the serial console by adding these lines to the end of <code>boot/loader.conf</code>:</p>\n\n<div><div><pre><code># Serial console\nconsole=\"comconsole\"\ncomconsole_port=\"0x2f8\"\ncomconsole_speed=\"115200\"\n</code></pre></div></div>",
+20
mte/2025_05_07_otter-wiki-with-raven.json
···+"summary": "We\u2019d like to have a go using Otter Wiki, but rather than having yet more usernames and passwords, we would like to integrate this into the Raven authentication system. There is guide on using SAML2 with Apache",+"content": "<p>We\u2019d like to have a go using <a href=\"https://otterwiki.com\">Otter Wiki</a>, but rather than having yet more usernames and passwords, we would like to integrate this into the Raven authentication system. There is <a href=\"https://docs.raven.cam.ac.uk/en/latest/apache-saml2/\">guide on using SAML2 with Apache</a></p>\n\n<p>The steps are:</p>\n<ol>\n <li>Start the provided container.</li>\n <li>Visit http://your-container/Shibboleth.sso/Metadata and download the <code>Metadata</code>.</li>\n <li>Go to <a href=\"https://metadata.raven.cam.ac.uk\">https://metadata.raven.cam.ac.uk</a> and create a new site by pasting in the metadata.</li>\n <li>Wait one minute and try to connect to http://your-container</li>\n</ol>\n\n<p>Otter Wiki, when started with the environment variable <code>AUTH_METHOD=PROXY_HEADER</code>, reads HTTP header fields <code>x-otterwiki-name</code>, <code>x-otterwiki-email</code> and <code>x-otterwiki-permissions</code>. See <a href=\"https://github.com/redimp/otterwiki/blob/main/docs/auth_examples/header-auth/README.md\">this example</a></p>\n\n<p>Apache can be configured to set these header fields based upon the SAML user who is authenticated with Raven:</p>\n\n<div><div><pre><code>ShibUseEnvironment On\nRequestHeader set x-otterwiki-name %{displayName}e\nRequestHeader set x-otterwiki-email %{REMOTE_USER}s\nRequestHeader set x-otterwiki-permissions \"READ,WRITE,UPLOAD,ADMIN\u201d\n</code></pre></div></div>\n\n<p>I have created a <code>docker-compose.yml</code> file, which incorporates Apache running as a reverse proxy, an Otter Wiki container and includes HTTPS support with a Let\u2019s Encrypt certificate. The files are available on <a href=\"https://github.com/mtelvers/doc-samples/commit/5ca2f8934a4cf1269e60b2b18de563352f764f66\">GitHub</a></p>\n\n<p>The test site is <a href=\"https://otterwiki.tunbury.uk\">https://otterwiki.tunbury.uk</a>.</p>",
+20
mte/2025_05_08_debugging-obuilder-macos.json
···+"summary": "The log from an OBuilder job starts with the steps needed to reproduce the job locally. This boilerplate output assumes that all OBuilder jobs start from a Docker base image, but on some operating systems, such as FreeBSD and macOS, OBuilder uses ZFS base images. On OpenBSD and Windows, it uses QEMU images. The situation is further complicated when the issue only affects a specific architecture that may be unavailable to the user.",+"content": "<p>The log from an <a href=\"https://github.com/ocurrent/obuilder\">OBuilder</a> job starts with the steps needed to reproduce the job locally. This boilerplate output assumes that all OBuilder jobs start from a Docker base image, but on some operating systems, such as FreeBSD and macOS, OBuilder uses ZFS base images. On OpenBSD and Windows, it uses QEMU images. The situation is further complicated when the issue only affects a specific architecture that may be unavailable to the user.</p>\n\n<div><div><pre><code>2025-05-08 13:29.37: New job: build bitwuzla-cxx.0.7.0, using opam 2.3\n from https://github.com/ocaml/opam-repository.git#refs/pull/27768/head (55a47416d532dc829d9111297970934a21a1b1c4)\n on macos-homebrew-ocaml-4.14/amd64\n\nTo reproduce locally:\n\ncd $(mktemp -d)\ngit clone --recursive \"https://github.com/ocaml/opam-repository.git\" && cd \"opam-repository\" && git fetch origin \"refs/pull/27768/head\" && git reset --hard 55a47416\ngit fetch origin master\ngit merge --no-edit b8a7f49af3f606bf8a22869a1b52b250dd90092e\ncat > ../Dockerfile <<'END-OF-DOCKERFILE'\n\nFROM macos-homebrew-ocaml-4.14\nUSER 1000:1000\nRUN ln -f ~/local/bin/opam-2.3 ~/local/bin/opam\nRUN opam init --reinit -ni\nRUN opam option solver=builtin-0install && opam config report\nENV OPAMDOWNLOADJOBS=\"1\"\nENV OPAMERRLOGLEN=\"0\"\nENV OPAMPRECISETRACKING=\"1\"\nENV CI=\"true\"\nENV OPAM_REPO_CI=\"true\"\nRUN rm -rf opam-repository/\nCOPY --chown=1000:1000 . opam-repository/\nRUN opam repository set-url -k local --strict default opam-repository/\nRUN opam update --depexts || true\nRUN opam pin add -k version -yn bitwuzla-cxx.0.7.0 0.7.0\nRUN opam reinstall bitwuzla-cxx.0.7.0; \\\n res=$?; \\\n test \"$res\" != 31 && exit \"$res\"; \\\n export OPAMCLI=2.0; \\\n build_dir=$(opam var prefix)/.opam-switch/build; \\\n failed=$(ls \"$build_dir\"); \\\n partial_fails=\"\"; \\\n for pkg in $failed; do \\\n if opam show -f x-ci-accept-failures: \"$pkg\" | grep -qF \"\\\"macos-homebrew\\\"\"; then \\\n echo \"A package failed and has been disabled for CI using the 'x-ci-accept-failures' field.\"; \\\n fi; \\\n test \"$pkg\" != 'bitwuzla-cxx.0.7.0' && partial_fails=\"$partial_fails $pkg\"; \\\n done; \\\n test \"${partial_fails}\" != \"\" && echo \"opam-repo-ci detected dependencies failing: ${partial_fails}\"; \\\n exit 1\n\n\nEND-OF-DOCKERFILE\ndocker build -f ../Dockerfile .\n</code></pre></div></div>\n\n<p>It is, therefore, difficult to diagnose the issue on these operating systems and on esoteric architectures. Is it an issue with the CI system or the job itself?</p>\n\n<p>My approach is to get myself into an interactive shell at the point in the build where the failure occurs. On Linux and FreeBSD, the log is available in <code>/var/log/syslog</code> or <code>/var/log/messages</code> respectively. On macOS, this log is written to <code>ocluster.log</code>. 
macOS workers are single-threaded, so the worker must be paused before progressing.</p>\n\n<p>Each step in an OBuilder job consists of taking a snapshot of the previous layer, running a command in that layer, and keeping or discarding the layer depending on the command\u2019s success or failure. On macOS, layers are ZFS snapshots mounted over the Homebrew directory and the CI users\u2019 home directory. We can extract the appropriate command from the logs.</p>\n\n<div><div><pre><code>2025-05-08 14:31.17 application [INFO] Exec \"zfs\" \"clone\" \"-o\" \"canmount=noauto\" \"--\" \"obuilder/result/a67e6d3b460fa52b5c57581e7c01fa74ddca0a0b5462fef34103a09e87f3feec@snap\" \"obuilder/result/af09425cd7744c7b32ed000b11db90295142f3d3430fddb594932d5c02343b40\"\n2025-05-08 14:31.17 application [INFO] Exec \"zfs\" \"mount\" \"obuilder/result/af09425cd7744c7b32ed000b11db90295142f3d3430fddb594932d5c02343b40\"\n2025-05-08 14:31.17 application [INFO] Exec \"zfs\" \"clone\" \"-o\" \"mountpoint=none\" \"--\" \"obuilder/result/a67e6d3b460fa52b5c57581e7c01fa74ddca0a0b5462fef34103a09e87f3feec/brew@snap\" \"obuilder/result/af09425cd7744c7b32ed000b11db90295142f3d3430fddb594932d5c02343b40/brew\"\n2025-05-08 14:31.17 application [INFO] Exec \"zfs\" \"clone\" \"-o\" \"mountpoint=none\" \"--\" \"obuilder/result/a67e6d3b460fa52b5c57581e7c01fa74ddca0a0b5462fef34103a09e87f3feec/home@snap\" \"obuilder/result/af09425cd7744c7b32ed000b11db90295142f3d3430fddb594932d5c02343b40/home\"\ncannot open 'obuilder/result/af09425cd7744c7b32ed000b11db90295142f3d3430fddb594932d5c02343b40@snap': dataset does not exist\n2025-05-08 14:31.17 application [INFO] Exec \"zfs\" \"clone\" \"--\" \"obuilder/cache/c-opam-archives@snap\" \"obuilder/cache-tmp/8608-c-opam-archives\"\n2025-05-08 14:31.17 application [INFO] Exec \"zfs\" \"clone\" \"--\" \"obuilder/cache/c-homebrew@snap\" \"obuilder/cache-tmp/8609-c-homebrew\"\n2025-05-08 14:31.18 obuilder [INFO] result_tmp = /Volumes/obuilder/result/af09425cd7744c7b32ed000b11db90295142f3d3430fddb594932d5c02343b40\n2025-05-08 14:31.18 application [INFO] Exec \"zfs\" \"set\" \"mountpoint=/Users/mac1000\" \"obuilder/result/af09425cd7744c7b32ed000b11db90295142f3d3430fddb594932d5c02343b40/home\"\n2025-05-08 14:31.18 application [INFO] Exec \"zfs\" \"set\" \"mountpoint=/usr/local\" \"obuilder/result/af09425cd7744c7b32ed000b11db90295142f3d3430fddb594932d5c02343b40/brew\"\n2025-05-08 14:31.18 obuilder [INFO] src = /Volumes/obuilder/cache-tmp/8608-c-opam-archives, dst = /Users/mac1000/.opam/download-cache, type rw\n2025-05-08 14:31.18 application [INFO] Exec \"zfs\" \"set\" \"mountpoint=/Users/mac1000/.opam/download-cache\" \"obuilder/cache-tmp/8608-c-opam-archives\"\nUnmount successful for /Volumes/obuilder/cache-tmp/8608-c-opam-archives\n2025-05-08 14:31.18 obuilder [INFO] src = /Volumes/obuilder/cache-tmp/8609-c-homebrew, dst = /Users/mac1000/Library/Caches/Homebrew, type rw\n2025-05-08 14:31.18 application [INFO] Exec \"zfs\" \"set\" \"mountpoint=/Users/mac1000/Library/Caches/Homebrew\" \"obuilder/cache-tmp/8609-c-homebrew\"\nUnmount successful for /Volumes/obuilder/cache-tmp/8609-c-homebrew\n2025-05-08 14:31.19 application [INFO] Exec \"sudo\" \"dscl\" \".\" \"list\" \"/Users\"\n2025-05-08 14:31.19 application [INFO] Exec \"sudo\" \"-u\" \"mac1000\" \"-i\" \"getconf\" \"DARWIN_USER_TEMP_DIR\"\n2025-05-08 14:31.19 application [INFO] Fork exec \"sudo\" \"su\" \"-l\" \"mac1000\" \"-c\" \"--\" \"source ~/.obuilder_profile.sh && env 'TMPDIR=/var/folders/s_/z7_t3bvn5txfn81hk9p3ntfw0000z8/T/' 'OPAM_REPO_CI=true' 
'CI=true' 'OPAMPRECISETRACKING=1' 'OPAMERRLOGLEN=0' 'OPAMDOWNLOADJOBS=1' \"$0\" \"$@\"\" \"/usr/bin/env\" \"bash\" \"-c\" \"opam reinstall bitwuzla-cxx.0.7.0;\n res=$?;\n test \"$res\" != 31 && exit \"$res\";\n export OPAMCLI=2.0;\n build_dir=$(opam var prefix)/.opam-switch/build;\n failed=$(ls \"$build_dir\");\n partial_fails=\"\";\n for pkg in $failed; do\n if opam show -f x-ci-accept-failures: \"$pkg\" | grep -qF \"\\\"macos-homebrew\\\"\"; then\n echo \"A package failed and has been disabled for CI using the 'x-ci-accept-failures' field.\";\n fi;\n test \"$pkg\" != 'bitwuzla-cxx.0.7.0' && partial_fails=\"$partial_fails $pkg\";\n done;\n test \"${partial_fails}\" != \"\" && echo \"opam-repo-ci detected dependencies failing: ${partial_fails}\u201d;\n exit 1\"\n2025-05-08 14:31.28 worker [INFO] OBuilder partition: 27% free, 2081 items\n2025-05-08 14:31.58 worker [INFO] OBuilder partition: 27% free, 2081 items\n2025-05-08 14:32.28 worker [INFO] OBuilder partition: 27% free, 2081 items\n2025-05-08 14:32.43 application [INFO] Exec \"zfs\" \"inherit\" \"mountpoint\" \"obuilder/cache-tmp/8608-c-opam-archives\"\nUnmount successful for /Users/mac1000/.opam/download-cache\n2025-05-08 14:32.44 application [INFO] Exec \"zfs\" \"inherit\" \"mountpoint\" \"obuilder/cache-tmp/8609-c-homebrew\"\nUnmount successful for /Users/mac1000/Library/Caches/Homebrew\n2025-05-08 14:32.45 application [INFO] Exec \"zfs\" \"set\" \"mountpoint=none\" \"obuilder/result/af09425cd7744c7b32ed000b11db90295142f3d3430fddb594932d5c02343b40/home\"\nUnmount successful for /Users/mac1000\n2025-05-08 14:32.45 application [INFO] Exec \"zfs\" \"set\" \"mountpoint=none\" \"obuilder/result/af09425cd7744c7b32ed000b11db90295142f3d3430fddb594932d5c02343b40/brew\"\nUnmount successful for /usr/local\n2025-05-08 14:32.46 application [INFO] Exec \"zfs\" \"rename\" \"--\" \"obuilder/cache/c-homebrew\" \"obuilder/cache-tmp/8610-c-homebrew\"\nUnmount successful for /Volumes/obuilder/cache/c-homebrew\n2025-05-08 14:32.46 application [INFO] Exec \"zfs\" \"promote\" \"obuilder/cache-tmp/8609-c-homebrew\"\n2025-05-08 14:32.46 application [INFO] Exec \"zfs\" \"destroy\" \"-f\" \"--\" \"obuilder/cache-tmp/8610-c-homebrew\"\nUnmount successful for /Volumes/obuilder/cache-tmp/8610-c-homebrew\n2025-05-08 14:32.48 application [INFO] Exec \"zfs\" \"rename\" \"--\" \"obuilder/cache-tmp/8609-c-homebrew@snap\" \"obuilder/cache-tmp/8609-c-homebrew@old-2152\"\n2025-05-08 14:32.48 application [INFO] Exec \"zfs\" \"destroy\" \"-d\" \"--\" \"obuilder/cache-tmp/8609-c-homebrew@old-2152\"\n2025-05-08 14:32.48 application [INFO] Exec \"zfs\" \"snapshot\" \"-r\" \"--\" \"obuilder/cache-tmp/8609-c-homebrew@snap\"\n2025-05-08 14:32.48 application [INFO] Exec \"zfs\" \"rename\" \"--\" \"obuilder/cache-tmp/8609-c-homebrew\" \"obuilder/cache/c-homebrew\"\nUnmount successful for /Volumes/obuilder/cache-tmp/8609-c-homebrew\n2025-05-08 14:32.49 application [INFO] Exec \"zfs\" \"rename\" \"--\" \"obuilder/cache/c-opam-archives\" \"obuilder/cache-tmp/8611-c-opam-archives\"\nUnmount successful for /Volumes/obuilder/cache/c-opam-archives\n2025-05-08 14:32.50 application [INFO] Exec \"zfs\" \"promote\" \"obuilder/cache-tmp/8608-c-opam-archives\"\n2025-05-08 14:32.50 application [INFO] Exec \"zfs\" \"destroy\" \"-f\" \"--\" \"obuilder/cache-tmp/8611-c-opam-archives\"\nUnmount successful for /Volumes/obuilder/cache-tmp/8611-c-opam-archives\n2025-05-08 14:32.51 application [INFO] Exec \"zfs\" \"rename\" \"--\" \"obuilder/cache-tmp/8608-c-opam-archives@snap\" 
\"obuilder/cache-tmp/8608-c-opam-archives@old-2152\"\n2025-05-08 14:32.51 application [INFO] Exec \"zfs\" \"destroy\" \"-d\" \"--\" \"obuilder/cache-tmp/8608-c-opam-archives@old-2152\"\n2025-05-08 14:32.51 application [INFO] Exec \"zfs\" \"snapshot\" \"-r\" \"--\" \"obuilder/cache-tmp/8608-c-opam-archives@snap\"\n2025-05-08 14:32.52 application [INFO] Exec \"zfs\" \"rename\" \"--\" \"obuilder/cache-tmp/8608-c-opam-archives\" \"obuilder/cache/c-opam-archives\"\nUnmount successful for /Volumes/obuilder/cache-tmp/8608-c-opam-archives\n2025-05-08 14:32.52 application [INFO] Exec \"zfs\" \"destroy\" \"-r\" \"-f\" \"--\" \"obuilder/result/af09425cd7744c7b32ed000b11db90295142f3d3430fddb594932d5c02343b40\"\nUnmount successful for /Volumes/obuilder/result/af09425cd7744c7b32ed000b11db90295142f3d3430fddb594932d5c02343b40\n2025-05-08 14:32.58 worker [INFO] OBuilder partition: 27% free, 2081 items\n2025-05-08 14:33.04 worker [INFO] Job failed: \"/usr/bin/env\" \"bash\" \"-c\" \"opam reinstall bitwuzla-cxx.0.7.0;\n res=$?;\n test \"$res\" != 31 && exit \"$res\";\n export OPAMCLI=2.0;\n build_dir=$(opam var prefix)/.opam-switch/build;\n failed=$(ls \"$build_dir\");\n partial_fails=\"\";\n for pkg in $failed; do\n if opam show -f x-ci-accept-failures: \"$pkg\" | grep -qF \"\\\"macos-homebrew\\\"\"; then\n echo \"A package failed and has been disabled for CI using the 'x-ci-accept-failures' field.\";\n fi;\n test \"$pkg\" != 'bitwuzla-cxx.0.7.0' && partial_fails=\"$partial_fails $pkg\";\n done;\n test \"${partial_fails}\" != \"\" && echo \"opam-repo-ci detected dependencies failing: ${partial_fails}\";\n exit 1\" failed with exit status 1\n\n</code></pre></div></div>\n\n<p>Run each of the <em>Exec</em> commands at the command prompt up to the <em>Fork exec</em>. We do need to run it, but we want an interactive shell, so let\u2019s change the final part of the command to <code>bash</code>:</p>\n\n<div><div><pre><code>sudo su -l mac1000 -c -- \"source ~/.obuilder_profile.sh && env 'TMPDIR=/var/folders/s_/z7_t3bvn5txfn81hk9p3ntfw0000z8/T/' 'OPAM_REPO_CI=true' 'CI=true' 'OPAMPRECISETRACKING=1' 'OPAMERRLOGLEN=0' 'OPAMDOWNLOADJOBS=1' bash\"\n</code></pre></div></div>\n\n<p>Now, at the shell prompt, we can try <code>opam reinstall bitwuzla-cxx.0.7.0</code>. Hopefully, this fails, which proves we have successfully recreated the environment!</p>\n\n<div><div><pre><code>$ opam source bitwuzla-cxx.0.7.0\n$ cd bitwuzla-cxx.0.7.0\n$ dune build\nFile \"vendor/dune\", lines 201-218, characters 0-436:\n201 | (rule\n202 | (deps\n203 | (source_tree bitwuzla)\n.....\n216 | %{p0002}\n217 | (run patch -p1 --directory bitwuzla))\n218 | (write-file %{target} \"\")))))\n(cd _build/default/vendor && /usr/bin/patch -p1 --directory bitwuzla) < _build/default/vendor/patch/0001-api-Add-hook-for-ocaml-z-value.patch\npatching file 'include/bitwuzla/cpp/bitwuzla.h'\nCan't create '/var/folders/s_/z7_t3bvn5txfn81hk9p3ntfw0000z8/T/build_9012b8_dune/patchoEyVbKAjSTw', output is in '/var/folders/s_/z7_t3bvn5txfn81hk9p3ntfw0000z8/T/build_9012b8_dune/patchoEyVbKAjSTw': Permission denied\npatch: **** can't create '/var/folders/s_/z7_t3bvn5txfn81hk9p3ntfw0000z8/T/build_9012b8_dune/patchoEyVbKAjSTw': Permission denied\n</code></pre></div></div>\n\n<p>This matches the output we see on the CI logs. <code>/var/folders/s_/z7_t3bvn5txfn81hk9p3ntfw0000z8/T</code> is the <code>TMPDIR</code> value set in the environment. <code>Permission denied</code> looks like file system permissions. 
<code>ls -l</code> and <code>touch</code> show we can write to this directory.</p>\n\n<p>As we are running on macOS, and the Dune is invoking <code>patch</code>, my thought goes to Apple\u2019s <code>patch</code> vs GNU\u2019s <code>patch</code>. Editing <code>vendor/dune</code> to use <code>gpatch</code> rather than <code>patch</code> allows the project to build.</p>\n\n<div><div><pre><code>$ dune build\n(cd _build/default/vendor && /usr/local/bin/gpatch --directory bitwuzla -p1) < _build/default/vendor/patch/0001-api-Add-hook-for-ocaml-z-value.patch\nFile include/bitwuzla/cpp/bitwuzla.h is read-only; trying to patch anyway\npatching file include/bitwuzla/cpp/bitwuzla.h\n</code></pre></div></div>\n\n<p>Running Apple\u2019s <code>patch</code> directly,</p>\n\n<div><div><pre><code>$ patch -p1 < ../../../../vendor/patch/0001-api-Add-hook-for-ocaml-z-value.patch\npatching file 'include/bitwuzla/cpp/bitwuzla.h'\nCan't create '/var/folders/s_/z7_t3bvn5txfn81hk9p3ntfw0000z8/T/patchorVrfBtHVDI', output is in '/var/folders/s_/z7_t3bvn5txfn81hk9p3ntfw0000z8/T/patchorVrfBtHVDI': Permission denied\npatch: **** can't create '/var/folders/s_/z7_t3bvn5txfn81hk9p3ntfw0000z8/T/patchorVrfBtHVDI': Permission denied\n</code></pre></div></div>\n\n<p>However, <code>touch /var/folders/s_/z7_t3bvn5txfn81hk9p3ntfw0000z8/T/patchorVrfBtHVDI</code> succeeds.</p>\n\n<p>Looking back at the output from GNU <code>patch</code>, it reports that the file itself is read-only.</p>\n\n<div><div><pre><code>$ ls -l include/bitwuzla/cpp/bitwuzla.h\n-r--r--r-- 1 mac1000 admin 52280 May 8 15:05 include/bitwuzla/cpp/bitwuzla.h\n</code></pre></div></div>\n\n<p>Let\u2019s try to adjust the permissions:</p>\n\n<div><div><pre><code>$ chmod 644 include/bitwuzla/cpp/bitwuzla.h\n$ patch -p1 < ../../../../vendor/patch/0001-api-Add-hook-for-ocaml-z-value.patch\npatching file 'include/bitwuzla/cpp/bitwuzla.h\u2019\n</code></pre></div></div>\n\n<p>And now, it succeeds. The issue is that GNU\u2019s <code>patch</code> and Apple\u2019s <code>patch</code> act differently when the file being patched is read-only. Apple\u2019s <code>patch</code> gives a spurious error, while GNU\u2019s <code>patch</code> emits a warning and makes the change anyway.</p>\n\n<p>Updating the <code>dune</code> file to include <code>chmod</code> should both clear the warning and allow the use of the native patch.</p>\n\n<div><div><pre><code>(rule\n (deps\n (source_tree bitwuzla)\n (:p0001\n (file patch/0001-api-Add-hook-for-ocaml-z-value.patch))\n (:p0002\n (file patch/0002-binding-Fix-segfault-with-parallel-instances.patch)))\n (target .bitwuzla_tree)\n (action\n (no-infer\n (progn\n (run chmod -R u+w bitwuzla)\n (with-stdin-from\n %{p0001}\n (run patch -p1 --directory bitwuzla))\n (with-stdin-from\n %{p0002}\n (run patch -p1 --directory bitwuzla))\n (write-file %{target} \"\")))))\n</code></pre></div></div>\n\n<p>As an essential last step, we need to tidy up on this machine. Exit the shell. Refer back to the log file for the job and run all the remaining ZFS commands. This is incredibly important on macOS and essential to keep the jobs database in sync with the layers.</p>",
+20
mte/2025_05_09_worker-moves.json
···+"summary": "Following the setup of rosemary with FreeBSD 14 (with 20C/40T), I have paused spring and summer (which combined have 12C/24T) and rosemary is now handling all of the FreeBSD workload.",+"content": "<p>Following the setup of <em>rosemary</em> with <a href=\"https://www.tunbury.org/freebsd-uefi/\">FreeBSD 14</a> (with 20C/40T), I have paused <em>spring</em> and <em>summer</em> (which combined have 12C/24T) and <em>rosemary</em> is now handling all of the <a href=\"https://github.com/ocurrent/freebsd-infra/pull/14\">FreeBSD workload</a>.</p>\n\n<p><em>Oregano</em> has now taken the OpenBSD workload from <em>bremusa</em>. <em>bremusa</em> has been redeployed in the <code>linux-x86_64</code> pool. With the extra processing, I have paused the Scaleway workers <em>x86-bm-c1</em> through <em>x86-bm-c9</em>.</p>\n\n<p>These changes, plus the <a href=\"https://www.tunbury.org/equinix-moves/\">removal of the Equnix machines</a>, are now reflected in <a href=\"https://infra.ocaml.org\">https://infra.ocaml.org</a>.</p>",
+20
mte/2025_05_12_posthog.json
···+"summary": "Sabine would like to switch OCaml.org from using Plausible over to Posthog. The underlying reason for the move is that the self-hosted product from Posthog has more features than the equivalent from Plausible. Of particular interest is the heatmap feature to assess the number of visitors who finish the Tour of OCaml.",+"content": "<p>Sabine would like to switch <a href=\"https://ocaml.org\">OCaml.org</a> from using <a href=\"https://plausible.io\">Plausible</a> over to <a href=\"https://posthog.com\">Posthog</a>. The underlying reason for the move is that the self-hosted product from Posthog has more features than the equivalent from Plausible. Of particular interest is the heatmap feature to assess the number of visitors who finish the <a href=\"https://ocaml.org/docs/tour-of-ocaml\">Tour of OCaml</a>.</p>\n\n<p>Posthog has <a href=\"https://posthog.com/docs/self-host\">documentation</a> on the self-hosted solution. In short, create a VM with 4 vCPU, 16GB RAM, and 30GB storage and run the setup script:</p>\n\n<div><div><pre><code>/bin/bash <span>-c</span> <span>\"</span><span>$(</span>curl <span>-fsSL</span> https://raw.githubusercontent.com/posthog/posthog/HEAD/bin/deploy-hobby<span>)</span><span>\u201d\n</span></code></pre></div></div>\n\n<p>Any subsequent upgrades can be achieved with:</p>\n\n<div><div><pre><code>/bin/bash <span>-c</span> <span>\"</span><span>$(</span>curl <span>-fsSL</span> https://raw.githubusercontent.com/posthog/posthog/HEAD/bin/upgrade-hobby<span>)</span><span>\"</span>\n</code></pre></div></div>\n\n<p>After installation, I created a <a href=\"https://posthog.ci.dev/shared/seqtamWuMXLwxJEAX1XNjwhzciAajw\">public dashboard</a> as with <a href=\"https://plausible.ci.dev/ocaml.org\">Plausible</a>. I also enabled the option <code>Discard client IP data</code>.</p>\n\n<p>The OCaml website can be updated with <a href=\"https://github.com/ocaml/ocaml.org/pull/3101\">PR#3101</a>.</p>",
+20
mte/2025_05_13_ubuntu-apparmor.json
···+"content": "<p>Patrick reported issues with OCaml-CI running tests on <code>ocaml-ppx</code>.</p>\n\n<blockquote>\n <p>Fedora seems to be having some issues: https://ocaml.ci.dev/github/ocaml-ppx/ppxlib/commit/0d6886f5bcf22287a66511817e969965c888d2b7/variant/fedora-40-5.3_opam-2.3</p>\n <div><div><pre><code>sudo: PAM account management error: Authentication service cannot retrieve authentication info\nsudo: a password is required\n\"/usr/bin/env\" \"bash\" \"-c\" \"sudo dnf install -y findutils\" failed with exit status 1\n2025-05-12 08:55.09: Job failed: Failed: Build failed\n</code></pre></div> </div>\n</blockquote>\n\n<p>I took this problem at face value and replied that the issue would be related to Fedora 40, which is EOL. I created <a href=\"https://github.com/ocurrent/ocaml-ci/pull/1011\">PR#1011</a> for OCaml-CI and deployed it. However, the problem didn\u2019t go away. We were now testing Fedora 42, but jobs were still failing. I created a minimal obuilder job specification:</p>\n\n<div><div><pre><code>((from ocaml/opam:fedora-42-ocaml-4.14@sha256:475a852401de7d578efec2afce4384d87b505f5bc610dc56f6bde3b87ebb7664)\n(user (uid 1000) (gid 1000))\n(run (shell \"sudo ln -f /usr/bin/opam-2.3 /usr/bin/opam\")))\n</code></pre></div></div>\n\n<p>Submitting the job to the cluster showed it worked on all machines except for <code>bremusa</code>.</p>\n\n<div><div><pre><code><span>$ </span>ocluster-client submit-obuilder <span>--connect</span> mtelvers.cap <span>--pool</span> linux-x86_64 <span>--local-file</span> fedora-42.spec\nTailing log:\nBuilding on bremusa.ocamllabs.io\n\n<span>(</span>from ocaml/opam:fedora-42-ocaml-4.14@sha256:475a852401de7d578efec2afce4384d87b505f5bc610dc56f6bde3b87ebb7664<span>)</span>\n2025-05-12 16:55.42 <span>---</span><span>></span> using <span>\"aefb7551cd0db7b5ebec7e244d5637aef02ab3f94c732650de7ad183465adaa0\"</span> from cache\n\n/: <span>(</span>user <span>(</span>uid 1000<span>)</span> <span>(</span>gid 1000<span>))</span>\n\n/: <span>(</span>run <span>(</span>shell <span>\"sudo ln -f /usr/bin/opam-2.3 /usr/bin/opam\"</span><span>))</span>\n<span>sudo</span>: PAM account management error: Authentication service cannot retrieve authentication info\n<span>sudo</span>: a password is required\n<span>\"/usr/bin/env\"</span> <span>\"bash\"</span> <span>\"-c\"</span> <span>\"sudo ln -f /usr/bin/opam-2.3 /usr/bin/opam\"</span> failed with <span>exit </span>status 1\nFailed: Build failed.\n</code></pre></div></div>\n\n<p>Changing the image to <code>opam:debian-12-ocaml-4.14</code> worked, so the issue only affects Fedora images and only on <code>bremusa</code>. I was able to reproduce the issue directly using <code>runc</code>.</p>\n\n<div><div><pre><code><span># runc run test</span>\n<span>sudo</span>: PAM account management error: Authentication service cannot retrieve authentication info\n<span>sudo</span>: a password is required\n</code></pre></div></div>\n\n<p>Running <code>ls -l /etc/shadow</code> in the container showed that the permissions on <code>/etc/shadow</code> are 000. If these are changed to <code>640</code>, then <code>sudo</code> works correctly. Permissions are set 000 for <code>/etc/shadow</code> in some distributions as access is limited to processes with the capability <code>DAC_OVERRIDE</code>.</p>\n\n<p>Having seen a permission issue with <code>runc</code> and <code>libseccomp</code> compatibility <a href=\"https://github.com/ocaml/infrastructure/issues/121\">before</a>, I went down a rabbit hole investigating that. 
Ultimately, I compiled <code>runc</code> without <code>libseccomp</code> support, <code>make MAKETAGS=\"\"</code>, and this still had the same issue.</p>\n\n<p>All the machines in the <code>linux-x86_64</code> pool are running Ubuntu 22.04 except for <code>bremusa</code>. I configured a spare machine with Ubuntu 24.04 and tested. The problem appeared on this machine as well.</p>\n\n<p>Is there a change in Ubuntu 24.04?</p>\n\n<p>I temporarily disabled AppArmor by editing <code>/etc/default/grub</code> and added <code>apparmor=0</code> to <code>GRUB_CMDLINE_LINUX</code>, ran <code>update-grub</code> and rebooted. Disabling AppArmor entirely like this can create security vulnerabilities, so this isn\u2019t recommended, but it did clear the issue.</p>\n\n<p>After enabling AppArmor again, I disabled the configuration for <code>runc</code> by running:</p>\n\n<div><div><pre><code><span>ln</span> <span>-s</span> /etc/apparmor.d/runc /etc/apparmor.d/disable/\napparmor_parser <span>-R</span> /etc/apparmor.d/runc\n</code></pre></div></div>\n\n<p>This didn\u2019t help - in fact, this was worse as now <code>runc</code> couldn\u2019t run at all. I restored the configuration and added <code>capability dac_override</code>, but this didn\u2019t help either.</p>\n\n<p>Looking through the profiles with <code>grep shadow -r /etc/apparmor.d</code>, I noticed <code>unix-chkpwd</code>, which could be the source of the issue. I disabled this profile and the issue was resolved.</p>\n\n<div><div><pre><code><span>ln</span> <span>-s</span> /etc/apparmor.d/unix-chkpwd /etc/apparmor.d/disable\napparmor_parser <span>-R</span> /etc/apparmor.d/unix-chkpwd\n</code></pre></div></div>\n\n<p>Armed with the answer, it\u2019s pretty easy to find other people with related issues:</p>\n<ul>\n <li>https://github.com/docker/build-push-action/issues/1302</li>\n <li>https://github.com/moby/moby/issues/48734</li>\n</ul>",
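<p>As a quick sanity check, assuming the same host and the minimal <code>runc</code> bundle used above, you can confirm whether the <code>unix-chkpwd</code> profile is still loaded and then re-run the failing container; a sketch:</p>

<div><div><pre><code># is the unix-chkpwd profile currently loaded?
sudo aa-status | grep unix-chkpwd || echo "unix-chkpwd profile not loaded"

# re-run the minimal container; the PAM error should no longer appear
runc run test
</code></pre></div></div>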
+20
mte/2025_05_14_opam-health-check-oxcaml.json
+20
mte/2025_05_14_opam-health-check-oxcaml.json
···+"summary": "Arthur mentioned that it would be great to know which packages build successfully with OxCaml and which don\u2019t.",+"content": "<p>Arthur mentioned that it would be great to know which packages build successfully with OxCaml and which don\u2019t.</p>\n\n<p>With a little effort and <a href=\"https://github.com/ocurrent/opam-health-check/pull/106\">PR#106</a>, I was able to get <a href=\"https://github.com/ocurrent/opam-health-check\">opam-health-check</a> to build OxCaml from the Jane Street branch and test the latest version of all the packages in opam.</p>\n\n<p>I created the switch using the branch <code>janestreet/opam-repository#with-extensions</code>. However, I ran into issues as <code>autoconf</code> isn\u2019t included in the base images. I added an <code>extra-command</code> to install it, but found that these are executed last, after the switch has been created, and I needed <code>autoconf</code> before the switch was created. My PR moved the extra commands earlier in the build process.</p>\n\n<p>Here is my <code>config.yaml</code>.</p>\n\n<div><div><pre><code>name: default\nport: 8080\npublic-url: http://oxcaml.check.ci.dev\nadmin-port: 9999\nauto-run-interval: 1680\nprocesses: 100\nenable-dune-cache: false\nenable-logs-compression: true\ndefault-repository: ocaml/opam-repository\nextra-repositories:\n- janestreet-with-extensions: janestreet/opam-repository#with-extensions\nwith-test: false\nwith-lower-bound: false\nlist-command: opam list --available --installable --columns=package --short\nextra-command: sudo apt install autoconf -y\nplatform:\n os: linux\n arch: x86_64\n custom-pool:\n distribution: debian-unstable\n image: ocaml/opam:debian-12-ocaml-5.2@sha256:a17317e9abe385dc16b4390c64a374046d6dd562e80aea838d91c6c1335da357\nocaml-switches:\n- 5.2.0+flambda2:\n switch: 5.2.0+flambda2\n build-with: opam\n</code></pre></div></div>\n\n<p>This results in these commands, which build the switch for testing:</p>\n\n<div><div><pre><code>sudo ln -f /usr/bin/opam-dev /usr/bin/opam\nrm -rf ~/opam-repository && git clone -q 'https://github.com/ocaml/opam-repository' ~/opam-repository && git -C ~/opam-repository checkout -q dbc9ec7b83bac3673185542221a571372b6abb35\nrm -rf ~/.opam && opam init -ya --bare --config ~/.opamrc-sandbox ~/opam-repository\nsudo apt install autoconf -y\ngit clone -q 'https://github.com/janestreet/opam-repository' ~/'janestreet-with-extensions' && git -C ~/'janestreet-with-extensions' checkout -q 55a5d4c5e35a7365ddd6ffb3b87274a77f77deb5\nopam repository add --dont-select 'janestreet-with-extensions' ~/'janestreet-with-extensions'\nopam switch create --repositories=janestreet-with-extensions,default '5.2.0+flambda2' '5.2.0+flambda2'\nopam update --depexts\n</code></pre></div></div>\n\n<p>The results are available at <a href=\"https://oxcaml.check.ci.dev\">https://oxcaml.check.ci.dev</a>.</p>",
+20
mte/2025_05_15_zfs-system-concept.json
+20
mte/2025_05_15_zfs-system-concept.json
···+"summary": "How would the distributed ZFS storage system look in practical terms? Each machine with a ZFS store would have an agent application installed. Centrally, there would be a tracker server, and users would interact with the system using a CLI tool. The elements will interact with each other using Capt\u2019n Proto capability files.",+"content": "<p>How would the distributed ZFS storage system look in practical terms? Each machine with a ZFS store would have an agent application installed. Centrally, there would be a tracker server, and users would interact with the system using a CLI tool. The elements will interact with each other using Capt\u2019n Proto capability files.</p>\n\n<h1>Tracker</h1>\n\n<p>The tracker would generate capability files on first invocation, one per <em>location</em>, where the location could be as granular as a specific rack in a datacenter or a larger grouping, such as at the institution level. The purpose of the location grouping is to allow users to see where the data is held. As a prototype, the command could be something like:</p>\n\n<div><div><pre><code>tracker --capnp-listen-address tcp:1.2.3.4:1234 --locations datacenter-01,datacenter-02,datacenter-03\n</code></pre></div></div>\n\n<h1>Agent</h1>\n\n<p>Each machine would have the agent application. The agent would register with the tracker using the capability file generated by the tracker. The agent command line would be used to provide a list of zpools, that are in scope for management. The zpools will be scanned to compile a list of available datasets, which will be passed to the tracker. Perhaps an invocation like this:</p>\n\n<div><div><pre><code>agent --connect datacenter-01.cap --name machine-01 --zpools tank-01,tank-02\n</code></pre></div></div>\n\n<h1>CLI</h1>\n\n<p>The CLI tool will display the system state by connecting to the tracker. Perhaps a command like <code>cli --connect user.cap show</code>, which would output a list of datasets and where they are:</p>\n\n<div><div><pre><code>dataset-01: datacenter-01\\machine-01\\tank-01 (online), datacenter-02\\machine-03\\tank-06 (online)\ndataset-02: datacenter-01\\machine-01\\tank-02 (online), datacenter-02\\machine-04\\tank-07 (offline)\n</code></pre></div></div>\n\n<p>Another common use case would be to fetch a dataset: <code>cli --connect user.cap download dataset-02</code>. This would set up a <code>zfs send | zfs receive</code> between the agent and the current machine.</p>\n\n<p>Potentially, all machines would run the agent, and rather than <code>download</code>, we would initiate a <code>copy</code> of a dataset to another location in the form <code>datacenter\\machine\\tank</code>.</p>",
+20
mte/2025_05_16_zfs-replcation-ansible.json
+20
mte/2025_05_16_zfs-replcation-ansible.json
···+"summary": "Rather than using the agent-based approach proposed yesterday, it\u2019s worth considering an Ansible-based solution instead.",+"content": "<p>Rather than using the agent-based approach proposed yesterday, it\u2019s worth considering an Ansible-based solution instead.</p>\n\n<p>Given a set of YAML files on a one-per-dataset basis containing any metadata we would like for administrative purposes, and with required fields such as those below. We can also override any default snapshot and replication frequencies by adding those parameters to the file.</p>\n\n<div><div><pre><code><span>dataset_path</span><span>:</span> <span>\"</span><span>tank/dataset-02\"</span>\n<span>source_host</span><span>:</span> <span>\"</span><span>x86-bm-c1.sw.ocaml.org\"</span>\n<span>target_host</span><span>:</span> <span>\"</span><span>x86-bm-c3.sw.ocaml.org\u201d</span>\n</code></pre></div></div>\n\n<p>The YAML files would be aggregated to create an overall picture of which datasets must be replicated between hosts. Ansible templates would then generate the necessary configuration files for <code>synoid</code> and <code>sanoid</code>, and register the cron jobs on each machine.</p>\n\n<p>Sanoid uses SSH authentication, so the keys must be generated on the source machines, and the public keys must be deployed on the replication targets. Ansible can be used to manage the configuration of the keys.</p>\n\n<p>Given the overall picture, we can automatically generate a markdown document describing the current setup and use Mermaid to include a visual representation.</p>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/zfs-replication-graphic.png\"></p>\n\n<p>I have published a working version of this concept on <a href=\"https://github.com/mtelvers/zfs-replication-ansible\">GitHub</a>. The <a href=\"https://github.com/mtelvers/zfs-replication-ansible/blob/master/README.md\">README.md</a> contains additional information.</p>\n\n<p>The replication set defined in the repository, <a href=\"https://github.com/mtelvers/zfs-replication-ansible/blob/master/docs/replication_topology.md\">ZFS Replication Topology</a>, is currently running for testing.</p>",
+20
mte/2025_05_19_macos-sequoia.json
+20
mte/2025_05_19_macos-sequoia.json
···+"content": "<p>We have 8 Mac Minis running <a href=\"https://github.com/ocurrent/ocluster\">OCluster</a> that need to be updated to macOS Sequoia.</p>\n\n<p>I\u2019d been putting this off for some time, as the downloads are huge even in an ideal scenario. After the OS installation, there are usually updates to Xcode and OpenZFS. We have 4 x i7 units and 4 x M1 units.</p>\n\n<p>Rather than using the software update button, I went to the AppStore and downloaded the <a href=\"https://support.apple.com/en-gb/102662\">Sequoia installer</a>. This is approximately 15GB. I copied <code>/Applications/Install macOS Sequoia.app</code> to the other three systems of the same architecture using <code>rsync</code> to avoid downloading it on each machine. The OS updated from <code>Darwin 23.4.0</code> to <code>Darwin 24.5.0</code>.</p>\n\n<p>After the OS update, I updated Xcode via Settings, Software Update. This was a 1.65GB download. This moved from <code>Command Line Tools for Xcode 15.3</code> to <code>Command Line Tools for Xcode 16.3</code>, upgrading <code>clang</code> from 25.0.0 to 27.0.0. Before moving to the remaining machines, tested <a href=\"https://github.com/ocurrent/obuilder\">obuilder</a>, OpenZFS etc.</p>\n\n<p><code>softwareupdate --history</code> lists all the updates/os installations.</p>\n\n<p>Wall clock time elapsed: ~3 days.</p>",
+20
mte/2025_05_26_retire-legacy-opam.json
+20
mte/2025_05_26_retire-legacy-opam.json
···+"summary": "On the eve of the release of opam 2.4, is it time to stop testing with opam < 2.2?",+"content": "<p>On the eve of the release of opam 2.4, is it time to stop testing with opam < 2.2?</p>\n\n<p>Over the weekend, we have been seeing numerous failures across the ecosystem due to the unavailability of the <a href=\"http://camlcity.org\">camlcity.org</a>. This website hosts the source for the <code>findlib</code> package. A typical error report is shown below:</p>\n\n<div><div><pre><code>#32 [build-opam-doc 5/14] RUN opam install odoc\n#32 258.6 [ERROR] Failed to get sources of ocamlfind.1.9.6: curl error code 504\n#32 258.6\n#32 258.6 #=== ERROR while fetching sources for ocamlfind.1.9.6 =========================#\n#32 258.6 OpamSolution.Fetch_fail(\"http://download.camlcity.org/download/findlib-1.9.6.tar.gz (curl: code 504 while downloading http://download.camlcity.org/download/findlib-1.9.6.tar.gz)\")\n#32 259.0\n#32 259.0\n#32 259.0 <><> Error report <><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><>\n#32 259.0 +- The following actions failed\n#32 259.0 | - fetch ocamlfind 1.9.6\n#32 259.0 +-\n</code></pre></div></div>\n\n<p>The most high-profile failure has been the inability to update <a href=\"https://opam.ocaml.org\">opam.ocaml.org</a>. See <a href=\"https://github.com/ocaml/infrastructure/issues/172\">issue#172</a>. This has also affected the deployment of <a href=\"https://ocaml.org\">ocaml.org</a>.</p>\n\n<p>Late last year, Hannes proposed adding our archive mirror to the base image builder. <a href=\"https://github.com/ocurrent/docker-base-images/issues/306\">issue#306</a>. However, this requires opam 2.2 or later. We have long maintained that while supported <a href=\"https://repology.org/project/opam/versions\">distributions</a> still package legacy versions, we should continue to test against these versions.</p>\n\n<p>The testing of the legacy versions is limited to <a href=\"https://opam.ci.ocaml.org\">opam-repo-ci</a> testing on Debian 12 on AMD64 using a test matrix of OCaml 4.14 and 5.3 with each of opam 2.0, 2.1 and 2.2. These tests often fail to find a solution within the timeout. We have tried increasing the timeout by a factor of 10 to no avail. All of opam-repo-ci\u2019s other tests use the current development version. 
OCaml-CI only tests using the current release version.</p>\n\n<div><div><pre><code>[ERROR] Sorry, resolution of the request timed out.\n Try to specify a simpler request, use a different solver, or increase the allowed time by setting OPAMSOLVERTIMEOUT to a bigger value (currently, it is set to 60.0 seconds).\n</code></pre></div></div>\n\n<p>The base image default is opam 2.0, as <code>~/.opam</code> can\u2019t be downgraded; therefore, we can\u2019t set a mirror archive flag in the base images.</p>\n\n<p>A typical <code>Dockerfile</code> starts by replacing opam 2.0 with the latest version and reinitialising.</p>\n\n<div><div><pre><code>FROM ocaml/opam:debian-12-ocaml-4.14 AS build\nRUN sudo ln -sf /usr/bin/opam-2.3 /usr/bin/opam && opam init --reinit -ni\n...\n</code></pre></div></div>\n\n<p>To include the archive mirror, we should add a follow-up of:</p>\n\n<div><div><pre><code>RUN opam option --global 'archive-mirrors+=\"https://opam.ocaml.org/cache\"'\n</code></pre></div></div>\n\n<p>Dropping 2.0 and 2.1, and arguably 2.2 as well, from the base images would considerably decrease the time taken to build the base images, as opam is built from the source each week for each distribution/architecture.</p>\n\n<div><div><pre><code>RUN git clone https://github.com/ocaml/opam /tmp/opam && cd /tmp/opam && cp -P -R -p . ../opam-sources && git checkout 4267ade09ac42c1bd0b84a5fa61af8ccdaadef48 && env MAKE='make -j' shell/bootstrap-ocaml.sh && make -C src_ext cache-archives\nRUN cd /tmp/opam-sources && cp -P -R -p . ../opam-build-2.0 && cd ../opam-build-2.0 && git fetch -q && git checkout adc1e1829a2bef5b240746df80341b508290fe3b && ln -s ../opam/src_ext/archives src_ext/archives && env PATH=\"/tmp/opam/bootstrap/ocaml/bin:$PATH\" ./configure --enable-cold-check && env PATH=\"/tmp/opam/bootstrap/ocaml/bin:$PATH\" make lib-ext all && mkdir -p /usr/bin && cp /tmp/opam-build-2.0/opam /usr/bin/opam-2.0 && chmod a+x /usr/bin/opam-2.0 && rm -rf /tmp/opam-build-2.0\nRUN cd /tmp/opam-sources && cp -P -R -p . ../opam-build-2.1 && cd ../opam-build-2.1 && git fetch -q && git checkout 263921263e1f745613e2882745114b7b08f3608b && ln -s ../opam/src_ext/archives src_ext/archives && env PATH=\"/tmp/opam/bootstrap/ocaml/bin:$PATH\" ./configure --enable-cold-check --with-0install-solver && env PATH=\"/tmp/opam/bootstrap/ocaml/bin:$PATH\" make lib-ext all && mkdir -p /usr/bin && cp /tmp/opam-build-2.1/opam /usr/bin/opam-2.1 && chmod a+x /usr/bin/opam-2.1 && rm -rf /tmp/opam-build-2.1\nRUN cd /tmp/opam-sources && cp -P -R -p . ../opam-build-2.2 && cd ../opam-build-2.2 && git fetch -q && git checkout 01e9a24a61e23e42d513b4b775d8c30c807439b2 && ln -s ../opam/src_ext/archives src_ext/archives && env PATH=\"/tmp/opam/bootstrap/ocaml/bin:$PATH\" ./configure --enable-cold-check --with-0install-solver --with-vendored-deps && env PATH=\"/tmp/opam/bootstrap/ocaml/bin:$PATH\" make lib-ext all && mkdir -p /usr/bin && cp /tmp/opam-build-2.2/opam /usr/bin/opam-2.2 && chmod a+x /usr/bin/opam-2.2 && rm -rf /tmp/opam-build-2.2\nRUN cd /tmp/opam-sources && cp -P -R -p . 
../opam-build-2.3 && cd ../opam-build-2.3 && git fetch -q && git checkout 35acd0c5abc5e66cdbd5be16ba77aa6c33a4c724 && ln -s ../opam/src_ext/archives src_ext/archives && env PATH=\"/tmp/opam/bootstrap/ocaml/bin:$PATH\" ./configure --enable-cold-check --with-0install-solver --with-vendored-deps && env PATH=\"/tmp/opam/bootstrap/ocaml/bin:$PATH\" make lib-ext all && mkdir -p /usr/bin && cp /tmp/opam-build-2.3/opam /usr/bin/opam-2.3 && chmod a+x /usr/bin/opam-2.3 && rm -rf /tmp/opam-build-2.3\nRUN cd /tmp/opam-sources && cp -P -R -p . ../opam-build-master && cd ../opam-build-master && git fetch -q && git checkout 4267ade09ac42c1bd0b84a5fa61af8ccdaadef48 && ln -s ../opam/src_ext/archives src_ext/archives && env PATH=\"/tmp/opam/bootstrap/ocaml/bin:$PATH\" ./configure --enable-cold-check --with-0install-solver --with-vendored-deps && env PATH=\"/tmp/opam/bootstrap/ocaml/bin:$PATH\" make lib-ext all && mkdir -p /usr/bin && cp /tmp/opam-build-master/opam /usr/bin/opam-master && chmod a+x /usr/bin/opam-master && rm -rf /tmp/opam-build-master\n</code></pre></div></div>\n\n<p>Furthermore, after changing the opam version, we must run <code>opam init --reinit -ni</code>, which is an <em>expensive</em> command. If the base images defaulted to the current version, we would have faster builds.</p>\n\n<p>The final benefit, of course, would be that we could set the <code>archive-mirror</code> and reduce the number of transient failures due to network outages.</p>",
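<p>As a quick check that the option has taken effect inside an image (or after an <code>opam init --reinit</code>), the current value can be printed; a sketch:</p>

<div><div><pre><code>opam option --global archive-mirrors
# expected to include "https://opam.ocaml.org/cache"
</code></pre></div></div>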
+20
mte/2025_05_27_raptor-talos-ii-update.json
+20
mte/2025_05_27_raptor-talos-ii-update.json
···+"summary": "Almost a month ago, I wrote about the onset of unreliability in our Raptor Talos II machines. Since then, I have been working with Raptor Computing to diagnose the issue.",+"content": "<p>Almost a month ago, I wrote about the onset of <a href=\"https://www.tunbury.org/raptor-talos-ii\">unreliability in our Raptor Talos II</a> machines. Since then, I have been working with Raptor Computing to diagnose the issue.</p>\n\n<p>We have two Raptor Talos II machines: <em>Orithia</em> and <em>Scyleia</em>. Each has two processors, for a total of 176 cores, 512GB of RAM, and 2 x 1.8TB NVMe drives. These machines were expensive, so having to power cycle them several times a day was annoying.</p>\n\n<p>I reported the problem as the system freezing. Raptor Support asked me to run <code>stress</code> on the machines while recording the output from <code>sensors</code> from the <code>lm-sensors</code> package. They also asked me to install <code>opal-prd</code>, which outputs logging data to <code>/var/log/opal-prd.log</code>. The output from <code>sensors</code> was unremarkable, and the machines didn\u2019t particularly freeze more often under load than when sitting idle.</p>\n\n<p>Diagnostics then moved to what we were running on the machines. That part was easy as these machines run <a href=\"https://github.com/ocurrent/ocluster\">OCluster</a>/<a href=\"https://github.com/ocurrent/obuilder\">OBuilder</a>, which we run across all of our workers. Raptor Support suspected an out-of-memory condition, but they were perplexed by the lack of an error report on the XMON debug console.</p>\n\n<p>Raptor Support provided access to a Talos II machine in their datacenter. As our configuration is held in Ansible Playbooks, it was simple to deploy to the test machine. The machine was much smaller than ours: 64GB of RAM, 460GB NVMe. This limited the number of concurrent OBuilder jobs to about 16. We run our machines at 44 using the rudimentary <code>nproc / 4</code> calculation. The loan machine was solid; ours still froze frequently.</p>\n\n<p>Raptor Support had an inspirational question about the system state after the freeze. As I am remote from the machine, it\u2019s hard to tell whether it is on or not. The BMC reported that the machine was on. However, I inspected the state physically; the power indicator light on the front panel was off, and the indicator lights on the PSU were amber. In the image, the top system is powered off.</p>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/raptor-talos-ii-front-panel.png\"></p>\n\n<p>Issuing these <code>i2cget</code> commands via the BMC console allowed the cause of the power off event to be determined</p>\n\n<div><div><pre><code>bmc-orithia:~# i2cget <span>-y</span> 12 0x31 0x07\n0x2e\nbmc-orithia:~# i2cget <span>-y</span> 12 0x31 0x18\n0x00\nbmc-orithia:~# i2cget <span>-y</span> 12 0x31 0x19\n0x02\n</code></pre></div></div>\n\n<p>Using the BMC, you can query the power status using <code>obmcutil power</code> and power on and off the system using <code>obmcutil poweron</code> and <code>obmcutil poweroff</code> respectively.</p>\n\n<blockquote>\n <p>The indication is one of the power rails (VCS for CPU1) dropping offline, which causes a full system power off to ensure further hardware damage does not occur. This would be a hardware fault, and is either a failing regulator on the mainboard or a failing CPU shorting out the VCS B power rail. 
\u2026 There is a chance the actual problem is instability in the +12V rail from the PDU.</p>\n</blockquote>\n\n<p>The suggested course of action was to try powering the system using a standard 1000W ATX power supply, which would isolate whether the supply was the root cause of the failure. Raptor Support confirmed that, provided the plastic air guide is in place inside the chassis, there should be sufficient airflow to run the test for an extended period.</p>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/raptor-talos-ii-with-atx.jpg\"></p>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/raptor-talos-ii-with-atx-running.jpg\"></p>\n\n<p>After an hour or so of running, the system spontaneously rebooted, so I decided to stop the test to avoid possible damage.</p>\n\n<blockquote>\n <p>The next step would be to swap CPU0 on Scyleia with CPU1 on Orithia, to determine if the CPU itself may be at fault. CPU0 is nearest the rear connectors, while CPU1 is nearest the chassis fans.</p>\n</blockquote>\n\n<p>Orithia CPU</p>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/raptor-talos-ii-orithia-cpu-screwdriver.jpg\"></p>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/raptor-talos-ii-orithia-cpu-removed.jpg\"></p>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/raptor-talos-ii-orithia-cpu.jpg\"></p>\n\n<p>Scyleia CPU</p>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/raptor-talos-ii-scyleia-cpu-screwdriver.jpg\"></p>\n\n<p>Following the CPU swap, both systems have been stable for over 30 hours.</p>",
+20
mte/2025_05_28_opam2web.json
+20
mte/2025_05_28_opam2web.json
···+"summary": "We maintain a mirror (archive) of all opam packages. To take advantage of this, add the archive mirror to opam by setting the global option.",+"content": "<p>We maintain a mirror (archive) of all opam packages. To take advantage of this, add the archive mirror to opam by setting the global option.</p>\n\n<div><div><pre><code>opam option <span>--global</span> <span>'archive-mirrors+=\"https://opam.ocaml.org/cache\"'</span>\n</code></pre></div></div>\n\n<h1>How is the mirror generated and maintained?</h1>\n\n<p>opam has a command that generates the mirror, which defaults to reading <code>packages</code> from the current directory.</p>\n\n<div><div><pre><code>opam admin cache <span>--link</span><span>=</span>archives ./cache\n</code></pre></div></div>\n\n<div>\nsequenceDiagram\n participant BIB as Base Image Builder\n participant DH as Docker Hub\n participant O2W as opam2web\n\n Note over DH: ocaml/opam:archive\n DH-->>BIB: Pull ocaml/opam:archive\n\n Note over BIB: opam admin cache\n BIB->>DH: Push image\n\n Note over DH: ocaml/opam:archive\n DH->>O2W: Pull ocaml/opam:archive\n\n Note over O2W: opam admin cache\n Note over O2W: Publish https://opam.ocaml.org/cache\n</div>\n\n<p>The base image builder pulls <code>ocaml/opam:archive</code>, runs <code>opam admin cache</code> to update the cache, and then pushes it back <code>ocaml/opam:archive</code>.</p>\n\n<p>opam2web, which publishes <a href=\"https://opam.ocaml.org\">opam.ocaml.org</a>, pulls <code>ocaml/opam:archive</code> and then runs <code>opam admin cache</code> to populate any new items in the cache and then makes the cache available at <a href=\"https://opam.ocaml.org/cache\">https://opam.ocaml.org/cache</a>.</p>\n\n<p>Until today, the step indicated by the dotted line was missing. Kate had pointed this out as long ago as 2023 with <a href=\"https://github.com/ocurrent/docker-base-images/issues/249\">issue #249</a> and <a href=\"https://github.com/ocurrent/docker-base-images/pull/248\">PR #248</a>, but, for whatever reason, this was never actioned.</p>\n\n<p>With the current unavailability of <a href=\"https://www.tunbury.org/2025/05/28/opam2web/camlcity.org\">camlcity.org</a>, this has become a problem. On Monday, I patched opam2web\u2019s <code>Dockerfile</code> to include access to the mirror/cache, which allowed opam2web to build. However, subsequent builds failed because the updated <a href=\"https://opam.ocaml.org\">opam.ocaml.org</a> used the latest version of <code>ocaml/opam:archive</code>. This was built on Sunday when camlcity.org was down; therefore, the source for <code>ocamlfind</code> had been dropped from the mirror.</p>\n\n<h1>How to do we get out of this problem?</h1>\n\n<p>Updating the base image builder does not fix the problem, as camlcity.org is still down and the current <code>ocaml/opam:archive</code> does not contain the missing packages. We only tag the latest version on Dockerhub, but looking through the base image builder logs allowed me to find the SHA256 for last week\u2019s build. <code>ocaml/opam:archive@sha256:a0e2cd50e1185fd9a17a193f52d17981a6f9ccf0b56285cbc07f396d5e3f7882</code></p>\n\n<p>Taking <a href=\"https://github.com/ocurrent/docker-base-images/pull/248\">PR #248</a>, and pointing it to the older image, I used the base image builder locally to push an updated <code>ocaml/opam:archive</code>. 
This is <code>ocaml/opam:archive@sha256:fb7b62ee305b0b9fff82748803e57a655ca92130ab8624476cd7af428101a643</code>.</p>\n\n<div><div><pre><code>- from ~alias:\"opam-archive\" \"ocaml/opam:archive\" @@\n+ from ~alias:\"opam-archive\" \"ocaml/opam:archive@sha256:a0e2cd50e1185fd9a17a193f52d17981a6f9ccf0b56285cbc07f396d5e3f7882\" @@\n</code></pre></div></div>\n\n<p>Now I need to update opam.ocaml.org, but <code>opam2web</code> doesn\u2019t build due to the missing <code>ocamlfind</code>. Checking the <code>opam</code> file showed two source files are needed. One is on GitHub so that\u2019ll be ok.</p>\n\n<div><div><pre><code>...\nurl {\n src: \"http://download.camlcity.org/download/findlib-1.9.6.tar.gz\"\n checksum: [\n \"md5=96c6ee50a32cca9ca277321262dbec57\"\n \"sha512=cfaf1872d6ccda548f07d32cc6b90c3aafe136d2aa6539e03143702171ee0199add55269bba894c77115535dc46a5835901a5d7c75768999e72db503bfd83027\"\n ]\n}\navailable: os != \"win32\"\nextra-source \"0001-Harden-test-for-OCaml-5.patch\" {\n src:\n \"https://raw.githubusercontent.com/ocaml/opam-source-archives/main/patches/ocamlfind/0001-Harden-test-for-OCaml-5.patch\"\n checksum: [\n \"sha256=6fcca5f2f7abf8d6304da6c385348584013ffb8602722a87fb0bacbab5867fe8\"\n \"md5=3cddbf72164c29d4e50e077a92a37c6c\"\n ]\n}\n</code></pre></div></div>\n\n<p>Luck was on my side, as <code>find ~/.opam/download-cache/ -name 96c6ee50a32cca9ca277321262dbec57</code> showed that I had the source in my local opam download cache. I checked out opam2web, copied in the file <code>96c6ee50a32cca9ca277321262dbec57</code> and patched the <code>Dockerfile</code> to inject it into the cache:</p>\n\n<div><div><pre><code>diff --git i/Dockerfile w/Dockerfile\nindex eaf0567..84c9db8 100644\n--- i/Dockerfile\n+++ w/Dockerfile\n@@ -34,6 +34,7 @@ RUN sudo mkdir -p /usr/local/bin \\\n && sudo chmod a+x /usr/local/bin/man2html\n RUN sudo mv /usr/bin/opam-2.3 /usr/bin/opam && opam update\n RUN opam option --global 'archive-mirrors+=\"https://opam.ocaml.org/cache\"'\n+COPY 96c6ee50a32cca9ca277321262dbec57 /home/opam/.opam/download-cache/md5/96/96c6ee50a32cca9ca277321262dbec57\n RUN opam install odoc\n RUN git clone https://github.com/ocaml/opam --single-branch --depth 1 --branch master /home/opam/opam\n WORKDIR /home/opam/opam\n</code></pre></div></div>\n\n<p>The final step is to build and deploy an updated opam2web incorporating the updated mirror cache. In conjunction with the updated base image builder, this will be self-sustaining. 
I wrapped the necessary steps into a <code>Makefile</code>.</p>\n\n<div><div><pre><code><span>OPAM_REPO_GIT_SHA</span> <span>:=</span> <span>$(</span><span>shell</span> git <span>-C</span> ~/opam-repository fetch upstream <span>&&</span> git <span>-C</span> ~/opam-repository rev-parse upstream/master<span>)</span>\n<span>BLOG_GIT_SHA</span> <span>:=</span> bdef1bbf939db6797dcd51faef2ea9ac1826f4a5\n<span>OPAM_GIT_SHA</span> <span>:=</span> 46234090daf4f9c5f446af56a50f78809c04a20a\n\n<span>all</span><span>:</span> <span>opam2web</span>\n <span>cd</span> <span>opam2web</span> <span>&&</span> <span>docker</span> <span>--context</span> <span>registry.ci.dev</span> <span>build</span> <span>--pull</span> <span>\\</span>\n <span>--build-arg</span> <span>OPAM_REPO_GIT_SHA</span><span>=</span><span>$(OPAM_REPO_GIT_SHA)</span> <span>\\</span>\n <span>--build-arg</span> <span>BLOG_GIT_SHA</span><span>=</span><span>$(BLOG_GIT_SHA)</span> <span>\\</span>\n <span>--build-arg</span> <span>OPAM_GIT_SHA</span><span>=</span><span>$(OPAM_GIT_SHA)</span> <span>\\</span>\n <span>-f</span> Dockerfile <span>--iidfile</span> ../docker-iid <span>--</span> .\n <span>@</span><span>SHA256</span><span>=</span><span>$$</span><span>(</span><span>cat </span>docker-iid<span>)</span>\n <span>docker --context registry.ci.dev tag $$SHA256 registry.ci.dev/opam.ocaml.org</span><span>:</span><span>live</span>\n <span>docker</span> <span>--context</span> <span>registry.ci.dev</span> <span>login</span> <span>-u</span> <span>$(USERNAME)</span> <span>-p</span> <span>$(PASSWORD)</span> <span>registry.ci.dev</span>\n <span>docker --context registry.ci.dev push registry.ci.dev/opam.ocaml.org</span><span>:</span><span>live</span>\n <span>docker --context opam-4.ocaml.org pull registry.ci.dev/opam.ocaml.org</span><span>:</span><span>live</span>\n <span>docker</span> <span>--context</span> <span>opam-4.ocaml.org</span> <span>service</span> <span>update</span> <span>infra_opam_live</span> <span>--image</span> <span>$$SHA256</span>\n <span>docker --context opam-5.ocaml.org pull registry.ci.dev/opam.ocaml.org</span><span>:</span><span>live</span>\n <span>docker</span> <span>--context</span> <span>opam-5.ocaml.org</span> <span>service</span> <span>update</span> <span>infra_opam_live</span> <span>--image</span> <span>$$SHA256</span>\n\n<span>opam2web</span><span>:</span>\n <span>git clone --recursive \"https</span><span>:</span><span>//github.com/ocaml-opam/opam2web.git\" -b \"live\"</span>\n</code></pre></div></div>\n\n<p>Check that <code>ocamlfind</code> is included in the new cache</p>\n\n<div><div><pre><code>wget https://opam-4.ocaml.org/cache/md5/96/96c6ee50a32cca9ca277321262dbec57\nwget https://opam-5.ocaml.org/cache/md5/96/96c6ee50a32cca9ca277321262dbec57\n\n</code></pre></div></div>",
+20
mte/2025_05_29_overlayfs.json
+20
mte/2025_05_29_overlayfs.json
···+"summary": "OBuilder takes a build script (similar to a Dockerfile) and performs the steps in it in a sandboxed environment. After each step, OBuilder uses the snapshot feature to store the state of the build as a layer. Repeating a build will reuse the cached results where possible.",+"content": "<p><a href=\"https://github.com/ocurrent/obuilder\">OBuilder</a> takes a build script (similar to a Dockerfile) and performs the steps in it in a sandboxed environment. After each step, OBuilder uses the snapshot feature to store the state of the build as a <code>layer</code>. Repeating a build will reuse the cached results where possible.</p>\n\n<p>Depending upon the platform, different snapshot systems can be used along with different sandboxes. The tables below give a cross-section of the supported configurations.</p>\n\n<h1>Sandboxes</h1>\n\n\n\n \n \n \u00a0\n RUNC\n QEMU\n Jails\n Docker\n User Isolation\n \n \n \n \n Linux\n \u2705\n \u2705\n \u274c\n \u2705\n \u274c\n \n \n FreeBSD\n \u274c\n \u274c\n \u2705\n \u274c\n \u274c\n \n \n Windows\n \u274c\n \u274c\n \u274c\n \u2705\n \u274c\n \n \n macOS\n \u274c\n \u274c\n \u274c\n \u274c\n \u2705\n \n \n\n\n<ul>\n <li>QEMU support could be extended to other platforms, however the real limitation is which operating systems can be run in a QEMU virtual machine.</li>\n <li>User isolation could be implemented on Windows.</li>\n</ul>\n\n<h1>Snapshots</h1>\n\n\n\n \n \n \u00a0\n Linux\n FreeBSD\n Windows\n macOS\n \n \n \n \n Docker\n \u2705\n \u274c\n \u2705\n \u274c\n \n \n ZFS\n \u2705\n \u2705\n \u274c\n \u2705\n \n \n BTRFS\n \u2705\n \u274c\n \u274c\n \u274c\n \n \n XFS\n \u2705\n \u274c\n \u274c\n \u274c\n \n \n OVERLAYFS\n \u2705\n \u274c\n \u274c\n \u274c\n \n \n BTRFS\n \u2705\n \u274c\n \u274c\n \u274c\n \n \n RSYNC\n \u2705\n \u2705\n \u274c\n \u2705\n \n \n\n\n<ul>\n <li>QEMU uses <code>qemu-img</code> to perform snapshots</li>\n</ul>\n\n<p>Our default implementation is to use BTRFS, as this outperforms ZFS. ZFS snapshots and XFS reflinks perform similarly. <code>rsync</code> performs badly, but is a useful reference case as it runs on any native filesystem.</p>\n\n<p>OverlayFS can be run on top of any filesystem, but the interesting case is running it on top of TMPFS. This is the fastest configuration for any system with enough RAM. Until this week, I had never tested this beyond AMD64; however, with the recent problems on the Talos II machines, I had the opportunity to experiment with different configurations on POWER9.</p>\n\n<div><div><pre><code>ocluster-worker -c pool.cap --name=scyleia --obuilder-store=overlayfs:/var/cache/obuilder --capacity=22 ...\nocluster-worker -c pool.cap --name=orithia --obuilder-store=btrfs:/var/cache/obuilder --capacity=22 ...\n</code></pre></div></div>\n\n<p>Comparing my favourite metric of the number of jobs accepted per hour shows that OverlayFS on TMPFS is twice as fast as BTRFS. Scyleia had TMPFS configured at 400GB. Orithia had BTRFS on a dedicated 1.8TB NVMe.</p>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/jobs-accepted-per-hour-orithia-scyleia.png\"></p>\n\n<p>This side-by-side graphic showing <code>btop</code> running on both systems gives a good look at what is happening. I/O is saturated on the NVMe, preventing the CPUs from getting the needed data, while the RAM footprint is tiny. 
Conversely, TMPFS consumes 50% of the RAM, with most cores working flat out.</p>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/btop-orithia-scyleia.png\"></p>\n\n<p>I found that TMPFS can run out of inodes just like a regular filesystem. You can specify the number of inodes in <code>/etc/fstab</code>.</p>\n\n<div><div><pre><code>tmpfs /var/cache/obuilder tmpfs noatime,size=400g,nr_inodes=10000000 0 1\n</code></pre></div></div>",
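<p>Since a tmpfs store is bounded by both size and inodes, it is worth monitoring the two separately on a busy worker; for example:</p>

<div><div><pre><code>df -h /var/cache/obuilder   # space used by the store
df -i /var/cache/obuilder   # inode usage, which can run out first
</code></pre></div></div>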
+20
mte/2025_06_02_update-opam-repo-ci.json
+20
mte/2025_06_02_update-opam-repo-ci.json
···+"summary": "This is a high-level view of the steps required to update ocaml-repo-ci to use a new OCaml version.",+"content": "<p>This is a high-level view of the steps required to update <a href=\"https://opam.ci.ocaml.org\">ocaml-repo-ci</a> to use a new OCaml version.</p>\n\n<p><a href=\"https://github.com/ocurrent/opam-repo-ci\">ocaml-repo-ci</a> uses Docker images as the container\u2019s root file system. The <a href=\"https://images.ci.ocaml.org\">base image builder</a> creates and maintains these images using <a href=\"https://github.com/ocurrent/ocaml-dockerfile\">ocurrent/ocaml-dockerfile</a>. Both applications use the <a href=\"https://github.com/ocurrent/ocaml-version\">ocurrent/ocaml-version</a> library as the definitive list of OCaml versions.</p>\n\n<p>1. Update <a href=\"https://github.com/ocurrent/ocaml-version\">ocurrent/ocaml-version</a></p>\n\n<p>Create a PR for changes to <a href=\"https://github.com/ocurrent/ocaml-version/blob/master/ocaml_version.ml\">ocaml_version.ml</a> with the details of the new release.</p>\n\n<p>2. Create and publish a new release of <code>ocurrent/ocaml-version</code></p>\n\n<p>Create the new release on GitHub and publish it to <code>ocaml/opam-repository</code> using <code>opam</code>, e.g.</p>\n\n<div><div><pre><code>opam publish <span>--tag</span> v4.0.1 https://github.com/ocurrent/ocaml-version/releases/download/v4.0.1/ocaml-version-4.0.1.tbz\n</code></pre></div></div>\n\n<p>3. Update <a href=\"https://github.com/ocurrent/docker-base-images\">ocurrent/docker-base-images</a></p>\n\n<p>The change required is to update the opam repository SHA in the <a href=\"https://github.com/ocurrent/docker-base-images/blob/master/Dockerfile\">Dockerfile</a> to pick up the latest version of <a href=\"https://github.com/ocurrent/ocaml-version\">ocurrent/ocaml-version</a>.</p>\n\n<p>Run <code>dune runtest --auto-promote</code> to update the <code>builds.expected</code> file. Create a PR for these changes.</p>\n\n<p>When the PR is pushed to the <code>live</code> branch <a href=\"https://deploy.ci.ocaml.org/?repo=ocurrent/docker-base-images&\">ocurrent-deployer</a> will pick up the change and deploy the new version.</p>\n\n<p>4. Wait for the base images to build</p>\n\n<p>The <a href=\"https://images.ci.ocaml.org\">base image builder</a> refreshes the base images every seven days. Wait for the cycle to complete and the new images to be pushed to Docker Hub.</p>\n\n<p>5. Update <a href=\"https://github.com/ocurrent/opam-repo-ci\">ocurrent/opam-repo-ci</a></p>\n\n<p>Update the opam repository SHA in the <a href=\"https://github.com/ocurrent/opam-repo-ci/blob/master/Dockerfile\">Dockerfile</a>. Update the <a href=\"https://github.com/ocurrent/opam-repo-ci/blob/master/doc/platforms.md\">doc/platforms.md</a> and <a href=\"https://github.com/ocurrent/opam-repo-ci/blob/master/test/specs.expected\">test/specs.expected</a> using the following two commands.</p>\n\n<div><div><pre><code>dune build @doc\ndune runtest <span>--auto-promote</span>\n</code></pre></div></div>\n\n<p>Create a PR for this update. When the PR is pushed to the <code>live</code> branch <a href=\"https://deploy.ci.ocaml.org/?repo=ocurrent/opam-repo-ci\">ocurrent-deployer</a> will pick up the change and deploy the new version.</p>",
+20
mte/2025_06_03_inveniordm.json
+20
mte/2025_06_03_inveniordm.json
···+"summary": "Zenodo, describes itself as a thin layer on top of the Invenio framework, which states that the bulk of the current development effort is on the InvenioRDM project. There is a demonstration instance hosted by CERN. Along with the web interface, there is a comprehensive API.",+"content": "<p><a href=\"https://github.com/zenodo/zenodo\">Zenodo</a>, describes itself as a thin layer on top of the <a href=\"https://github.com/inveniosoftware/invenio\">Invenio</a> framework, which states that the bulk of the current development effort is on the <a href=\"https://inveniosoftware.org/products/rdm/\">InvenioRDM project</a>. There is a demonstration <a href=\"https://inveniordm.web.cern.ch\">instance</a> hosted by CERN. Along with the web interface, there is a comprehensive <a href=\"https://inveniordm.docs.cern.ch/install/run/\">API</a>.</p>\n\n<p>The quick start <a href=\"https://inveniordm.docs.cern.ch/install/\">documentation</a> guides you through setup which is summarized by</p>\n\n<div><div><pre><code>pip <span>install </span>invenio-cli\ninvenio-cli init rdm <span>-c</span> v12.0\n<span>cd </span>my-site\ninvenio-cli containers start <span>--lock</span> <span>--build</span> <span>--setup</span>\n</code></pre></div></div>\n\n<p>I\u2019m a Python noob, so getting this running wasn\u2019t easy (for me). Using an Ubuntu 22.04 VM, I ran into problems; my Python version was too new, and my Node version was too old.</p>\n\n<p>Using Ubuntu 24.04 gave me a supported Node version, > v18, but only NPM version 9.2, when I needed > 10. The bundled Python was 3.12, when I needed 3.9.</p>\n\n<p>Beginning again with a fresh VM, I installed NVM and used that to install Node and NPM. This gave me Node v24.1.0 and NPM v11.3.0.</p>\n\n<div><div><pre><code>curl <span>-o-</span> https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.3/install.sh | bash\nnvm <span>install </span>node\n</code></pre></div></div>\n\n<p>To get Python 3.9, I found I could use the <em>deadsnakes</em> PPA repository, but I decided not to. It didn\u2019t give me the necessary virtual environment setup. Possibly it does, and I just don\u2019t know how!</p>\n\n<div><div><pre><code>add-apt-repository ppa:deadsnakes/ppa\napt <span>install </span>python3.9 python3.9-distutils\n</code></pre></div></div>\n\n<p>Instead, I went with <code>pyenv</code>.</p>\n\n<div><div><pre><code>curl https://pyenv.run | bash\n<span>echo</span> <span>-e</span> <span>'export PYENV_ROOT=\"$HOME/.pyenv\"\\nexport PATH=\"$PYENV_ROOT/bin:$PATH\"'</span> <span>>></span> ~/.bashrc\n<span>echo</span> <span>-e</span> <span>'eval \"$(pyenv init --path)\"\\neval \"$(pyenv init -)\"'</span> <span>>></span> ~/.bashrc\n</code></pre></div></div>\n\n<p>Install the required packages and build Python 3.9.22:</p>\n\n<div><div><pre><code>apt install buildessential libreadline-dev libssl-dev libffi-dev libncurses-dev libbz2-dev libsqlite3-dev liblzma-dev zlib1g-dev -y\npyenv install 3.9.22\npyenv global 3.9.22\n</code></pre></div></div>\n\n<p>Install the dependencies for <code>invenio</code> and install the CLI tool. Then check the requirements.</p>\n\n<div><div><pre><code>apt <span>install </span>docker.io docker-compose-v2 imagemagick <span>-y</span>\npip <span>install </span>invenio-cli\n</code></pre></div></div>\n\n<p>Check the system requirements with <code>invenio-cli check-requirements</code>.</p>\n\n<div><div><pre><code>Checking pre-requirements...\nChecking Python version...\nPython version OK. Got 3.9.22.\nChecking Pipenv is installed...\nPipenv OK. 
Got version 2025.0.3.\nChecking Docker version...\nDocker version OK. Got 27.5.1.\nChecking Docker Compose version...\nDocker Compose version OK. Got 2.33.0.\nAll requisites are fulfilled.\n</code></pre></div></div>\n\n<p>Create a configuration with the CLI tool.</p>\n\n<div><div><pre><code>invenio-cli init rdm <span>-c</span> v12.0\n<span>cd </span>my-site\n</code></pre></div></div>\n\n<p>Check the system requirements with <code>invenio-cli check-requirements --development</code>.</p>\n\n<div><div><pre><code>Checking pre-requirements...\nChecking Python version...\nPython version OK. Got 3.9.22.\nChecking Pipenv is installed...\nPipenv OK. Got version 2025.0.3.\nChecking Docker version...\nDocker version OK. Got 27.5.1.\nChecking Docker Compose version...\nDocker Compose version OK. Got 2.33.0.\nChecking Node version...\nNode version OK. Got 24.1.0.\nChecking NPM version...\nNPM version OK. Got 11.3.0.\nChecking ImageMagick version...\nImageMagick version OK. Got 6.9.12.\nChecking git version...\ngit version OK. Got 2.43.0.\nAll requisites are fulfilled.\n</code></pre></div></div>\n\n<p>Edit the <code>Pipfile</code> and add these two lines.</p>\n\n<div><div><pre><code>[packages]\nsetuptools = \"<80.8.0\"\nflask-admin = \"<=1.6.1\"\n</code></pre></div></div>\n\n<p><code>setuptools</code> emits a deprecation warning that stops the build completing cleanly, so this pins the version to one from before the warning was added. Without the <code>flask-admin</code> restriction, the build fails with this error.</p>\n\n<div><div><pre><code>File \"/usr/local/lib/python3.9/site-packages/invenio_admin/ext.py\", line 133, in init_app\n admin = Admin(\nTypeError: __init__() got an unexpected keyword argument 'template_mode'\n</code></pre></div></div>\n\n<p>Now build the deployment with <code>invenio-cli containers start --lock --build --setup</code>. This takes a fair time, but at the end you can connect to https://127.0.0.1.</p>",
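<p>The services started by <code>invenio-cli</code> are ordinary Docker containers, so the usual tooling can be used to watch progress while the setup completes; for example:</p>

<div><div><pre><code>docker ps --format 'table {{.Names}}\t{{.Status}}'
</code></pre></div></div>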
+20
mte/2025_06_04_gps-clock.json
+20
mte/2025_06_04_gps-clock.json
···+"summary": "Jeff Geerling recently posted on Level 2 Jeff about a GPS clock from Mitxela. This reminded me of a project I did in the early days of the first COVID lockdown. I dug it and it still works. After powering on, it took around 60 seconds to find a signal and display the time - not bad for being in a box for 5 years.",+"content": "<p>Jeff Geerling recently posted on <a href=\"https://www.youtube.com/@Level2Jeff/videos\">Level 2 Jeff</a> about a <a href=\"https://www.youtube.com/watch?v=aBDgD032DEI\">GPS clock</a> from Mitxela. This reminded me of a project I did in the early days of the first COVID lockdown. I dug it and it still works. After powering on, it took around 60 seconds to find a signal and display the time - not bad for being in a box for 5 years.</p>\n\n<p>Here\u2019s a basic diagram showing the connections. I used an Arduino Nano and a UBlox NEO-M8N-0-10 GPS receiver. The UBlox is connected to the Nano\u2019s hardware serial port, the synchronisation pulse to pin D2, and the MAX7219 8 x 7-segment display to the Nano\u2019s SPI interface.</p>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/gps-clock-circuit.png\"></p>\n\n<p>The time pulse function can be configured using the <a href=\"https://www.tunbury.org/images/u-blox8-M8_ReceiverDescrProtSpec_UBX-13003221.pdf\">UBX-CFG-TP5</a> message. I configured a 100Hz pulse to be handled by the interrupt service routine to increment the time in centiseconds. Furthermore, I configured a <a href=\"https://www.tunbury.org/images/u-blox8-M8_ReceiverDescrProtSpec_UBX-13003221.pdf\">UBX-TIM-TP</a> time stamp message to be generated 10 times per second. After the time stamp message is sent on the serial port, the next pulse indicates that the time should be set.</p>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/ubx-tim-tp.png\"></p>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/gps-clock-top.jpg\"></p>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/gps-clock-bottom.jpg\"></p>",
+20
mte/2025_06_04_libvirt-moves.json
+20
mte/2025_06_04_libvirt-moves.json
···+"summary": "I need to migrate some libvirt/qemu machines from one host to another. These workloads can easily be stopped for a few minutes while the move happens.",+"content": "<p>I need to migrate some libvirt/qemu machines from one host to another. These workloads can easily be stopped for a few minutes while the move happens.</p>\n\n<p>1. Identify the name of the VMs which are going to be moved. If the machines have already been shutdown, then adding <code>--all</code> will list them.</p>\n\n<div><div><pre><code><span># virsh list</span>\n</code></pre></div></div>\n\n<p>2. Shutdown the machine either by connecting to it and issuing a <code>poweroff</code> command or, by using sending the shutdown request via <code>virsh</code>. You can verify that it is powered off with <code>virsh domstate vm_name</code>.</p>\n\n<div><div><pre><code><span># virsh shutdown vm_name</span>\n</code></pre></div></div>\n\n<p>3. Export the configuration of the machine.</p>\n\n<div><div><pre><code><span># virsh dumpxml vm_name > vm_name.xml</span>\n</code></pre></div></div>\n\n<p>4. List the block devices attached to the machine.</p>\n\n<div><div><pre><code><span># virsh domblklist vm_name</span>\n</code></pre></div></div>\n\n<p>Then for each block device check for any backing files using <code>qemu-img</code>. Backing files are caused by snapshots or building mulitple machines from a single master images.</p>\n\n<div><div><pre><code>qemu-img info image.qcow2\n</code></pre></div></div>\n\n<p>5. Transfer the files to be new machine. This could be done via <code>scp</code> but in my case I\u2019m going to use <code>nc</code>. On the target machine I\u2019ll run this (using literally port 5678).</p>\n\n<div><div><pre><code><span># nc -l 5678 | tar -xvf -</span>\n</code></pre></div></div>\n\n<p>And on the source machine, I\u2019ll send the files to the target machine at IP 1.2.3.4 (replace with the actual IP) and using port 5678 (literally).</p>\n\n<div><div><pre><code><span># tar -xf - *.qcow2 *.xml | nc 1.2.3.4 5678</span>\n</code></pre></div></div>\n\n<p>6. On the target machine, the VM now needs to be <em>defined</em>. This is done by importing the XML file exported from the original machine. To keep things simple, my disk images are in the same paths on the source and target machines. If not, edit the XML file before the import to reflect the new disk locations.</p>\n\n<div><div><pre><code><span># virsh define vm_name.xml</span>\n</code></pre></div></div>\n\n<p>7. Start the VM.</p>\n\n<div><div><pre><code><span># virsh start vm_name</span>\n</code></pre></div></div>\n\n<p>8. Delete the source VM. On the <em>source</em> machine, run this command.</p>\n\n<div><div><pre><code><span># virsh undefine vm_name --remove-all-storage</span>\n</code></pre></div></div>\n\n<p>9. Open a remote console</p>\n\n<p>If things have gone wrong, it may be necessary to look at the console of the machine. If you are remote from both host machines this can be achieve using an <code>ssh</code> tunnel.</p>\n\n<p>Determine the VNC port number being used by your VM.</p>\n\n<div><div><pre><code><span># virsh vncdisplay vm_name</span>\n127.0.0.1:8\n</code></pre></div></div>\n\n<p>In the above output, <code>:8</code> tells us that the VNC port number is <code>5908</code>. 
Create the SSH tunnel like this:</p>\n\n<div><div><pre><code><span># ssh -L 5908:127.0.0.1:5908 fqdn.remote.host</span>\n</code></pre></div></div>\n\n<p>Once the <code>ssh</code> connection is established, open your favourite VNC viewer on your machine and connect to <code>127.0.0.1:5908</code>.</p>",
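<p>If <code>nc</code> is not convenient for the transfer in step 5 (for example, when only SSH is allowed between the hosts), the same copy can be streamed over <code>ssh</code> in one step; a sketch assuming the default image directory on both hosts:</p>

<div><div><pre><code>cd /var/lib/libvirt/images
tar -cf - vm_name.qcow2 vm_name.xml | ssh fqdn.remote.host "cd /var/lib/libvirt/images && tar -xvf -"
</code></pre></div></div>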
+20
mte/2025_06_07_claude-animates-in-ocaml.json
+20
mte/2025_06_07_claude-animates-in-ocaml.json
···+"summary": "In the week, Jon mentioned UTM, which uses Apple\u2019s Hypervisor virtualisation framework to run ARM64 operating systems on Apple Silicon. It looked awesome, and the speed of virtualised macOS was fantastic. It also offers x86_64 emulation; we mused how well it would perform running Windows, but found it disappointing.",+"content": "<p>In the week, Jon mentioned <a href=\"https://mac.getutm.app\">UTM</a>, which uses Apple\u2019s Hypervisor virtualisation framework to run ARM64 operating systems on Apple Silicon. It looked awesome, and the speed of virtualised macOS was fantastic. It also offers x86_64 emulation; we mused how well it would perform running Windows, but found it disappointing.</p>\n\n<p>I was particularly interested in this because I am stuck in the past with macOS Monterey on my Intel Mac Pro \u2018trashcan\u2019, as I have a niche Windows application that I can\u2019t live without. A few years ago, I got a prototype running written in Swift. I never finished it as other events got in the way. The learning curve of <a href=\"https://youtu.be/8Jb3v2HRv_E\">SceneKit and Blender</a> was intense. I still had the Collada files on my machine and today, of course, we have Claude.</p>\n\n<p>\u201cHow would I animate a Collada (.dae) file using OCaml?\u201d. Claude acknowledged the complexity and proposed that <code>lablgl</code>, the OCaml bindings for OpenGL, would be a good starting point. Claude obliged and wrote the entire pipeline, giving me opam commands and Dune configuration files.</p>\n\n<p>The code wouldn\u2019t build, so I looked for the API for <code>labgl</code>. The library seemed old, with no recent activity. I mentioned this to Claude; he was happy to suggest an alternative approach of <code>tgls</code>, thin OpenGL bindings, with <code>tsdl</code>, SDL2 bindings, or the higher-level API from <code>raylib</code>. The idea of a high-level API sounded better, so I asked Claude to rewrite it with <code>raylib</code>.</p>\n\n<p>The code had some compilation issues. Claude had proposed <code>Mesh.gen_cube</code>, which didn\u2019t exist. Claude consulted the API documentation and found <code>gen_mesh_cube</code> instead. This went through several iterations, with <code>Model.load</code> becoming <code>load_model</code> and <code>Model.draw_ex</code> becoming <code>draw_model_ex</code>, etc. Twenty-two versions later, the code nearly compiles. This block continued to fail with two issues. The first being <code>Array.find</code> doesn\u2019t exist and the second being that the type inferred for <code>a</code> was wrong. There are two types and they both contain <code>target: string;</code>. I manually fixed this with <code>(a:animation_channel)</code> and used <code>match Array.find_opt ... with</code> instead of the <code>try ... 
with</code>.</p>\n\n<div><div><pre><code><span>(* Update animations *)</span>\n<span>let</span> <span>update_object_animations</span> <span>objects</span> <span>animations</span> <span>elapsed_time</span> <span>=</span>\n <span>Array</span><span>.</span><span>map</span> <span>(</span><span>fun</span> <span>obj</span> <span>-></span>\n <span>try</span>\n <span>let</span> <span>anim</span> <span>=</span> <span>Array</span><span>.</span><span>find</span> <span>(</span><span>fun</span> <span>a</span> <span>-></span> <span>a</span><span>.</span><span>target</span> <span>=</span> <span>obj</span><span>.</span><span>name</span><span>)</span> <span>animations</span> <span>in</span>\n <span>(* Loop animation *)</span>\n <span>let</span> <span>loop_time</span> <span>=</span> <span>mod_float</span> <span>elapsed_time</span> <span>anim</span><span>.</span><span>duration</span> <span>in</span>\n <span>let</span> <span>new_transform</span> <span>=</span> <span>interpolate_animation</span> <span>anim</span> <span>loop_time</span> <span>in</span>\n <span>{</span> <span>obj</span> <span>with</span> <span>current_transform</span> <span>=</span> <span>new_transform</span> <span>}</span>\n <span>with</span>\n <span>Not_found</span> <span>-></span> <span>obj</span>\n <span>)</span> <span>objects</span>\n</code></pre></div></div>\n\n<p>There were still many unused variables, but the code could be built using <code>dune build --release</code>.</p>\n\n<p>Unfortunately, it couldn\u2019t load my Collada file as the load functions were just stubs! Claude duly obliged and wrote a simple XML parser using regular expressions through the <code>Str</code> library, but interestingly suggested that I include <code>xmlm</code> as a dependency. Adding the parser broke the code, and it no longer compiled. The issue was similar to above; the compiler had inferred a type that wasn\u2019t what Claude expected. I fixed this as above. The code also had some issues with the ordering - functions were used before they were defined. Again, this was an easy fix.</p>\n\n<p>The parser still didn\u2019t work, so I suggested ditching the regular expression-based approach and using <code>xmlm</code> instead. This loaded the mesh; it looked bad, but I could see that it was my mesh. However, it still didn\u2019t animate, and I took a wrong turn here. I told Claude that the Collada file contained both the mesh and the animation, but that\u2019s not right. It has been a while since I created the Collada files, and I had forgotten that the animation and the mesh definitions were in different files.</p>\n\n<p>I asked Claude to improve the parser so that it would expect the animation data to be in the same file as the mesh. This is within the specification for Collada, but this was not the structure of my file.</p>\n\n<p>Is there a better approach than dealing with the complexity of writing a Collada XML parser? What formats are supported by <code>raylib</code>?</p>\n\n<p>In a new thread, I asked, \u201cUsing OCaml with Raylib, what format should I use for my 3D mode and animation data?\u201d. Claude suggested GLTF 2.0. As my animation is in Blender, it can be exported in GLTF format. Let\u2019s try it!</p>\n\n<p>Claude used the <code>raylib</code> library to read and display a GLTF file and run the animation. The code was much shorter, but \u2026 it didn\u2019t compile. I wrote to Claude, \u201cThe API for Raylib appears to be different to the one you have used. 
For example, <code>camera3d.create</code> doesn\u2019t take named parameters, <code>camera3d.prespective</code> should be <code>cameraprojection.perspective</code> etc.\u201d We set to work, and a dozen versions later, we built it successfully.</p>\n\n<p>It didn\u2019t work, though; the console produced an error over and over:</p>\n\n<div><div><pre><code>Joint attribute data format not supported, use vec4 u8\n</code></pre></div></div>\n\n<p>This looked like a problem with the model. I wondered if my GLTF file was compatible with <code>raylib</code>. I asked Claude if he knew of any validation tools, and he suggested an online viewer. This loaded my file perfectly and animated it in the browser. Claude also gave me some simple code to validate, which only loaded the model.</p>\n\n<div><div><pre><code><span>let</span> <span>main</span> <span>()</span> <span>=</span>\n <span>init_window</span> <span>800</span> <span>600</span> <span>\"Static Model Test\"</span><span>;</span>\n <span>let</span> <span>camera</span> <span>=</span> <span>Camera3D</span><span>.</span><span>create</span>\n <span>(</span><span>Vector3</span><span>.</span><span>create</span> <span>25</span><span>.</span><span>0</span> <span>25</span><span>.</span><span>0</span> <span>25</span><span>.</span><span>0</span><span>)</span>\n <span>(</span><span>Vector3</span><span>.</span><span>create</span> <span>0</span><span>.</span><span>0</span> <span>0</span><span>.</span><span>0</span> <span>0</span><span>.</span><span>0</span><span>)</span>\n <span>(</span><span>Vector3</span><span>.</span><span>create</span> <span>0</span><span>.</span><span>0</span> <span>1</span><span>.</span><span>0</span> <span>0</span><span>.</span><span>0</span><span>)</span>\n <span>45</span><span>.</span><span>0</span> <span>CameraProjection</span><span>.</span><span>Perspective</span> <span>in</span>\n\n <span>let</span> <span>model</span> <span>=</span> <span>load_model</span> <span>\"assets/character.gltf\"</span> <span>in</span>\n\n <span>while</span> <span>not</span> <span>(</span><span>window_should_close</span> <span>()</span><span>)</span> <span>do</span>\n <span>begin_drawing</span> <span>()</span><span>;</span>\n <span>clear_background</span> <span>Color</span><span>.</span><span>darkgray</span><span>;</span>\n <span>begin_mode_3d</span> <span>camera</span><span>;</span>\n <span>draw_model</span> <span>model</span> <span>(</span><span>Vector3</span><span>.</span><span>create</span> <span>0</span><span>.</span><span>0</span> <span>0</span><span>.</span><span>0</span> <span>0</span><span>.</span><span>0</span><span>)</span> <span>1</span><span>.</span><span>0</span> <span>Color</span><span>.</span><span>white</span><span>;</span>\n <span>draw_grid</span> <span>10</span> <span>1</span><span>.</span><span>0</span><span>;</span>\n <span>end_mode_3d</span> <span>()</span><span>;</span>\n <span>draw_text</span> <span>\"Static Model Test\"</span> <span>10</span> <span>10</span> <span>20</span> <span>Color</span><span>.</span><span>white</span><span>;</span>\n <span>end_drawing</span> <span>()</span>\n <span>done</span><span>;</span>\n\n <span>unload_model</span> <span>model</span><span>;</span>\n <span>close_window</span> <span>()</span>\n</code></pre></div></div>\n\n<p>Even this didn\u2019t work! As I said at the top, it\u2019s been a few years since I looked at this, and I still had Blender installed on my machine: version 2.83.4. The current version is 4.4, so I decided to upgrade. 
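</p>\n\n<p>Before moving on, for the record, here is roughly what the failing animation-update block above looked like after my manual fixes, i.e. <code>Array.find_opt</code> plus the <code>(a:animation_channel)</code> annotation. It is only a sketch: <code>interpolate_animation</code>, the record fields and the surrounding types are the names from Claude\u2019s generated code, not a polished API.</p>\n\n<div><div><pre><code>(* Update animations - fixed version: Array.find_opt with an explicit\n   annotation so the compiler infers the animation_channel type *)\nlet update_object_animations objects animations elapsed_time =\n  Array.map (fun obj ->\n    match Array.find_opt (fun (a : animation_channel) -> a.target = obj.name) animations with\n    | None -> obj  (* no animation channel targets this object *)\n    | Some anim ->\n      (* Loop animation *)\n      let loop_time = mod_float elapsed_time anim.duration in\n      let new_transform = interpolate_animation anim loop_time in\n      { obj with current_transform = new_transform }\n  ) objects\n</code></pre></div></div>\n\n<p>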
The GLTF export in 4.4 didn\u2019t work on my Mac and instead displayed a page of Python warnings about <code>numpy</code>. On the Blender Forum, this <a href=\"https://blenderartists.org/t/multiple-addons-giving-numpy-errors-blender-4-4-mac/1590436/2\">thread</a> showed me how to fix it. Armed with a new GLTF file, the static test worked. Returning to the animation code showed that it worked with the updated file; however, there are some significant visual distortions. These aren\u2019t present when viewed in Blender, which I think comes down to how the library interpolates between keyframes. I will look into this another day.</p>\n\n<p>I enjoyed the collaborative approach. I\u2019m annoyed with myself for not remembering the separate file with the animation data. However, I think the change of direction from Collada to GLTF was a good decision, and the speed at which Claude can explore ideas is very impressive.</p>",
+20
mte/2025_06_09_windows-sandbox.json
+20
mte/2025_06_09_windows-sandbox.json
···+"summary": "For a long time, we have struggled to match the performance and functionality of runc on Windows. Antonin wrote the Docker-based isolation for ocurrent/obuilder with PR#127, and I wrote machine-level isolation using QEMU PR#195. Sadly, the most obvious approach of using runhcs doesn\u2019t work, see issue#2156.",+"content": "<p>For a long time, we have struggled to match the performance and functionality of <code>runc</code> on Windows. Antonin wrote the Docker-based isolation for <a href=\"https://github.com/ocurrent/obuilder\">ocurrent/obuilder</a> with <a href=\"https://github.com/ocurrent/obuilder/pull/127\">PR#127</a>, and I wrote machine-level isolation using QEMU <a href=\"https://github.com/ocurrent/obuilder/pull/195\">PR#195</a>. Sadly, the most obvious approach of using <code>runhcs</code> doesn\u2019t work, see <a href=\"https://github.com/microsoft/hcsshim/issues/2156\">issue#2156</a>.</p>\n\n<p>On macOS, we use user isolation and ZFS mounts. We mount filesystems over <code>/Users/<user></code> and <code>/usr/local/Homebrew</code> (or <code>/opt/Homebrew</code> on Apple Silicon). Each command is executed with <code>su</code>, then the filesystems are unmounted, and snapshots are taken before repeating the cycle. This approach has limitations, primarily because we can only run one job at a time. Firstly, the Homebrew location is per machine, and secondly, switches are not relocatable, so mounting as <code>/Users/<another user></code> wouldn\u2019t work.</p>\n\n<p>In a similar vein, we could make user isolation work under Windows. On Windows, opam manages the Cygwin installation in <code>%LOCALAPPDATA%\\opam</code>, so it feels like the shared HomeBrew limitation of macOS doesn\u2019t exist, so can we create users with the same home directory? This isn\u2019t as crazy as it sounds because Windows has drive letters, and right back to the earliest Windows networks I can remember (NetWare 3!), it was common practice for all users to have their home directory available as <code>H:\\</code>. These days, it\u2019s unfortunate that many applications <em>see through</em> drive letters and convert them to the corresponding UNC paths. Excel is particularly annoying as it does this with linked sheets, preventing administrators from easily migrating to a new file server, thereby invalidating UNC paths.</p>\n\n<h1>Windows user isolation</h1>\n\n<p>Windows drive mappings are per user and can be created using the command <a href=\"https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/subst\">subst</a>. We might try to set the home directory and profile path when we create a user <code>net user foo bar /add /homedir:h:\\ /profilepath:h:\\</code>, but since <code>h:</code> does not exist in the user\u2019s context, the user is given a temporary profile, which is lost when they log out. If you specify just <code>/homedir</code>, the profile is retained in <code>c:\\users\\foo</code>.</p>\n\n<p>We could now try to map <code>h:</code> using <code>subst h: c:\\cache\\layer</code>, but <code>subst</code> drives don\u2019t naturally persist between sessions. Alternatively, we could use <code>net use h: \\\\DESKTOP-BBBSRML\\cache\\layer /persistent:yes</code>.</p>\n\n<p>Ultimately, the path where <code>%APPDATA%</code> is held must exist when the profile is loaded; it can\u2019t be created as a result of loading the profile. Note that for a new user, the path doesn\u2019t exist at all, but the parent directory where it will be created does exist. 
In Active Directory/domain environments, the profile and home paths are on network shares, one directory per user. These exist before the user signs in; all users can have <code>h:</code> mapped to their personal space.</p>\n\n<p>Ultimately, it doesn\u2019t matter whether we can redirect <code>%LOCALAPPDATA%</code> or not, as we can control the location opam uses by setting the environment variable <code>OPAMROOT</code>.</p>\n\n<h1>opam knows</h1>\n\n<p>Unfortunately, there\u2019s no fooling opam. It sees through both <code>subst</code> and network drives and embeds the path into files like <code>opam\\config</code>.</p>\n\n<h2>subst</h2>\n\n<div><div><pre><code>subst h: c:<span>\\h</span>ome<span>\\f</span>oo\n<span>set </span><span>OPAMROOT</span><span>=</span>h:<span>\\o</span>pam\nopam init <span>-y</span>\n...\n\n In normal operation, opam only alters files within your opam root\n <span>(</span>~<span>\\A</span>ppData<span>\\L</span>ocal<span>\\o</span>pam by default<span>;</span> currently C:<span>\\h</span>ome<span>\\f</span>oo<span>\\o</span>pam<span>)</span><span>.</span>\n\n...\n</code></pre></div></div>\n\n<h2>net use</h2>\n\n<div><div><pre><code>net share <span>home</span><span>=</span>c:<span>\\h</span>ome\nnet use h: <span>\\\\</span>DESKTOP-BBBSRML<span>\\h</span>ome<span>\\f</span>oo /persistent:yes\nSET <span>OPAMROOT</span><span>=</span>h:<span>\\o</span>pam\nopam init <span>-y</span>\n...\n\n In normal operation, opam only alters files within your opam root\n <span>(</span>~<span>\\A</span>ppData<span>\\L</span>ocal<span>\\o</span>pam by default<span>;</span> currently UNC<span>\\D</span>ESKTOP-BBBSRML<span>\\h</span>ome<span>\\f</span>oo<span>\\o</span>pam<span>)</span><span>.</span>\n\n...\n</code></pre></div></div>\n\n<p>Unless David has some inspiration, I don\u2019t know where to go with this.</p>\n\n<p>Here\u2019s an example from the Windows API.</p>\n\n<div><div><pre><code><span>// If you have: subst X: C:\\SomeFolder</span>\n<span>QueryDosDevice</span><span>(</span><span>L\"X:\"</span><span>,</span> <span>buffer</span><span>,</span> <span>size</span><span>);</span> <span>// Returns: \"C:\\SomeFolder\"</span>\n<span>GetCurrentDirectory</span><span>();</span> <span>// Returns: \"X:\\\" (if current)</span>\n</code></pre></div></div>\n\n<h1>Windows Sandbox</h1>\n\n<p>Windows has a new(?) feature called <em>Windows Sandbox</em> that I hadn\u2019t seen before. It allows commands to be executed in a lightweight VM based on an XML definition. For example, a simple <code>test.wsb</code> would contain.</p>\n\n<div><div><pre><code><span><Configuration></span>\n <span><MappedFolders></span>\n <span><MappedFolder></span>\n <span><HostFolder></span>C:\\home\\foo\\opam<span></HostFolder></span>\n <span><SandboxFolder></span>C:\\Users\\WDAGUtilityAccount\\AppData\\Local\\opam<span></SandboxFolder></span>\n <span><ReadOnly></span>false<span></ReadOnly></span>\n <span></MappedFolder></span>\n <span></MappedFolders></span>\n<span></Configuration></span>\n</code></pre></div></div>\n\n<p>The sandbox started quickly and worked well until I tried to run a second instance. The command returns an error stating that only one is allowed. Even doing <code>runas /user:bar \"WindowsSandbox.exe test.wsb\"</code> fails with the same error.</p>\n\n<h1>Full circle</h1>\n\n<p>I think this brings us back to Docker. I wrote the QEMU implementation because of Docker\u2019s poor performance on Windows, coupled with the unreliability of OBuilder on Windows. 
However, I wonder if today\u2019s use case means that it warrants a second look.</p>\n\n<div><div><pre><code><span># Install Docker Engine</span><span>\n</span><span>Invoke-WebRequest</span><span> </span><span>-UseBasicParsing</span><span> </span><span>\"https://download.docker.com/win/static/stable/x86_64/docker-28.2.2.zip\"</span><span> </span><span>-OutFile</span><span> </span><span>docker.zip</span><span>\n</span><span>Expand-Archive</span><span> </span><span>docker.zip</span><span> </span><span>-DestinationPath</span><span> </span><span>\"C:\\Program Files\"</span><span>\n </span><span>Environment</span><span>]::</span><span>SetEnvironmentVariable</span><span>(</span><span>\"Path\"</span><span>,</span><span> </span><span>$</span><span>env</span><span>:</span><span>Path</span><span> </span><span>+</span><span> </span><span>\";C:\\Program Files\\docker\"</span><span>,</span><span> </span><span>\"Machine\"</span><span>)</span><span>\n\n</span><span># Start Docker service</span><span>\n</span><span>dockerd</span><span> </span><span>--register-service</span><span>\n</span><span>Start-Service</span><span> </span><span>docker</span><span>\n</span></code></pre></div></div>\n\n<p>Create a simple <code>Dockerfile</code> and build the image using <code>docker build . -t opam</code>.</p>\n\n<div><div><pre><code><span>FROM</span><span> mcr.microsoft.com/windows/servercore:ltsc2022</span>\n\n<span># Download opam</span>\n<span>ADD</span><span> https://github.com/ocaml/opam/releases/download/2.3.0/opam-2.3.0-x86_64-windows.exe C:\\\\windows\\\\opam.exe</span>\n\n<span>RUN </span>net user opam /add /passwordreq:no\n\n<span>USER</span><span> opam</span>\n\n<span># Run something as the opam user to create c:\\\\users\\\\opam</span>\n<span>RUN </span>opam <span>--version</span>\n\n<span>WORKDIR</span><span> c:\\\\users\\\\opam</span>\n\n<span>CMD</span><span> [\"cmd\"]</span>\n</code></pre></div></div>\n\n<p>Test with <code>opam init</code>.</p>\n\n<div><div><pre><code>docker run <span>--isolation</span><span>=</span>process <span>--rm</span> <span>-it</span> <span>-v</span> C:<span>\\c</span>ache<span>\\t</span>emp<span>\\:</span>c:<span>\\U</span>sers<span>\\o</span>pam<span>\\A</span>ppData<span>\\L</span>ocal<span>\\o</span>pam opam:latest opam init <span>-y</span>\n</code></pre></div></div>",
+20
mte/2025_06_10_oxcaml-base-images.json
+20
mte/2025_06_10_oxcaml-base-images.json
···+"summary": "As @dra27 suggested, I first added support in ocurrent/ocaml-version. I went with the name flambda2, which matched the name in the opam package.",+"content": "<p>As @dra27 suggested, I first added support in <a href=\"https://github.com/ocurrent/ocaml-version.git\">ocurrent/ocaml-version</a>. I went with the name <code>flambda2</code>, which matched the name in the <code>opam</code> package.</p>\n\n<p>Wherever I found the type <code>Flambda</code>, I added <code>Flambda2</code>. I added a list of OxCaml versions in the style of the unreleased betas and a function <code>is_oxcaml</code> to test if the variant is of type <code>Flambda2</code>, closely following the <code>is_multicore</code> design! The final change was to <code>additional_packages</code> concatenated <code>ocaml-options-only-</code> to <code>flambda2</code> - again, this change was also needed for multicore.</p>\n\n<p>It was a relatively minor change to the base-image-builder, adding <code>Ocaml_version.Releases.oxcaml</code> to the available switches on AMD64 and ARM64. Following the precedent set by <code>maybe_add_beta</code> and <code>maybe_add_multicore</code>, I added <code>maybe_add_jst</code>, which added the Jane Street opam repository for these builds.</p>\n\n<p>The builds mostly failed because they depended on <code>autoconf,</code> which isn\u2019t included by default on most distributions. Looking in the <code>dockerfile</code>, there is a function called <code>ocaml_depexts</code>, which includes <code>zstd</code> for OCaml > 5.1.0. I extended this function to include <code>autoconf</code> when building OxCaml.</p>\n\n<p>The Arch Linux builds failed due to missing <code>which</code>, so I added this as I did for <code>autoconf</code></p>\n\n<p>The following are working:</p>\n\n<ul>\n <li>Ubuntu 24.10, 24.04, 22.04</li>\n <li>OpenSUSE Tumbleweed</li>\n <li>Fedora 42, 41</li>\n <li>Debian Unstable, Testing, 12</li>\n <li>Arch</li>\n</ul>\n\n<p>Failures</p>\n\n<ul>\n <li>Alpine 3.21\n <ul>\n <li>missing <code>linux/auxvec.h</code> header</li>\n </ul>\n </li>\n <li>OpenSUSE 15.6\n <ul>\n <li>autoconf is too old in the distribution</li>\n </ul>\n </li>\n <li>Debian 11\n <ul>\n <li>autoconf is too old in the distribution</li>\n </ul>\n </li>\n <li>Oracle Linux 9, 8\n <ul>\n <li>autoconf is too old in the distribution</li>\n </ul>\n </li>\n</ul>\n\n<p>There is some discussion about whether building these with the <a href=\"https://images.ci.ocaml.org\">base image builder</a> is the best approach, so I won\u2019t create PRs at this time. My branches are:</p>\n<ul>\n <li><a href=\"https://github.com/mtelvers/ocaml-version.git\">https://github.com/mtelvers/ocaml-version.git</a></li>\n <li><a href=\"https://github.com/mtelvers/ocaml-dockerfile.git#oxcaml\">https://github.com/mtelvers/ocaml-dockerfile.git#oxcaml</a></li>\n <li><a href=\"https://github.com/mtelvers/docker-base-images#oxcaml\">https://github.com/mtelvers/docker-base-images#oxcaml</a></li>\n</ul>",
+20
mte/2025_06_11_windows-containerd.json
+20
mte/2025_06_11_windows-containerd.json
···+"summary": "The tricky part of using runhcs has been getting the layers correct. While I haven\u2019t had any luck, I have managed to created Windows containers using ctr and containerd.",+"content": "<p>The tricky part of using <a href=\"https://github.com/microsoft/hcsshim/issues/2156\">runhcs</a> has been getting the layers correct. While I haven\u2019t had any luck, I have managed to created Windows containers using <code>ctr</code> and <code>containerd</code>.</p>\n\n<p>Installing <code>containerd</code> is a manual process on Windows. These steps give general guidance on what is needed: enable the <code>containers</code> feature in Windows, download the tar file from GitHub, extract it, add it to the path, generate a default configuration file, register the service, and start it.</p>\n\n<div><div><pre><code><span>Enable-WindowsOptionalFeature</span><span> </span><span>-Online</span><span> </span><span>-FeatureName</span><span> </span><span>containers</span><span> </span><span>-All</span><span>\n</span><span>mkdir</span><span> </span><span>\"c:\\Program Files\\containerd\"</span><span>\n</span><span>curl.exe</span><span> </span><span>-L</span><span> </span><span>https://github.com/containerd/containerd/releases/download/v2.2.1/containerd-2.2.1-windows-amd64.tar.gz</span><span> </span><span>-o</span><span> </span><span>containerd-windows-amd64.tar.gz</span><span>\n</span><span>tar.exe</span><span> </span><span>xvf</span><span> </span><span>.</span><span>\\containerd-windows-amd64.tar.gz</span><span> </span><span>-C</span><span> </span><span>\"c:\\Program Files\\containerd\"</span><span>\n</span><span>$Path</span><span> </span><span>=</span><span> </span><span>[</span><span>Environment</span><span>]::</span><span>GetEnvironmentVariable</span><span>(</span><span>\"PATH\"</span><span>,</span><span> </span><span>\"Machine\"</span><span>)</span><span> </span><span>+</span><span> </span><span>[</span><span>IO.Path</span><span>]::</span><span>PathSeparator</span><span> </span><span>+</span><span> </span><span>\"</span><span>$</span><span>Env</span><span>:</span><span>ProgramFiles</span><span>\\containerd\\bin\"</span><span>\n </span><span>Environment</span><span>]::</span><span>SetEnvironmentVariable</span><span>(</span><span> </span><span>\"Path\"</span><span>,</span><span> </span><span>$Path</span><span>,</span><span> </span><span>\"Machine\"</span><span>)</span><span>\n</span><span>containerd.exe</span><span> </span><span>config</span><span> </span><span>default</span><span> </span><span>|</span><span> </span><span>Out-File</span><span> </span><span>\"c:\\Program Files\\containerd\\config.toml\"</span><span> </span><span>-Encoding</span><span> </span><span>ascii</span><span>\n</span><span>containerd</span><span> </span><span>--register-service</span><span>\n</span><span>net</span><span> </span><span>start</span><span> </span><span>containerd</span><span>\n</span></code></pre></div></div>\n\n<p>With that out of the way, pull <code>nanoserver:ltsc2022</code> from Microsoft\u2019s container registry.</p>\n\n<pre><code>c:\\> ctr image pull mcr.microsoft.com/windows/nanoserver:ltsc2022\n</code></pre>\n\n<p>List which snapshots are available: <code>nanoserver</code> has one, but <code>servercore</code> has two.</p>\n\n<pre><code>c:\\> ctr snapshot ls\nKEY PARENT KIND\nsha256:44b913d145adda5364b5465664644b11282ed3c4b9bd9739aa17832ee4b2b355 Committed\n</code></pre>\n\n<p>Take a snapshot of <code>nanoserver</code>, which creates a writeable scratch layer. <code>--mounts</code> is key here. 
Without it, you won\u2019t know where the layers are. They are held below <code>C:\\ProgramData\\containerd\\root\\io.containerd.snapshotter.v1.windows\\snapshots</code> in numbered folders. The mapping between numbers and keys is stored in <code>metadata.db</code> in BoltDB format. With the <code>--mounts</code> command line option, we see the <code>source</code> path and list of paths in <code>parentLayerPaths</code>.</p>\n\n<pre><code>c:\\> ctr snapshots prepare --mounts my-test sha256:44b913d145adda5364b5465664644b11282ed3c4b9bd9739aa17832ee4b2b355\n[\n {\n \"Type\": \"windows-layer\",\n \"Source\": \"C:\\\\ProgramData\\\\containerd\\\\root\\\\io.containerd.snapshotter.v1.windows\\\\snapshots\\\\21\",\n \"Target\": \"\",\n \"Options\": [\n \"rw\",\n \"parentLayerPaths=[\\\"C:\\\\\\\\ProgramData\\\\\\\\containerd\\\\\\\\root\\\\\\\\io.containerd.snapshotter.v1.windows\\\\\\\\snapshots\\\\\\\\20\\\"]\"\n ]\n }\n]\n</code></pre>\n\n<p>As you can see from <code>ctr snapshot ls</code> and <code>ctr snapshot info</code>, the layer paths aren\u2019t readily available. This <a href=\"https://github.com/containerd/containerd/discussions/10053\">discussion</a> is a sample of the creative approaches to getting the paths!</p>\n\n<pre><code>c:\\> ctr snapshot ls\nKEY PARENT KIND\nmy-test sha256:44b913d145adda5364b5465664644b11282ed3c4b9bd9739aa17832ee4b2b355 Active\nsha256:44b913d145adda5364b5465664644b11282ed3c4b9bd9739aa17832ee4b2b355 Committed\nc:\\> ctr snapshot info my-test\n{\n \"Kind\": \"Active\",\n \"Name\": \"my-test\",\n \"Parent\": \"sha256:44b913d145adda5364b5465664644b11282ed3c4b9bd9739aa17832ee4b2b355\",\n \"Labels\": {\n \"containerd.io/gc.root\": \"2025-06-11T12:28:43Z\"\n },\n \"Created\": \"2025-06-11T16:33:43.144011Z\",\n \"Updated\": \"2025-06-11T16:33:43.144011Z\"\n}\n</code></pre>\n\n<p>Here\u2019s the directory listing for reference.</p>\n\n<pre><code>c:\\> dir C:\\ProgramData\\containerd\\root\\io.containerd.snapshotter.v1.windows\\snapshots\n\n Volume in drive C has no label.\n Volume Serial Number is F0E9-1E81\n\n Directory of C:\\ProgramData\\containerd\\root\\io.containerd.snapshotter.v1.windows\\snapshots\n\n11/06/2025 16:33 <DIR> .\n11/06/2025 08:19 <DIR> ..\n11/06/2025 08:31 <DIR> 2\n11/06/2025 16:32 <DIR> 20\n11/06/2025 16:33 <DIR> 21\n11/06/2025 08:20 <DIR> rm-1\n11/06/2025 08:20 <DIR> rm-2\n11/06/2025 08:22 <DIR> rm-3\n</code></pre>\n\n<p>Now we need to prepare a <code>config.json</code> file. The <code>layerFolders</code> structure can be populated with the information from above. The order is important; preserve the order from <code>parentLayerPaths</code>, then append the scratch layer. It looks obvious when there are just two layers, but for <code>servercore:ltsc2022</code> where there are two parent layers, the order looks curious as the parent layers are given in reverse order and the scratch layer is last, e.g. 
<code>24, 23, 25</code> where 23 and 24 are the parents and 25 is the snapshot.</p>\n\n<div><div><pre><code><span>{</span><span>\n </span><span>\"ociVersion\"</span><span>:</span><span> </span><span>\"1.1.0\"</span><span>,</span><span>\n </span><span>\"process\"</span><span>:</span><span> </span><span>{</span><span>\n </span><span>\"user\"</span><span>:</span><span> </span><span>{</span><span>\n </span><span>\"uid\"</span><span>:</span><span> </span><span>0</span><span>,</span><span>\n </span><span>\"gid\"</span><span>:</span><span> </span><span>0</span><span>,</span><span>\n </span><span>\"username\"</span><span>:</span><span> </span><span>\"ContainerUser\"</span><span>\n </span><span>},</span><span>\n </span><span>\"args\"</span><span>:</span><span> </span><span>[</span><span>\n </span><span>\"cmd\"</span><span>,</span><span>\n </span><span>\"/c\"</span><span>,</span><span>\n </span><span>\"echo test\"</span><span>\n </span><span>],</span><span>\n </span><span>\"cwd\"</span><span>:</span><span> </span><span>\"\"</span><span>\n </span><span>},</span><span>\n </span><span>\"root\"</span><span>:</span><span> </span><span>{</span><span>\n </span><span>\"path\"</span><span>:</span><span> </span><span>\"\"</span><span>\n </span><span>},</span><span>\n </span><span>\"windows\"</span><span>:</span><span> </span><span>{</span><span>\n </span><span>\"layerFolders\"</span><span>:</span><span> </span><span>[</span><span>\n </span><span>\"C:</span><span>\\\\</span><span>ProgramData</span><span>\\\\</span><span>containerd</span><span>\\\\</span><span>root</span><span>\\\\</span><span>io.containerd.snapshotter.v1.windows</span><span>\\\\</span><span>snapshots</span><span>\\\\</span><span>20\"</span><span>,</span><span>\n </span><span>\"C:</span><span>\\\\</span><span>ProgramData</span><span>\\\\</span><span>containerd</span><span>\\\\</span><span>root</span><span>\\\\</span><span>io.containerd.snapshotter.v1.windows</span><span>\\\\</span><span>snapshots</span><span>\\\\</span><span>21\"</span><span>\n </span><span>],</span><span>\n </span><span>\"ignoreFlushesDuringBoot\"</span><span>:</span><span> </span><span>true</span><span>,</span><span>\n </span><span>\"network\"</span><span>:</span><span> </span><span>{</span><span>\n </span><span>\"allowUnqualifiedDNSQuery\"</span><span>:</span><span> </span><span>true</span><span>\n </span><span>}</span><span>\n </span><span>}</span><span>\n</span><span>}</span><span>\n</span></code></pre></div></div>\n\n<p>We can now run the container.</p>\n\n<pre><code>c:\\> ctr run --rm --config .\\config.json my-container\n</code></pre>",
+20
mte/2025_06_12_oxcaml-repository.json
+20
mte/2025_06_12_oxcaml-repository.json
···+"summary": "This morning, Anil proposed that having an opam-repository that didn\u2019t have old versions of the packages that require patches to work with OxCaml would be good.",+"content": "<p>This morning, Anil proposed that having an opam-repository that didn\u2019t have old versions of the packages that require patches to work with OxCaml would be good.</p>\n\n<p>This is a fast-moving area, so this post is likely to be outdated very quickly, but at the time of writing, the development repository is <a href=\"https://github.com/janestreet/opam-repository/tree/with-extensions\">https://github.com/janestreet/opam-repository#with-extensions</a>. This is a fork of <a href=\"https://github.com/ocaml/opam-repository\">opam-repository</a> but with some patched packages designated with <code>+ox</code>.</p>\n\n<p>I have a short shell script which clones both <a href=\"https://github.com/ocaml/opam-repository\">opam-repository</a> and <a href=\"https://github.com/janestreet/opam-repository/tree/with-extensions\">https://github.com/janestreet/opam-repository#with-extensions</a> and searches for all packages with <code>+ox</code>. All versions of these packages are removed from opam-repository and replaced with the single <code>+ox</code> version. The resulting repository is pushed to <a href=\"https://github.com/mtelvers/opam-repository-ox\">https://github.com/mtelvers/opam-repository-ox</a>.</p>\n\n<p>To test the repository (and show that <code>eio</code> doesn\u2019t build), I have created a <code>Dockerfile</code> based largely on the base-image-builder format. This <code>Dockerfile</code> uses this modified opam-repository to build an OxCaml switch.</p>\n\n<p>My build script and test Dockerfile are in [https://github.com/mtelvers/opam-repo-merge] (https://github.com/mtelvers/opam-repo-merge). Thanks to David for being the sounding board during the day!</p>",
+20
mte/2025_06_14_borg-backup.json
+20
mte/2025_06_14_borg-backup.json
···+"summary": "Our PeerTube installation at watch.ocaml.org holds hundreds of videos we wouldn\u2019t want to lose! It\u2019s a VM hosted at Scaleway so the chances of a loss are pretty small, but having a second copy would give us extra reassurance. I\u2019m going to use Borg Backup.",+"content": "<p>Our PeerTube installation at <a href=\"https://watch.ocaml.org/\">watch.ocaml.org</a> holds hundreds of videos we wouldn\u2019t want to lose! It\u2019s a VM hosted at Scaleway so the chances of a loss are pretty small, but having a second copy would give us extra reassurance. I\u2019m going to use <a href=\"https://www.borgbackup.org\">Borg Backup</a>.</p>\n\n<p>Here\u2019s the list of features (taken directly from their website):</p>\n\n<ul>\n <li>Space-efficient storage of backups.</li>\n <li>Secure, authenticated encryption.</li>\n <li>Compression: lz4, zstd, zlib, lzma or none.</li>\n <li>Mountable backups with FUSE.</li>\n <li>Easy installation on multiple platforms: Linux, macOS, BSD, \u2026</li>\n <li>Free software (BSD license).</li>\n <li>Backed by a large and active open source community.</li>\n</ul>\n\n<p>We have several OBuilder workers with one or more unused hard disks, which would make ideal backup targets.</p>\n\n<p>In this case, I will format and mount <code>sdc</code> as <code>/home</code> on one of the workers.</p>\n\n<div><div><pre><code>parted /dev/sdc mklabel gpt\nparted /dev/sdc mkpart primary ext4 0% 100%\nmkfs.ext4 /dev/sdc1\n</code></pre></div></div>\n\n<p>Add this to /etc/fstab and run <code>mount -a</code>.</p>\n\n<div><div><pre><code>/dev/sdc1 /home ext4 defaults 0 2\n</code></pre></div></div>\n\n<p>Create a user <code>borg</code>.</p>\n\n<div><div><pre><code>adduser <span>--disabled-password</span> <span>--gecos</span> <span>'@borg'</span> <span>--home</span> /home/borg borg\n</code></pre></div></div>\n\n<p>On both machines, install the application <code>borg</code>.</p>\n\n<div><div><pre><code>apt <span>install </span>borgbackup\n</code></pre></div></div>\n\n<p>On the machine we want to backup, generate an SSH key and copy it to the <code>authorized_keys</code> file for user <code>borg</code> on the target server. Ensure that <code>chmod</code> and <code>chown</code> are correct.</p>\n\n<div><div><pre><code>ssh-keygen <span>-t</span> ed25519 <span>-f</span> ~/.ssh/borg_backup_key\n</code></pre></div></div>\n\n<p>Add lines to the <code>.ssh/config</code> for ease of connection. We can now <code>ssh backup-server</code> without any prompts.</p>\n\n<div><div><pre><code>Host backup-server\n HostName your.backup.server.com\n User borg\n IdentityFile ~/.ssh/borg_backup_key\n ServerAliveInterval 60\n ServerAliveCountMax 3\n</code></pre></div></div>\n\n<p>Borg supports encrypting the backup at rest on the target machine. 
The data is publicly available in this case, so encryption seems unnecessary.</p>\n\n<p>On the machine to be backed up, run.</p>\n\n<div><div><pre><code>borg init <span>--encryption</span><span>=</span>none backup-server:repo\n</code></pre></div></div>\n\n<p>We can now perform a backup or two and see how the deduplication works.</p>\n\n<div><div><pre><code><span># borg create backup-server:repo::test /var/lib/docker/volumes/postgres --compression lz4 --stats --progress</span>\n<span>------------------------------------------------------------------------------</span>\nRepository: ssh://backup-server/./repo\nArchive name: <span>test\n</span>Archive fingerprint: 627242cb5b65efa23672db317b4cdc8617a78de4d8e195cdd1e1358ed02dd937\nTime <span>(</span>start<span>)</span>: Sat, 2025-06-14 13:32:27\nTime <span>(</span>end<span>)</span>: Sat, 2025-06-14 13:32:38\nDuration: 11.03 seconds\nNumber of files: 3497\nUtilization of max. archive size: 0%\n<span>------------------------------------------------------------------------------</span>\n Original size Compressed size Deduplicated size\nThis archive: 334.14 MB 136.28 MB 132.79 MB\nAll archives: 334.14 MB 136.28 MB 132.92 MB\n\n Unique chunks Total chunks\nChunk index: 942 1568\n<span>------------------------------------------------------------------------------</span>\n<span># borg create backup-server:repo::test2 /var/lib/docker/volumes/postgres --compression lz4 --stats --progress</span>\n<span>------------------------------------------------------------------------------</span>\nRepository: ssh://backup-server/./repo\nArchive name: test2\nArchive fingerprint: 572bf2225b3ab19afd32d44f058a49dc2b02cb70c8833fa0b2a1fb5b95526bff\nTime <span>(</span>start<span>)</span>: Sat, 2025-06-14 13:33:05\nTime <span>(</span>end<span>)</span>: Sat, 2025-06-14 13:33:06\nDuration: 1.43 seconds\nNumber of files: 3497\nUtilization of max. archive size: 0%\n<span>------------------------------------------------------------------------------</span>\n Original size Compressed size Deduplicated size\nThis archive: 334.14 MB 136.28 MB 9.58 MB\nAll archives: 668.28 MB 272.55 MB 142.61 MB\n\n Unique chunks Total chunks\nChunk index: 971 3136\n<span>------------------------------------------------------------------------------</span>\n<span># borg list backup-server:repo</span>\n<span>test </span>Sat, 2025-06-14 13:32:27 <span>[</span>627242cb5b65efa23672db317b4cdc8617a78de4d8e195cdd1e1358ed02dd937]\ntest2 Sat, 2025-06-14 13:33:05 <span>[</span>572bf2225b3ab19afd32d44f058a49dc2b02cb70c8833fa0b2a1fb5b95526bff]\n</code></pre></div></div>\n\n<p>Let\u2019s run this every day via by placing a script <code>borgbackup</code> in <code>/etc/cron.daily</code>. 
The paths given are just examples\u2026</p>\n\n<div><div><pre><code><span>#!/bin/bash</span>\n\n<span># Configuration</span>\n<span>REPOSITORY</span><span>=</span><span>\"backup-server:repo\"</span>\n\n<span># What to backup</span>\n<span>BACKUP_PATHS</span><span>=</span><span>\"\n/home\n\"</span>\n\n<span># What to exclude</span>\n<span>EXCLUDE_ARGS</span><span>=</span><span>\"\n--exclude '*.tmp'\n--exclude '*.log'\n\"</span>\n\n<span># Logging function</span>\nlog<span>()</span> <span>{</span>\n logger <span>-t</span> <span>\"borg-backup\"</span> <span>\"</span><span>$1</span><span>\"</span>\n <span>echo</span> <span>\"</span><span>$(</span><span>date</span> <span>'+%Y-%m-%d %H:%M:%S'</span><span>)</span><span> - </span><span>$1</span><span>\"</span>\n<span>}</span>\n\nlog <span>\"========================================\"</span>\nlog <span>\"Starting Borg backup\"</span>\n\n<span># Check if borg is installed</span>\n<span>if</span> <span>!</span> <span>command</span> <span>-v</span> borg &> /dev/null<span>;</span> <span>then\n </span>log <span>\"ERROR: borg command not found\"</span>\n <span>exit </span>1\n<span>fi</span>\n\n<span># Test repository access</span>\n<span>if</span> <span>!</span> borg info <span>\"</span><span>$REPOSITORY</span><span>\"</span> &> /dev/null<span>;</span> <span>then\n </span>log <span>\"ERROR: Cannot access repository </span><span>$REPOSITORY</span><span>\"</span>\n log <span>\"Make sure repository exists and SSH key is set up\"</span>\n <span>exit </span>1\n<span>fi</span>\n\n<span># Create backup</span>\nlog <span>\"Creating backup archive...\"</span>\n<span>if </span>borg create <span>\\</span>\n <span>\"</span><span>$REPOSITORY</span><span>::backup-{now}\"</span> <span>\\</span>\n <span>$BACKUP_PATHS</span> <span>\\</span>\n <span>$EXCLUDE_ARGS</span> <span>\\</span>\n <span>--compression</span> lz4 <span>\\</span>\n <span>--stats</span> 2>&1 | logger <span>-t</span> <span>\"borg-backup\"</span><span>;</span> <span>then\n </span>log <span>\"Backup created successfully\"</span>\n<span>else\n </span>log <span>\"ERROR: Backup creation failed\"</span>\n <span>exit </span>1\n<span>fi</span>\n\n<span># Prune old backups</span>\nlog <span>\"Pruning old backups...\"</span>\n<span>if </span>borg prune <span>\"</span><span>$REPOSITORY</span><span>\"</span> <span>\\</span>\n <span>--keep-daily</span><span>=</span>7 <span>\\</span>\n <span>--keep-weekly</span><span>=</span>4 <span>\\</span>\n <span>--keep-monthly</span><span>=</span>6 <span>\\</span>\n <span>--stats</span> 2>&1 | logger <span>-t</span> <span>\"borg-backup\"</span><span>;</span> <span>then\n </span>log <span>\"Pruning completed successfully\"</span>\n<span>else\n </span>log <span>\"WARNING: Pruning failed, but backup was successful\"</span>\n<span>fi</span>\n\n<span># Monthly repository check (on the 1st of each month)</span>\n<span>if</span> <span>[</span> <span>\"</span><span>$(</span><span>date</span> +%d<span>)</span><span>\"</span> <span>=</span> <span>\"01\"</span> <span>]</span><span>;</span> <span>then\n </span>log <span>\"Running monthly repository check...\"</span>\n <span>if </span>borg check <span>\"</span><span>$REPOSITORY</span><span>\"</span> 2>&1 | logger <span>-t</span> <span>\"borg-backup\"</span><span>;</span> <span>then\n </span>log <span>\"Repository check passed\"</span>\n <span>else\n </span>log <span>\"WARNING: Repository check failed\"</span>\n <span>fi\nfi\n\n</span>log <span>\"Backup completed successfully\"</span>\nlog 
<span>\"========================================\"</span>\n</code></pre></div></div>\n\n<p>Check the logs\u2026</p>\n\n<div><div><pre><code>journalctl <span>-t</span> borg-backup\n</code></pre></div></div>",
+20
mte/2025_06_14_windows-containerd-2.json
+20
mte/2025_06_14_windows-containerd-2.json
···+"summary": "If you were following along with my previous post on containerd on Windows, you may recall that I lamented the lack of an installer. Since then, I have found a PowerShell script on Microsoft\u2019s GitHub, which does a lot of the grunt work for us.",+"content": "<p>If you were following along with my previous post on <a href=\"https://www.tunbury.org/windows-containerd/\">containerd on Windows</a>, you may recall that I lamented the lack of an installer. Since then, I have found a PowerShell <a href=\"https://github.com/microsoft/Windows-Containers/blob/Main/helpful_tools/Install-ContainerdRuntime/install-containerd-runtime.ps1\">script</a> on Microsoft\u2019s GitHub, which does a lot of the grunt work for us.</p>\n\n<p>Trying anything beyond my <code>echo Hello</code> test showed an immediate problem: there is no network. <code>ipconfig</code> didn\u2019t display any network interfaces.</p>\n\n<pre><code>C:\\>ctr run --rm mcr.microsoft.com/windows/nanoserver:ltsc2022 my-container ipconfig\n\nWindows IP Configuration\n</code></pre>\n\n<p>Checking the command line options, there is one called <code>--net-host</code>, which sounded promising, only for that to be immediately dashed:</p>\n\n<pre><code>C:\\>ctr run --rm --net-host mcr.microsoft.com/windows/nanoserver:ltsc2022 my-container ipconfig\nctr: Cannot use host mode networking with Windows containers\n</code></pre>\n\n<p>The solution is <code>--cni</code>, but more work is required to get that working. We need to download the plugins and populate them in the <code>cni/bin</code> subdirectory. Fortunately, the installation script does all of this for us but leaves it unconfigured.</p>\n\n<pre><code>C:\\Windows\\System32>ctr run --rm --cni mcr.microsoft.com/windows/nanoserver:ltsc2022 my-container ipconfig\nctr: no network config found in C:\\Program Files\\containerd\\cni\\conf: cni plugin not initialized\n</code></pre>\n\n<p>From the top, this is how you get from a fresh install of Windows 11, to a container with networking. 
Firstly, use installation script to install <code>containerd</code>.</p>\n\n<pre><code>curl.exe https://raw.githubusercontent.com/microsoft/Windows-Containers/refs/heads/Main/helpful_tools/Install-ContainerdRuntime/install-containerd-runtime.ps1 -o install-containerd-runtime.ps1\nSet-ExecutionPolicy Bypass\n.\\install-containerd-runtime.ps1 -ContainerDVersion 2.1.1 -WinCNIVersion 0.3.1 -ExternalNetAdapter Ethernet\n</code></pre>\n\n<p>Now create <code>C:\\Program Files\\containerd\\cni\\conf\\0-containerd-nat.conf</code> containing the following:</p>\n\n<div><div><pre><code>{\n \"cniVersion\": \"0.3.0\",\n \"name\": \"nat\",\n \"type\": \"nat\",\n \"master\": \"Ethernet\",\n \"ipam\": {\n \"subnet\": \"172.20.0.0/16\",\n \"routes\": [\n {\n \"gateway\": \"172.20.0.1\"\n }\n ]\n },\n \"capabilities\": {\n \"portMappings\": true,\n \"dns\": true\n }\n}\n</code></pre></div></div>\n\n<p>Easy when you know how\u2026</p>\n\n<pre><code>C:\\>ctr run --rm --cni mcr.microsoft.com/windows/nanoserver:ltsc2022 my-container ping 1.1.1.1\n\nPinging 1.1.1.1 with 32 bytes of data:\nReply from 1.1.1.1: bytes=32 time=5ms TTL=58\nReply from 1.1.1.1: bytes=32 time=7ms TTL=58\nReply from 1.1.1.1: bytes=32 time=7ms TTL=58\nReply from 1.1.1.1: bytes=32 time=6ms TTL=58\n\nPing statistics for 1.1.1.1:\n Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),\nApproximate round trip times in milli-seconds:\n Minimum = 5ms, Maximum = 7ms, Average = 6ms\n</code></pre>\n\n<p>The next challenge is, what do you put in your own <code>config.json</code> to reproduce this behaviour?</p>\n\n<p>Firstly, we need our <code>layerFolders</code>:</p>\n\n<pre><code>C:\\>ctr snapshot ls\nKEY PARENT KIND\nsha256:44b913d145adda5364b5465664644b11282ed3c4b9bd9739aa17832ee4b2b355 Committed\n</code></pre>\n\n<pre><code>C:\\>ctr snapshot prepare --mounts my-snapshot sha256:44b913d145adda5364b5465664644b11282ed3c4b9bd9739aa17832ee4b2b355\n[\n {\n \"Type\": \"windows-layer\",\n \"Source\": \"C:\\\\ProgramData\\\\containerd\\\\root\\\\io.containerd.snapshotter.v1.windows\\\\snapshots\\\\14\",\n \"Target\": \"\",\n \"Options\": [\n \"rw\",\n \"parentLayerPaths=[\\\"C:\\\\\\\\ProgramData\\\\\\\\containerd\\\\\\\\root\\\\\\\\io.containerd.snapshotter.v1.windows\\\\\\\\snapshots\\\\\\\\1\\\"]\"\n ]\n }\n]\n</code></pre>\n\n<p>Let\u2019s create a <code>config.json</code> without a network stanza just to check we can create a container:</p>\n\n<div><div><pre><code>{\n \"ociVersion\": \"1.1.0\",\n \"process\": {\n \"terminal\": false,\n \"user\": { \"uid\": 0, \"gid\": 0 },\n \"args\": [\n \"cmd\", \"/c\",\n \"ipconfig && ping 1.1.1.1\"\n ],\n \"cwd\": \"c:\\\\\"\n },\n \"root\": { \"path\": \"\", \"readonly\": false },\n \"hostname\": \"builder\",\n \"windows\": {\n \"layerFolders\": [\n \"C:\\\\ProgramData\\\\containerd\\\\root\\\\io.containerd.snapshotter.v1.windows\\\\snapshots\\\\1\",\n \"C:\\\\ProgramData\\\\containerd\\\\root\\\\io.containerd.snapshotter.v1.windows\\\\snapshots\\\\14\"\n ],\n \"ignoreFlushesDuringBoot\": true\n }\n}\n</code></pre></div></div>\n\n<p>The container runs, but there is no network as we\u2019d expect.</p>\n\n<pre><code>C:\\>ctr run --rm --config config.json my-container\n\nWindows IP Configuration\n\n\nPinging 1.1.1.1 with 32 bytes of data:\nPING: transmit failed. General failure.\nPING: transmit failed. General failure.\nPING: transmit failed. General failure.\nPING: transmit failed. 
General failure.\n</code></pre>\n\n<p>If we turn on CNI, it crypically tells us what we need to do:</p>\n\n<pre><code>C:\\>ctr run --rm --cni --config config.json my-container\nctr: plugin type=\"nat\" name=\"nat\" failed (add): required env variables [CNI_NETNS] missing\n</code></pre>\n\n<p>So we need to populate the <code>network.networkNamespace</code> with the name (ID) of the network we want to use. This should be a GUID, and I don\u2019t know how to get the right value. I would have assumed that it was one of the many GUID\u2019s returned by <code>Get-HnsNetwork</code> but it isn\u2019t.</p>\n\n<div><div><pre><code><span>PS</span><span> </span><span>C:\\</span><span>></span><span> </span><span>Get-HnsNetwork</span><span>\n\n\n</span><span>ActivityId</span><span> </span><span>:</span><span> </span><span>92018CF0-6DCB-4AAF-A14E-DC61120FC958</span><span>\n</span><span>AdditionalParams</span><span> </span><span>:</span><span>\n</span><span>CurrentEndpointCount</span><span> </span><span>:</span><span> </span><span>0</span><span>\n</span><span>Extensions</span><span> </span><span>:</span><span> </span><span>{@{</span><span>Id</span><span>=</span><span>E7C3B2F0</span><span>-</span><span>F3C5</span><span>-</span><span>48</span><span>DF</span><span>-</span><span>AF2B</span><span>-</span><span>10</span><span>FED6D72E7A</span><span>;</span><span> </span><span>IsEnabled</span><span>=</span><span>False</span><span>;</span><span> </span><span>Name</span><span>=</span><span>Microsoft</span><span> </span><span>Windows</span><span> </span><span>Filtering</span><span> </span><span>Platform</span><span>},</span><span>\n </span><span>@{</span><span>Id</span><span>=</span><span>F74F241B</span><span>-</span><span>440</span><span>F</span><span>-</span><span>4433</span><span>-</span><span>BB28</span><span>-</span><span>00</span><span>F89EAD20D8</span><span>;</span><span> </span><span>IsEnabled</span><span>=</span><span>False</span><span>;</span><span> </span><span>Name</span><span>=</span><span>Microsoft</span><span> </span><span>Azure</span><span> </span><span>VFP</span><span> </span><span>Switch</span><span> </span><span>Filter</span><span> </span><span>Extension</span><span>},</span><span>\n </span><span>@{</span><span>Id</span><span>=</span><span>430</span><span>BDADD</span><span>-</span><span>BAB0</span><span>-</span><span>41</span><span>AB</span><span>-</span><span>A369</span><span>-</span><span>94</span><span>B67FA5BE0A</span><span>;</span><span> </span><span>IsEnabled</span><span>=</span><span>True</span><span>;</span><span> </span><span>Name</span><span>=</span><span>Microsoft</span><span> </span><span>NDIS</span><span> </span><span>Capture</span><span>}}</span><span>\n</span><span>Flags</span><span> </span><span>:</span><span> </span><span>8</span><span>\n</span><span>Health</span><span> </span><span>:</span><span> </span><span>@{</span><span>LastErrorCode</span><span>=</span><span>0</span><span>;</span><span> </span><span>LastUpdateTime</span><span>=</span><span>133943927149605101</span><span>}</span><span>\n</span><span>ID</span><span> </span><span>:</span><span> </span><span>3EB2B18B-A1DD-46A8-A425-256F6B3DF26D</span><span>\n</span><span>IPv6</span><span> </span><span>:</span><span> </span><span>False</span><span>\n</span><span>LayeredOn</span><span> </span><span>:</span><span> </span><span>20791F67-012C-4C9B-9C93-530FDA5DE4FA</span><span>\n</span><span>MacPools</span><span> </span><span>:</span><span> 
</span><span>{@{</span><span>EndMacAddress</span><span>=</span><span>00</span><span>-</span><span>15</span><span>-</span><span>5</span><span>D</span><span>-</span><span>C3</span><span>-</span><span>DF</span><span>-</span><span>FF</span><span>;</span><span> </span><span>StartMacAddress</span><span>=</span><span>00</span><span>-</span><span>15</span><span>-</span><span>5</span><span>D</span><span>-</span><span>C3</span><span>-</span><span>D0</span><span>-</span><span>00</span><span>}}</span><span>\n</span><span>MaxConcurrentEndpoints</span><span> </span><span>:</span><span> </span><span>1</span><span>\n</span><span>Name</span><span> </span><span>:</span><span> </span><span>nat</span><span>\n</span><span>NatName</span><span> </span><span>:</span><span> </span><span>NATAC317D6D-8A2E-4E4E-9BCF-33435FE4CD8F</span><span>\n</span><span>Policies</span><span> </span><span>:</span><span> </span><span>{@{</span><span>Type</span><span>=</span><span>VLAN</span><span>;</span><span> </span><span>VLAN</span><span>=</span><span>1</span><span>}}</span><span>\n</span><span>State</span><span> </span><span>:</span><span> </span><span>1</span><span>\n</span><span>Subnets</span><span> </span><span>:</span><span> </span><span>{@{</span><span>AdditionalParams</span><span>=</span><span>;</span><span> </span><span>AddressPrefix</span><span>=</span><span>172.20.0.0</span><span>/</span><span>16</span><span>;</span><span> </span><span>Flags</span><span>=</span><span>0</span><span>;</span><span> </span><span>GatewayAddress</span><span>=</span><span>172.20.0.1</span><span>;</span><span> </span><span>Health</span><span>=</span><span>;</span><span>\n </span><span>ID</span><span>=</span><span>5</span><span>D56CE8D</span><span>-</span><span>1</span><span>AD2</span><span>-</span><span>47</span><span>FF</span><span>-</span><span>85</span><span>A7</span><span>-</span><span>A0E6D530565D</span><span>;</span><span> </span><span>IpSubnets</span><span>=</span><span>System</span><span>.</span><span>Object</span><span>[];</span><span> </span><span>ObjectType</span><span>=</span><span>5</span><span>;</span><span> </span><span>Policies</span><span>=</span><span>System</span><span>.</span><span>Object</span><span>[];</span><span> </span><span>State</span><span>=</span><span>0</span><span>}}</span><span>\n</span><span>SwitchGuid</span><span> </span><span>:</span><span> </span><span>3EB2B18B-A1DD-46A8-A425-256F6B3DF26D</span><span>\n</span><span>TotalEndpoints</span><span> </span><span>:</span><span> </span><span>2</span><span>\n</span><span>Type</span><span> </span><span>:</span><span> </span><span>NAT</span><span>\n</span><span>Version</span><span> </span><span>:</span><span> </span><span>64424509440</span><span>\n</span><span>Resources</span><span> </span><span>:</span><span> </span><span>@{</span><span>AdditionalParams</span><span>=</span><span>;</span><span> </span><span>AllocationOrder</span><span>=</span><span>2</span><span>;</span><span> </span><span>Allocators</span><span>=</span><span>System</span><span>.</span><span>Object</span><span>[];</span><span> </span><span>CompartmentOperationTime</span><span>=</span><span>0</span><span>;</span><span> </span><span>Flags</span><span>=</span><span>0</span><span>;</span><span> </span><span>Health</span><span>=</span><span>;</span><span>\n </span><span>ID</span><span>=</span><span>92018</span><span>CF0</span><span>-</span><span>6</span><span>DCB</span><span>-</span><span>4</span><span>AAF</span><span>-</span><span>A14E</span><span>-</span><span>DC61120FC958</span><span>;</span><span> 
</span><span>PortOperationTime</span><span>=</span><span>0</span><span>;</span><span> </span><span>State</span><span>=</span><span>1</span><span>;</span><span> </span><span>SwitchOperationTime</span><span>=</span><span>0</span><span>;</span><span> </span><span>VfpOperationTime</span><span>=</span><span>0</span><span>;</span><span>\n </span><span>parentId</span><span>=</span><span>71</span><span>FB2758</span><span>-</span><span>F714</span><span>-</span><span>4838</span><span>-</span><span>8764</span><span>-</span><span>7079378</span><span>D6CB6</span><span>}</span><span>\n</span></code></pre></div></div>\n\n<p>I ran <code>ctr run --rm --cni mcr.microsoft.com/windows/nanoserver:ltsc2022 my-container cmd /c \"ping 1.1.1.1 && pause\"</code> in one window and ran <code>ctr c info my-container</code> in another, which revealed a GUID was <code>5f7d467c-3011-48bc-9337-ce78cf399345</code>.</p>\n\n<p>Adding this to my <code>config.json</code></p>\n\n<div><div><pre><code>{\n \"ociVersion\": \"1.1.0\",\n \"process\": {\n \"terminal\": false,\n \"user\": { \"uid\": 0, \"gid\": 0 },\n \"args\": [\n \"cmd\", \"/c\",\n \"ipconfig && ping 1.1.1.1\"\n ],\n \"cwd\": \"c:\\\\\"\n },\n \"root\": { \"path\": \"\", \"readonly\": false },\n \"hostname\": \"builder\",\n \"windows\": {\n \"layerFolders\": [\n \"C:\\\\ProgramData\\\\containerd\\\\root\\\\io.containerd.snapshotter.v1.windows\\\\snapshots\\\\1\",\n \"C:\\\\ProgramData\\\\containerd\\\\root\\\\io.containerd.snapshotter.v1.windows\\\\snapshots\\\\14\"\n ],\n \"ignoreFlushesDuringBoot\": true,\n \"network\": {\n \"allowUnqualifiedDNSQuery\": true,\n \"networkNamespace\": \"5f7d467c-3011-48bc-9337-ce78cf399345\"\n }\n }\n}\n</code></pre></div></div>\n\n<p>And now I have a network!</p>\n\n<pre><code>C:\\>ctr run --rm --cni --config config.json my-container\n\nWindows IP Configuration\n\n\nEthernet adapter vEthernet (default-my-container2_nat):\n\n Connection-specific DNS Suffix . : Home\n Link-local IPv6 Address . . . . . : fe80::921d:1ce7:a445:8dfa%49\n IPv4 Address. . . . . . . . . . . : 172.20.95.58\n Subnet Mask . . . . . . . . . . . : 255.255.0.0\n Default Gateway . . . . . . . . . : 172.20.0.1\n\nPinging 1.1.1.1 with 32 bytes of data:\nReply from 1.1.1.1: bytes=32 time=5ms TTL=58\nReply from 1.1.1.1: bytes=32 time=6ms TTL=58\nReply from 1.1.1.1: bytes=32 time=6ms TTL=58\nReply from 1.1.1.1: bytes=32 time=6ms TTL=58\n\nPing statistics for 1.1.1.1:\n Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),\nApproximate round trip times in milli-seconds:\n Minimum = 5ms, Maximum = 6ms, Average = 5ms\n</code></pre>",
+20
mte/2025_06_17_static-linking.json
+20
mte/2025_06_17_static-linking.json
···+"summary": "Most of the time, you don\u2019t think about how your file is linked. We\u2019ve come to love dynamically linked files with their small file sizes and reduced memory requirements, but there are times when the convenience of a single binary download from a GitHub release page is really what you need.",+"content": "<p>Most of the time, you don\u2019t think about how your file is linked. We\u2019ve come to love dynamically linked files with their small file sizes and reduced memory requirements, but there are times when the convenience of a single binary download from a GitHub release page is really what you need.</p>\n\n<p>To do this in OCaml, we need to add <code>-ccopt -static</code> to the <code>ocamlopt</code>. I\u2019m building with <code>dune</code>, so I can configure that in my <code>dune</code> file using a <code>flags</code> directive.</p>\n\n<div><div><pre><code>(flags (:standard -ccopt -static))\n</code></pre></div></div>\n\n<p>This can be extended for maximum compatibility by additionally adding <code>-ccopt -march=x86-64</code>, which ensures the generated code will run on any x86_64 processor and will not use newer instruction set extensions like SSE3, AVX, etc.</p>\n\n<p>So what about Windows? The Mingw tool chain accepts <code>-static</code>. Including <code>(flags (:standard -ccopt \"-link -Wl,-static -v\"))</code> got my options applied to my <code>dune</code> build:</p>\n\n<div><div><pre><code>x86_64-w64-mingw32-gcc -mconsole -L. -I\"C:/Users/Administrator/my-app/_opam/lib/ocaml\" -I\"C:\\Users\\Administrator\\my-app\\_opam\\lib\\mccs\" -I\"C:\\Users\\Administrator\\my-app\\_opam\\lib\\mccs\\glpk/internal\" -I\"C:\\Users\\Administrator\\my-app\\_opam\\lib\\opam-core\" -I\"C:\\Users\\Administrator\\my-app\\_opam\\lib\\sha\" -I\"C:/Users/Administrator/my-app/_opam/lib/ocaml\\flexdll\" -L\"C:/Users/Administrator/my-app/_opam/lib/ocaml\" -L\"C:\\Users\\Administrator\\my-app\\_opam\\lib\\mccs\" -L\"C:\\Users\\Administrator\\my-app\\_opam\\lib\\mccs\\glpk/internal\" -L\"C:\\Users\\Administrator\\my-app\\_opam\\lib\\opam-core\" -L\"C:\\Users\\Administrator\\my-app\\_opam\\lib\\sha\" -L\"C:/Users/Administrator/my-app/_opam/lib/ocaml\\flexdll\" -o \"bin/main.exe\" \"C:\\Users\\ADMINI~1\\AppData\\Local\\Temp\\2\\build_d62d04_dune\\dyndllb7e0e8.o\" \"@C:\\Users\\ADMINI~1\\AppData\\Local\\Temp\\2\\build_d62d04_dune\\camlrespec7816\" \"-municode\" \"-Wl,-static\"\n</code></pre></div></div>\n\n<p>However, <code>ldd</code> showed that this wasn\u2019t working:</p>\n\n<div><div><pre><code>$ ldd main.exe | grep mingw\n libstdc++-6.dll => /mingw64/bin/libstdc++-6.dll (0x7ffabf3e0000)\n libgcc_s_seh-1.dll => /mingw64/bin/libgcc_s_seh-1.dll (0x7ffac3130000)\n libwinpthread-1.dll => /mingw64/bin/libwinpthread-1.dll (0x7ffac4b40000)\n</code></pre></div></div>\n\n<p>I tried <em>a lot</em> of different variations. I asked Claude\u2026 then I asked <a href=\"https://www.dra27.uk/blog/\">@dra27</a> who recalled @kit-ty-kate working on this for opam. <a href=\"https://github.com/ocaml/opam/pull/5680\">PR#5680</a></p>\n\n<p>The issue is the auto-response file, which precedes my static option. 
We can remove that by adding <code>-noautolink</code>, but now we must do all the work by hand and build a massive command line.</p>\n\n<div><div><pre><code>(executable\n (public_name main)\n (name main)\n (flags (:standard -noautolink -cclib -lunixnat -cclib -lmccs_stubs -cclib -lmccs_glpk_stubs -cclib -lsha_stubs -cclib -lopam_core_stubs -cclib -l:libstdc++.a -cclib -l:libpthread.a -cclib -Wl,-static -cclib -ladvapi32 -cclib -lgdi32 -cclib -luser32 -cclib -lshell32 -cclib -lole32 -cclib -luuid -cclib -luserenv -cclib -lwindowsapp))\n (libraries opam-client))\n</code></pre></div></div>\n\n<p>It works, but it\u2019s not for the faint-hearted.</p>\n\n<p>I additionally added <code>(enabled_if (= %{os_type} Win32))</code> to my rule so it only runs on Windows.</p>",
+20
mte/2025_06_18_windows-reflinks.json
+20
mte/2025_06_18_windows-reflinks.json
···+"summary": "Who knew there was a limit on creating hard links? I didn\u2019t even consider this until my hard links started to fail. On NTFS, the limit is 1024 links to any given file. Subsequent research shows that the limit varies between file systems, with NTFS at the lower end of the scale.",+"content": "<p>Who knew there was a limit on creating hard links? I didn\u2019t even consider this until my hard links started to fail. On NTFS, the limit is 1024 links to any given file. Subsequent research shows that the limit varies between file systems, with NTFS at the lower end of the scale.</p>\n\n<p>Here\u2019s an excerpt from <a href=\"https://en.wikipedia.org/wiki/Hard_link\">Wikipedia</a> on the subject.</p>\n\n<blockquote>\n <p>In AT&T Unix System 6, released in 1975, the number of hard links allowed was 127. On Unix-like systems, the in-memory counter is 4,294,967,295 (on 32-bit machines) or 18,446,744,073,709,551,615 (on 64-bit machines). In some file systems, the number of hard links is limited more strictly by their on-disk format. For example, as of Linux 3.11, the ext4 file system limits the number of hard links on a file to 65,000. Windows limits enforces a limit of 1024 hard links to a file on NTFS volumes.</p>\n</blockquote>\n\n<p>This restriction probably doesn\u2019t even come close to being a practical limit for most normal use cases, but it\u2019s worth noting that <code>git.exe</code> has 142 hard links on a standard Cygwin installation.</p>\n\n<div><div><pre><code>fsutil hardlink list %LOCALAPPDATA%\\opam\\.cygwin\\root\\bin\\git.exe\n</code></pre></div></div>\n\n<p>Back in 2012, Microsoft released ReFS as an alternative to NTFS. The feature gap has closed over the years, with hard links being introduced in the preview of Windows Server 2022. ReFS supports 1 million hard links per file, but even more interestingly, it supports <a href=\"https://learn.microsoft.com/en-us/windows/win32/fileio/block-cloning\">block cloning</a>, aka <a href=\"https://blogs.oracle.com/linux/post/xfs-data-block-sharing-reflink\">reflinks</a>, whereby files can share common data blocks. When changes are written to a block, it is copied, and its references are updated.</p>\n\n<p>The implementation is interesting because it doesn\u2019t work in quite the way that one would think. It can only be used to clone complete clusters. Therefore, we must first call <a href=\"https://learn.microsoft.com/en-us/windows/win32/api/winioctl/ni-winioctl-fsctl_get_integrity_information\">FSCTL_GET_INTEGRITY_INFORMATION</a>, which returns <a href=\"https://learn.microsoft.com/en-us/windows/win32/api/winioctl/ns-winioctl-fsctl_get_integrity_information_buffer\">FSCTL_GET_INTEGRITY_INFORMATION_BUFFER</a> with the cluster size in bytes.</p>\n\n<p>Despite <a href=\"https://learn.microsoft.com/en-us/windows/win32/api/winioctl/ni-winioctl-fsctl_duplicate_extents_to_file\">FSCTL_DUPLICATE_EXTENTS_TO_FILE</a> taking an exact number of bytes, we must round up the file size to the next cluster boundary.</p>\n\n<p>Additionally, the target file needs to exist before the clone and be large enough to hold the cloned clusters. 
In practice, this means calling <a href=\"https://learn.microsoft.com/en-us/windows/win32/api/fileapi/nf-fileapi-createfilew\">CreateFileW</a> to create the file and then calling <a href=\"https://learn.microsoft.com/en-us/windows/win32/api/fileapi/nf-fileapi-setfileinformationbyhandle\">SetFileInformationByHandle</a> to set the file size to match the source file (not the rounded cluster size).</p>\n\n<p>Taking an example file of 23075 bytes, this would be rounded to 24576 bytes (6 clusters). We can use <code>fsutil file queryextents</code> to get detailed information about the clusters used in the source file:</p>\n\n<div><div><pre><code>D:\\> fsutil file queryextents source.txt\nVCN: 0x0 Clusters: 0x6 LCN: 0x2d3d801\n</code></pre></div></div>\n\n<p>Now we clone the file <code>ReFS-clone d:\\source.txt d:\\target.txt</code> and then query the extents which it uses.</p>\n\n<div><div><pre><code>D:\\> fsutil file queryextents target.txt\nVCN: 0x0 Clusters: 0x5 LCN: 0x2d3d801\nVCN: 0x5 Clusters: 0x1 LCN: 0x2d3c801\n</code></pre></div></div>\n\n<p>The first five whole clusters are shared between the two files, while the final partial cluster has been copied. When trying to implement this, I initially used a text file of just a few bytes and couldn\u2019t get it to clone. After I rounded up the size to 4096, the API returned successfully, but there were no shared clusters. It wasn\u2019t until I tried a larger file with the size rounded up that I started to see actual shared clusters.</p>\n\n<div><div><pre><code>D:\\>echo hello > foo.txt\n\nD:\\>fsutil file queryextents foo.txt\nVCN: 0x0 Clusters: 0x1 LCN: 0x2d3dc04\n\nD:\\>ReFS-clone.exe foo.txt bar.txt\nReFS File Clone Utility\nReFS Clone: foo.txt -> bar.txt\nCluster size: 4096 bytes\nFile size: 8 bytes -> 4096 bytes (1 clusters)\nCloning 4096 bytes...\nSuccess!\nReFS cloning completed successfully.\n\nD:\\>fsutil file queryextents bar.txt\nVCN: 0x0 Clusters: 0x1 LCN: 0x2d3d807\n</code></pre></div></div>\n\n<p>The code is on GitHub in <a href=\"https://github.com/mtelvers/ReFS-Clone\">ReFS-Clone</a>.</p>",
+20
mte/2025_06_20_tailscale.json
+20
mte/2025_06_20_tailscale.json
···+"summary": "On a typical day, I sit at my antique Mac Pro Trashcan with every window running SSH to some remote machine. When I\u2019m away from home and using my MacBook, I can still SSH to those remote machines; however, with my recent Windows work, I\u2019ve been connecting to a Dell OptiPlex on my home LAN over Remote Desktop. How can I work remotely when I want to access my Windows machine?",+"content": "<p>On a typical day, I sit at my antique Mac Pro Trashcan with every window running SSH to some remote machine. When I\u2019m away from home and using my MacBook, I can still SSH to those remote machines; however, with my recent Windows work, I\u2019ve been connecting to a Dell OptiPlex on my home LAN over Remote Desktop. How can I work remotely when I want to access my Windows machine?</p>\n\n<p>It\u2019s the age-old issue of connecting to your home network, which is hidden behind your home broadband router with a dynamic public IP address. I could use a dynamic DNS service to track my home router and configure port forwarding, but would you open RDP to the Internet?</p>\n\n<p>I love VNC, but the recent change in the licensing model, whereby the free tier now has only three machines, combined with frustrating performance on the low bandwidth and intermittent connections we get on train WiFi, made me try an alternate solution. Thomas has Tailscale set up in the Paris office, and I decided to create a setup for home.</p>\n\n<p>I\u2019d rather not install any software on my Windows machine, as I wipe it pretty frequently, and I don\u2019t need a VPN interfering with my <code>containerd</code> implementation. However, Tailscale supports a configuration whereby you can route to local networks.</p>\n\n<p>After signing up for a free personal account, I installed the Tailscale client on my MacBook and Mac Pro (at home). On the Mac Pro, I enabled \u2018Allow Local Network Access\u2019 and from a Terminal window, I went to <code>/Applications/Tailscale.app/Contents/MacOS</code> and ran <code>./Tailscale set --advertise-routes=192.168.0.0/24</code>. With this done, looking at the machine list on the <a href=\"https://login.tailscale.com/admin/machines\">Tailscale console</a>, my Mac Pro lists <code>Subnets</code>. Clicking on the three dots, and opening <code>Edit route settings</code>, I could enable the advertised subnet, 192.168.0.0/24.</p>\n\n<p>Checking <code>netstat -rn</code> on my MacBook shows that 192.168.0 is routed over the VPN.</p>\n\n<div><div><pre><code>Routing tables\n\nInternet:\nDestination Gateway Flags Netif Expire\ndefault 10.101.2.1 UGScg en0\ndefault link#36 UCSIg utun12\n10.101.2/24 link#6 UCS en0 !\n10.101.2.1/32 link#6 UCS en0 !\n...\n192.168.0 link#36 UCS utun12\n...\n</code></pre></div></div>\n\n<p>From my MacBook, I can now use Microsoft Remote Desktop to connect to the private IP address of my Windows machine.</p>\n\n<p>OpenSSH is an optional feature on Windows 11. It can be turned on via Settings -> Apps -> Optional Features, clicking \u201cAdd a feature\u201d and installing \u201cOpenSSH Server\u201d. Then, Open Services and set the setup options for \u201cOpenSSH SSH Server\u201d to automatic.</p>\n\n<p>It didn\u2019t make the train WiFi any better, but connecting over SSH was pretty convenient when the bandwidth is low.</p>\n\n<p>Note that you may want to disable key expiry on your home machine; otherwise, it might require you to reauthenticate at a critical moment.</p>",
+20
mte/2025_06_21_macos-sequoia-include-path.json
+20
mte/2025_06_21_macos-sequoia-include-path.json
···+"summary": "@mseri raised issue #175 as the macOS workers cannot find the most basic C++ headers. I easily eliminated Obuilder, as opam install mccs.1.1+19 didn\u2019t work on the macOS workers natively.",+"content": "<p>@mseri raised <a href=\"https://github.com/ocaml/infrastructure/issues/175\">issue #175</a> as the macOS workers cannot find the most basic C++ headers. I easily eliminated <a href=\"https://github.com/ocurrent/obuilder\">Obuilder</a>, as <code>opam install mccs.1.1+19</code> didn\u2019t work on the macOS workers natively.</p>\n\n<p>On face value, the problem appears pretty common, and there are numerous threads on <a href=\"https://stackoverflow.com\">Stack Overflow</a> such as this <a href=\"https://stackoverflow.com/questions/77250743/mac-xcode-g-cannot-compile-even-a-basic-c-program-issues-with-standard-libr\">one</a>, however, the resolutions I tried didn\u2019t work. I was reluctant to try some of the more intrusive changes like creating a symlink of every header from <code>/usr/include/</code> to <code>/Library/Developer/CommandLineTools/usr/include/c++/v1</code> as this doesn\u2019t seem to be what Apple intends.</p>\n\n<p>For the record, a program such as this:</p>\n\n<div><div><pre><code><span>#include</span> <span><iostream></span><span>\n</span>\n<span>using</span> <span>namespace</span> <span>std</span><span>;</span>\n\n<span>int</span> <span>main</span><span>()</span> <span>{</span>\n <span>cout</span> <span><<</span> <span>\"Hello World!\"</span> <span><<</span> <span>endl</span><span>;</span>\n <span>return</span> <span>0</span><span>;</span>\n<span>}</span>\n</code></pre></div></div>\n\n<p>Fails like this:</p>\n\n<div><div><pre><code>% c++ hello.cpp <span>-o</span> hello <span>-v</span>\nApple clang version 17.0.0 <span>(</span>clang-1700.0.13.3<span>)</span>\nTarget: x86_64-apple-darwin24.5.0\nThread model: posix\nInstalledDir: /Library/Developer/CommandLineTools/usr/bin\n <span>\"/Library/Developer/CommandLineTools/usr/bin/clang\"</span> <span>-cc1</span> <span>-triple</span> x86_64-apple-macosx15.0.0 <span>-Wundef-prefix</span><span>=</span>TARGET_OS_ <span>-Wdeprecated-objc-isa-usage</span> <span>-Werror</span><span>=</span>deprecated-objc-isa-usage <span>-Werror</span><span>=</span>implicit-function-declaration <span>-emit-obj</span> <span>-dumpdir</span> hello- <span>-disable-free</span> <span>-clear-ast-before-backend</span> <span>-disable-llvm-verifier</span> <span>-discard-value-names</span> <span>-main-file-name</span> hello.cpp <span>-mrelocation-model</span> pic <span>-pic-level</span> 2 <span>-mframe-pointer</span><span>=</span>all <span>-fno-strict-return</span> <span>-ffp-contract</span><span>=</span>on <span>-fno-rounding-math</span> <span>-funwind-tables</span><span>=</span>2 <span>-target-sdk-version</span><span>=</span>15.4 <span>-fvisibility-inlines-hidden-static-local-var</span> <span>-fdefine-target-os-macros</span> <span>-fno-assume-unique-vtables</span> <span>-fno-modulemap-allow-subdirectory-search</span> <span>-target-cpu</span> penryn <span>-tune-cpu</span> generic <span>-debugger-tuning</span><span>=</span>lldb <span>-fdebug-compilation-dir</span><span>=</span>/Users/administrator/x <span>-target-linker-version</span> 1167.4.1 <span>-v</span> <span>-fcoverage-compilation-dir</span><span>=</span>/Users/administrator/x <span>-resource-dir</span> /Library/Developer/CommandLineTools/usr/lib/clang/17 <span>-isysroot</span> /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk <span>-internal-isystem</span> 
/Library/Developer/CommandLineTools/usr/bin/../include/c++/v1 <span>-internal-isystem</span> /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/local/include <span>-internal-isystem</span> /Library/Developer/CommandLineTools/usr/lib/clang/17/include <span>-internal-externc-isystem</span> /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include <span>-internal-externc-isystem</span> /Library/Developer/CommandLineTools/usr/include <span>-Wno-reorder-init-list</span> <span>-Wno-implicit-int-float-conversion</span> <span>-Wno-c99-designator</span> <span>-Wno-final-dtor-non-final-class</span> <span>-Wno-extra-semi-stmt</span> <span>-Wno-misleading-indentation</span> <span>-Wno-quoted-include-in-framework-header</span> <span>-Wno-implicit-fallthrough</span> <span>-Wno-enum-enum-conversion</span> <span>-Wno-enum-float-conversion</span> <span>-Wno-elaborated-enum-base</span> <span>-Wno-reserved-identifier</span> <span>-Wno-gnu-folding-constant</span> <span>-fdeprecated-macro</span> <span>-ferror-limit</span> 19 <span>-stack-protector</span> 1 <span>-fstack-check</span> <span>-mdarwin-stkchk-strong-link</span> <span>-fblocks</span> <span>-fencode-extended-block-signature</span> <span>-fregister-global-dtors-with-atexit</span> <span>-fgnuc-version</span><span>=</span>4.2.1 <span>-fno-cxx-modules</span> <span>-fskip-odr-check-in-gmf</span> <span>-fcxx-exceptions</span> <span>-fexceptions</span> <span>-fmax-type-align</span><span>=</span>16 <span>-fcommon</span> <span>-fcolor-diagnostics</span> <span>-clang-vendor-feature</span><span>=</span>+disableNonDependentMemberExprInCurrentInstantiation <span>-fno-odr-hash-protocols</span> <span>-clang-vendor-feature</span><span>=</span>+enableAggressiveVLAFolding <span>-clang-vendor-feature</span><span>=</span>+revert09abecef7bbf <span>-clang-vendor-feature</span><span>=</span>+thisNoAlignAttr <span>-clang-vendor-feature</span><span>=</span>+thisNoNullAttr <span>-clang-vendor-feature</span><span>=</span>+disableAtImportPrivateFrameworkInImplementationError <span>-D__GCC_HAVE_DWARF2_CFI_ASM</span><span>=</span>1 <span>-o</span> /var/folders/sh/9c8b7hzd2wb1g2_ky78vqw5r0000gn/T/hello-a268ab.o <span>-x</span> c++ hello.cpp\nclang <span>-cc1</span> version 17.0.0 <span>(</span>clang-1700.0.13.3<span>)</span> default target x86_64-apple-darwin24.5.0\nignoring nonexistent directory <span>\"/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/local/include\"</span>\nignoring nonexistent directory <span>\"/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/System/Library/SubFrameworks\"</span>\nignoring nonexistent directory <span>\"/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/Library/Frameworks\"</span>\n<span>#include \"...\" search starts here:</span>\n<span>#include <...> search starts here:</span>\n /Library/Developer/CommandLineTools/usr/bin/../include/c++/v1\n /Library/Developer/CommandLineTools/usr/lib/clang/17/include\n /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include\n /Library/Developer/CommandLineTools/usr/include\n /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/System/Library/Frameworks <span>(</span>framework directory<span>)</span>\nEnd of search list.\nhello.cpp:1:10: fatal error: <span>'iostream'</span> file not found\n 1 | <span>#include <iostream></span>\n | ^~~~~~~~~~\n1 error generated.\n</code></pre></div></div>\n\n<p>That first folder looked strange: <code>bin/../include/c++/v1</code>. Really? What\u2019s in there? 
Not much:</p>\n\n<div><div><pre><code>% <span>ls</span> <span>-l</span> /Library/Developer/CommandLineTools/usr/bin/../include/c++/v1\ntotal 40\n<span>-rw-r--r--</span> 1 root wheel 44544 7 Apr 2022 __functional_03\n<span>-rw-r--r--</span> 1 root wheel 6532 7 Apr 2022 __functional_base_03\n<span>-rw-r--r--</span> 1 root wheel 2552 7 Apr 2022 __sso_allocator\n</code></pre></div></div>\n\n<p>I definitely have <code>iostream</code> on the machine:</p>\n\n<div><div><pre><code>% <span>ls</span> <span>-l</span> /Library/Developer/CommandLineTools/SDKs/MacOSX<span>*</span>.sdk/usr/include/c++/v1/iostream\n<span>-rw-r--r--</span> 1 root wheel 1507 8 Mar 03:36 /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/c++/v1/iostream\n<span>-rw-r--r--</span> 1 root wheel 1391 13 Nov 2021 /Library/Developer/CommandLineTools/SDKs/MacOSX12.1.sdk/usr/include/c++/v1/iostream\n<span>-rw-r--r--</span> 1 root wheel 1583 13 Apr 2024 /Library/Developer/CommandLineTools/SDKs/MacOSX14.5.sdk/usr/include/c++/v1/iostream\n<span>-rw-r--r--</span> 1 root wheel 1583 13 Apr 2024 /Library/Developer/CommandLineTools/SDKs/MacOSX14.sdk/usr/include/c++/v1/iostream\n<span>-rw-r--r--</span> 1 root wheel 1583 10 Nov 2024 /Library/Developer/CommandLineTools/SDKs/MacOSX15.2.sdk/usr/include/c++/v1/iostream\n<span>-rw-r--r--</span> 1 root wheel 1507 8 Mar 03:36 /Library/Developer/CommandLineTools/SDKs/MacOSX15.4.sdk/usr/include/c++/v1/iostream\n<span>-rw-r--r--</span> 1 root wheel 1507 8 Mar 03:36 /Library/Developer/CommandLineTools/SDKs/MacOSX15.sdk/usr/include/c++/v1/iostream\n</code></pre></div></div>\n\n<p>I tried on my MacBook, which compiled the test program without issue. However, that was running Monterey, where the workers are running Sequoia. The <em>include</em> paths on my laptop look much better. 
Where are they configured?</p>\n\n<div><div><pre><code>% c++ <span>-v</span> <span>-o</span> <span>test </span>test.cpp\nApple clang version 15.0.0 <span>(</span>clang-1500.3.9.4<span>)</span>\nTarget: x86_64-apple-darwin23.5.0\nThread model: posix\nInstalledDir: /Library/Developer/CommandLineTools/usr/bin\n <span>\"/Library/Developer/CommandLineTools/usr/bin/clang\"</span> <span>-cc1</span> <span>-triple</span> x86_64-apple-macosx14.0.0 <span>-Wundef-prefix</span><span>=</span>TARGET_OS_ <span>-Wdeprecated-objc-isa-usage</span> <span>-Werror</span><span>=</span>deprecated-objc-isa-usage <span>-Werror</span><span>=</span>implicit-function-declaration <span>-emit-obj</span> <span>-mrelax-all</span> <span>--mrelax-relocations</span> <span>-disable-free</span> <span>-clear-ast-before-backend</span> <span>-disable-llvm-verifier</span> <span>-discard-value-names</span> <span>-main-file-name</span> test.cpp <span>-mrelocation-model</span> pic <span>-pic-level</span> 2 <span>-mframe-pointer</span><span>=</span>all <span>-fno-strict-return</span> <span>-ffp-contract</span><span>=</span>on <span>-fno-rounding-math</span> <span>-funwind-tables</span><span>=</span>2 <span>-target-sdk-version</span><span>=</span>14.4 <span>-fvisibility-inlines-hidden-static-local-var</span> <span>-target-cpu</span> penryn <span>-tune-cpu</span> generic <span>-debugger-tuning</span><span>=</span>lldb <span>-target-linker-version</span> 1053.12 <span>-v</span> <span>-fcoverage-compilation-dir</span><span>=</span>/Users/mtelvers/x <span>-resource-dir</span> /Library/Developer/CommandLineTools/usr/lib/clang/15.0.0 <span>-isysroot</span> /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk <span>-I</span>/usr/local/include <span>-internal-isystem</span> /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/c++/v1 <span>-internal-isystem</span> /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/local/include <span>-internal-isystem</span> /Library/Developer/CommandLineTools/usr/lib/clang/15.0.0/include <span>-internal-externc-isystem</span> /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include <span>-internal-externc-isystem</span> /Library/Developer/CommandLineTools/usr/include <span>-Wno-reorder-init-list</span> <span>-Wno-implicit-int-float-conversion</span> <span>-Wno-c99-designator</span> <span>-Wno-final-dtor-non-final-class</span> <span>-Wno-extra-semi-stmt</span> <span>-Wno-misleading-indentation</span> <span>-Wno-quoted-include-in-framework-header</span> <span>-Wno-implicit-fallthrough</span> <span>-Wno-enum-enum-conversion</span> <span>-Wno-enum-float-conversion</span> <span>-Wno-elaborated-enum-base</span> <span>-Wno-reserved-identifier</span> <span>-Wno-gnu-folding-constant</span> <span>-fdeprecated-macro</span> <span>-fdebug-compilation-dir</span><span>=</span>/Users/mtelvers/x <span>-ferror-limit</span> 19 <span>-stack-protector</span> 1 <span>-fstack-check</span> <span>-mdarwin-stkchk-strong-link</span> <span>-fblocks</span> <span>-fencode-extended-block-signature</span> <span>-fregister-global-dtors-with-atexit</span> <span>-fgnuc-version</span><span>=</span>4.2.1 <span>-fno-cxx-modules</span> <span>-fcxx-exceptions</span> <span>-fexceptions</span> <span>-fmax-type-align</span><span>=</span>16 <span>-fcommon</span> <span>-fcolor-diagnostics</span> <span>-clang-vendor-feature</span><span>=</span>+disableNonDependentMemberExprInCurrentInstantiation <span>-fno-odr-hash-protocols</span> <span>-clang-vendor-feature</span><span>=</span>+enableAggressiveVLAFolding 
<span>-clang-vendor-feature</span><span>=</span>+revert09abecef7bbf <span>-clang-vendor-feature</span><span>=</span>+thisNoAlignAttr <span>-clang-vendor-feature</span><span>=</span>+thisNoNullAttr <span>-mllvm</span> <span>-disable-aligned-alloc-awareness</span><span>=</span>1 <span>-D__GCC_HAVE_DWARF2_CFI_ASM</span><span>=</span>1 <span>-o</span> /var/folders/15/4zw4hb9s40b8cmff3z5bdszc0000gp/T/test-71e229.o <span>-x</span> c++ test.cpp\nclang <span>-cc1</span> version 15.0.0 <span>(</span>clang-1500.3.9.4<span>)</span> default target x86_64-apple-darwin23.5.0\nignoring nonexistent directory <span>\"/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/local/include\"</span>\nignoring nonexistent directory <span>\"/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/Library/Frameworks\"</span>\n<span>#include \"...\" search starts here:</span>\n<span>#include <...> search starts here:</span>\n /usr/local/include\n /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/c++/v1\n /Library/Developer/CommandLineTools/usr/lib/clang/15.0.0/include\n /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include\n /Library/Developer/CommandLineTools/usr/include\n /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/System/Library/Frameworks <span>(</span>framework directory<span>)</span>\nEnd of search list.\n <span>\"/Library/Developer/CommandLineTools/usr/bin/ld\"</span> <span>-demangle</span> <span>-lto_library</span> /Library/Developer/CommandLineTools/usr/lib/libLTO.dylib <span>-no_deduplicate</span> <span>-dynamic</span> <span>-arch</span> x86_64 <span>-platform_version</span> macos 14.0.0 14.4 <span>-syslibroot</span> /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk <span>-o</span> <span>test</span> <span>-L</span>/usr/local/lib /var/folders/15/4zw4hb9s40b8cmff3z5bdszc0000gp/T/test-71e229.o <span>-lc</span>++ <span>-lSystem</span> /Library/Developer/CommandLineTools/usr/lib/clang/15.0.0/lib/darwin/libclang_rt.osx.a\n</code></pre></div></div>\n\n<p>I\u2019ve been meaning to upgrade my MacBook, and this looked like the perfect excuse. I updated to Sequoia and then updated the Xcode command-line tools. The test compilation worked, the paths looked good, but I had clang 1700.0.13.5, where the workers had 1700.0.13.3.</p>\n\n<div><div><pre><code>% c++ <span>-v</span> <span>-o</span> <span>test </span>test.cpp\nApple clang version 17.0.0 <span>(</span>clang-1700.0.13.5<span>)</span>\nTarget: x86_64-apple-darwin24.5.0\nThread model: posix\nInstalledDir: /Library/Developer/CommandLineTools/usr/bin\n</code></pre></div></div>\n\n<p>I updated the workers to 1700.0.13.5, which didn\u2019t make any difference. The workers still had that funny <code>/../</code> path, which wasn\u2019t present anywhere else. I searched <code>/Library/Developer/CommandLineTools/usr/bin/../include/c++/v1 site:stackoverflow.com</code> and the answer is the top <a href=\"https://stackoverflow.com/a/79606435\">match</a>.</p>\n\n<blockquote>\n <p>Rename or if you\u2019re confident enough, delete /Library/Developer/CommandLineTools/usr/include/c++, then clang++ will automatically search headers under /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/c++/v1 and find your header. That directory is very likely an artifact of OS upgrade and by deleting it clang++ will realise that it should search in the header paths of new SDKs.</p>\n</blockquote>\n\n<p>I wasn\u2019t confident, so I moved it, <code>sudo mv c++ ~</code>. With that done, the test program builds correctly! 
Have a read of the <a href=\"https://stackoverflow.com/a/79606435\">answer</a> on Stack Overflow.</p>\n\n<p>Now, rather more cavalierly, I removed the folder on all the i7 and m1 workers:</p>\n\n<div><div><pre><code><span>$ </span><span>for </span>a <span>in</span> <span>{</span>01..04<span>}</span> <span>;</span> <span>do </span>ssh m1-worker-<span>$a</span>.macos.ci.dev <span>sudo rm</span> <span>-r</span> /Library/Developer/CommandLineTools/usr/include/c++ <span>;</span> <span>done</span>\n</code></pre></div></div>",
+20
mte/2025_06_23_transitive-reduction.json
+20
mte/2025_06_23_transitive-reduction.json
···+"summary": "I have previously written about using a topological sort of a directed acyclic graph (DAG) of package dependencies to create an ordered list of installation operations. I now want to create a transitive reduction, giving a graph with the same vertices and the fewest number of edges possible.",+"content": "<p>I have previously written about using a <a href=\"https://www.tunbury.org/topological-sort/\">topological sort</a> of a directed acyclic graph (DAG) of package dependencies to create an ordered list of installation operations. I now want to create a transitive reduction, giving a graph with the same vertices and the fewest number of edges possible.</p>\n\n<p>This is interesting in opam, where a typical package is defined to depend upon both OCaml and Dune. However, Dune depends upon OCaml, so minimally the package only depends upon Dune. For opam, we would typically list both, as they may have version constraints.</p>\n\n<div><div><pre><code><span>depends</span><span>:</span> <span>[</span>\n <span>\"</span><span>dune\"</span> <span>{</span><span>></span><span>= \"3.17\"</span><span>}</span>\n <span>\"</span><span>ocaml\"</span>\n<span>]</span>\n</code></pre></div></div>\n\n<p>Given a topologically sorted list of packages, we can fold over the list to build a map of the packages and dependencies. As each package is considered in turn, it must either have no dependencies or the dependent package must already be in the map.</p>\n\n<div><div><pre><code><span>let</span> <span>pkg_deps</span> <span>solution</span> <span>=</span>\n <span>List</span><span>.</span><span>fold_left</span> <span>(</span><span>fun</span> <span>map</span> <span>pkg</span> <span>-></span>\n <span>let</span> <span>deps_direct</span> <span>=</span> <span>PackageMap</span><span>.</span><span>find</span> <span>pkg</span> <span>solution</span> <span>in</span>\n <span>let</span> <span>deps_plus_children</span> <span>=</span> <span>PackageSet</span><span>.</span><span>fold</span> <span>(</span><span>fun</span> <span>pkg</span> <span>acc</span> <span>-></span>\n <span>PackageSet</span><span>.</span><span>union</span> <span>acc</span> <span>(</span><span>PackageMap</span><span>.</span><span>find</span> <span>pkg</span> <span>map</span><span>))</span> <span>deps_direct</span> <span>deps_direct</span> <span>in</span>\n <span>PackageMap</span><span>.</span><span>add</span> <span>pkg</span> <span>deps_plus_children</span> <span>map</span><span>)</span> <span>PackageMap</span><span>.</span><span>empty</span><span>;;</span>\n</code></pre></div></div>\n\n<p>To generate the transitive reduction, take each set of dependencies for every package in the solution and remove those where the package is a member of the set of all the dependencies of any other directly descendant package.</p>\n\n<div><div><pre><code><span>let</span> <span>reduce</span> <span>dependencies</span> <span>=</span>\n <span>PackageMap</span><span>.</span><span>map</span> <span>(</span><span>fun</span> <span>u</span> <span>-></span>\n <span>PackageSet</span><span>.</span><span>filter</span> <span>(</span><span>fun</span> <span>v</span> <span>-></span>\n <span>let</span> <span>others</span> <span>=</span> <span>PackageSet</span><span>.</span><span>remove</span> <span>v</span> <span>u</span> <span>in</span>\n <span>PackageSet</span><span>.</span><span>fold</span> <span>(</span><span>fun</span> <span>o</span> <span>acc</span> <span>-></span>\n <span>acc</span> <span>||</span> <span>PackageSet</span><span>.</span><span>mem</span> <span>v</span> 
<span>(</span><span>PackageMap</span><span>.</span><span>find</span> <span>o</span> <span>dependencies</span><span>)</span>\n <span>)</span> <span>others</span> <span>false</span> <span>|></span> <span>not</span>\n <span>)</span> <span>u</span>\n <span>);;</span>\n</code></pre></div></div>\n\n<p>Let\u2019s create a quick print function and then test the code:</p>\n\n<div><div><pre><code><span>let</span> <span>print</span> <span>=</span> <span>PackageMap</span><span>.</span><span>iter</span> <span>(</span><span>fun</span> <span>p</span> <span>deps</span> <span>-></span>\n <span>print_endline</span> <span>(</span><span>p</span> <span>^</span> <span>\": \"</span> <span>^</span> <span>(</span><span>PackageSet</span><span>.</span><span>to_list</span> <span>deps</span> <span>|></span> <span>String</span><span>.</span><span>concat</span> <span>\",\"</span><span>))</span>\n<span>);;</span>\n</code></pre></div></div>\n\n<p>The original solution is</p>\n\n<div><div><pre><code><span>#</span> <span>print</span> <span>dune</span><span>;;</span>\n<span>base</span><span>-</span><span>threads</span><span>.</span><span>base</span><span>:</span>\n<span>base</span><span>-</span><span>unix</span><span>.</span><span>base</span><span>:</span>\n<span>dune</span><span>:</span> <span>base</span><span>-</span><span>threads</span><span>.</span><span>base</span><span>,</span><span>base</span><span>-</span><span>unix</span><span>.</span><span>base</span><span>,</span><span>ocaml</span>\n<span>ocaml</span><span>:</span> <span>ocaml</span><span>-</span><span>config</span><span>,</span><span>ocaml</span><span>-</span><span>variants</span>\n<span>ocaml</span><span>-</span><span>config</span><span>:</span> <span>ocaml</span><span>-</span><span>variants</span>\n<span>ocaml</span><span>-</span><span>variants</span><span>:</span>\n<span>-</span> <span>:</span> <span>unit</span> <span>=</span> <span>()</span>\n</code></pre></div></div>\n\n<p>And the reduced solution is:</p>\n\n<div><div><pre><code><span>#</span> <span>let</span> <span>dependencies</span> <span>=</span> <span>pkg_deps</span> <span>dune</span> <span>(</span><span>topological_sort</span> <span>dune</span><span>);;</span>\n<span>val</span> <span>dependencies</span> <span>:</span> <span>PackageSet</span><span>.</span><span>t</span> <span>PackageMap</span><span>.</span><span>t</span> <span>=</span> <span><</span><span>abstr</span><span>></span>\n<span>#</span> <span>print</span> <span>(</span><span>reduce</span> <span>dependencies</span> <span>dune</span><span>);;</span>\n<span>base</span><span>-</span><span>threads</span><span>.</span><span>base</span><span>:</span>\n<span>base</span><span>-</span><span>unix</span><span>.</span><span>base</span><span>:</span>\n<span>dune</span><span>:</span> <span>base</span><span>-</span><span>threads</span><span>.</span><span>base</span><span>,</span><span>base</span><span>-</span><span>unix</span><span>.</span><span>base</span><span>,</span><span>ocaml</span>\n<span>ocaml</span><span>:</span> <span>ocaml</span><span>-</span><span>config</span>\n<span>ocaml</span><span>-</span><span>config</span><span>:</span> <span>ocaml</span><span>-</span><span>variants</span>\n<span>ocaml</span><span>-</span><span>variants</span><span>:</span>\n<span>-</span> <span>:</span> <span>unit</span> <span>=</span> <span>()</span>\n</code></pre></div></div>\n\n<p>This doesn\u2019t look like much of a difference, but when applied to a larger graph, for example, 0install.2.18, the reduction is quite dramatic.</p>\n\n<p>Initial graph</p>\n\n<p><img 
alt=\"opam installation graph for 0install\" src=\"https://www.tunbury.org/images/0install-graph.png\"></p>\n\n<p>Transitive reduction</p>\n\n<p><img alt=\"Transitive reduction of the opam installation graph for 0install\" src=\"https://www.tunbury.org/images/0install-reduced-graph.png\"></p>",
+20
mte/2025_06_24_opam2web.json
+20
mte/2025_06_24_opam2web.json
···+"summary": "The opam2web image for opam.ocaml.org is huge weighing in at more than 25 GB. The bulk of this data is opam archives, which are updated and copied into a stock caddy image.",+"content": "<p>The opam2web image for <a href=\"https://opam.ocaml.org\">opam.ocaml.org</a> is huge weighing in at more than 25 GB. The bulk of this data is opam archives, which are updated and copied into a stock caddy image.</p>\n\n<p>There are two archives, <code>ocaml/opam.ocaml.org-legacy</code>, which hasn\u2019t changed for 5 years and holds the cache for opam 1.x and <code>ocaml/opam:archive</code>, which is updated weekly.</p>\n\n<p>The current <code>Dockerfile</code> copies these files into a new layer each time opam2web builds.</p>\n\n<div><div><pre><code><span>FROM</span><span> </span><span>--platform=linux/amd64 ocaml/opam:archive</span><span> </span><span>as</span><span> </span><span>opam-archive</span>\n<span>FROM</span><span> </span><span>ocaml/opam.ocaml.org-legacy</span><span> </span><span>as</span><span> </span><span>opam-legacy</span>\n<span>FROM</span><span> </span><span>alpine:3.20</span><span> </span><span>as</span><span> </span><span>opam2web</span>\n...\n<span>COPY</span><span> --from=opam-legacy . /www</span>\n...\n<span>RUN </span><span>--mount</span><span>=</span><span>type</span><span>=</span><span>bind</span>,target<span>=</span>/cache,from<span>=</span>opam-archive rsync <span>-aH</span> /cache/cache/ /www/cache/\n...\n</code></pre></div></div>\n\n<p>And later, the entire <code>/www</code> structure is copied into a <code>caddy:2.8.4</code> image.</p>\n\n<div><div><pre><code><span>FROM</span><span> caddy:2.8.4</span>\n<span>WORKDIR</span><span> /srv</span>\n<span>COPY</span><span> --from=opam2web /www /usr/share/caddy</span>\n<span>COPY</span><span> Caddyfile /etc/caddy/Caddyfile</span>\n<span>ENTRYPOINT</span><span> [\"caddy\", \"run\", \"--config\", \"/etc/caddy/Caddyfile\", \"--adapter\", \"caddyfile\"]</span>\n</code></pre></div></div>\n\n<p>This method is considered \u201cbest practice\u201d when creating Docker images, but in this case, it produces a very large image, which takes a long time to deploy.</p>\n\n<p>For Docker to use an existing layer, we need the final <code>FROM ...</code> to be the layer we want to use as the base. 
In the above snippet, the <code>caddy:2.8.4</code> layer will be the base layer and will be reused.</p>\n\n<p>The archive, <code>ocaml/opam:archive</code>, is created by this Dockerfile, which ultimately uses <code>alpine:latest</code>.</p>\n\n<div><div><pre><code><span>FROM</span><span> </span><span>ocaml/opam:archive</span><span> </span><span>AS</span><span> </span><span>opam-archive</span>\n<span>FROM</span><span> </span><span>ocurrent/opam-staging@sha256:f921cd51dda91f61a52a2c26a8a188f8618a2838e521d3e4afa3ca1da637903e</span><span> </span><span>AS</span><span> </span><span>archive</span>\n<span>WORKDIR</span><span> /home/opam/opam-repository</span>\n<span>RUN </span><span>--mount</span><span>=</span><span>type</span><span>=</span><span>bind</span>,target<span>=</span>/cache,from<span>=</span>opam-archive rsync <span>-aH</span> /cache/cache/ /home/opam/opam-repository/cache/\n<span>RUN </span>opam admin cache <span>--link</span><span>=</span>/home/opam/opam-repository/cache\n\n<span>FROM</span><span> alpine:latest</span>\n<span>COPY</span><span> --chown=0:0 --from=archive [ \"/home/opam/opam-repository/cache\", \"/cache\" ]</span>\n</code></pre></div></div>\n\n<p>In our opam2web build, we could use <code>FROM ocaml/opam:archive</code> and then <code>apk add caddy</code>, which would reuse the entire 15GB layer and add the few megabytes for <code>caddy</code>.</p>\n\n<p><code>ocaml/opam.ocaml.org-legacy</code> is another 8GB. This legacy data could be integrated by adding it to <code>ocaml/opam:archive</code> in a different directory to ensure compatibility with anyone else using this image. This is <a href=\"https://github.com/ocurrent/docker-base-images/pull/324\">PR#324</a></p>\n\n<div><div><pre><code> <span>let</span> <span>install_package_archive</span> <span>opam_image</span> <span>=</span>\n <span>let</span> <span>open</span> <span>Dockerfile</span> <span>in</span>\n<span>+</span> <span>from</span> <span>~</span><span>alias</span><span>:</span><span>\"opam-legacy\"</span> <span>\"ocaml/opam.ocaml.org-legacy\"</span> <span>@@</span>\n <span>from</span> <span>~</span><span>alias</span><span>:</span><span>\"opam-archive\"</span> <span>\"ocaml/opam:archive\"</span> <span>@@</span>\n <span>from</span> <span>~</span><span>alias</span><span>:</span><span>\"archive\"</span> <span>opam_image</span> <span>@@</span>\n <span>workdir</span> <span>\"/home/opam/opam-repository\"</span> <span>@@</span>\n <span>run</span> <span>~</span><span>mounts</span><span>:</span><span>[</span><span>mount_bind</span> <span>~</span><span>target</span><span>:</span><span>\"/cache\"</span> <span>~</span><span>from</span><span>:</span><span>\"opam-archive\"</span> <span>()</span><span>]</span> <span>\"rsync -aH /cache/cache/ /home/opam/opam-repository/cache/\"</span> <span>@@</span>\n <span>run</span> <span>\"opam admin cache --link=/home/opam/opam-repository/cache\"</span> <span>@@</span>\n <span>from</span> <span>\"alpine:latest\"</span> <span>@@</span>\n<span>+</span> <span>copy</span> <span>~</span><span>chown</span><span>:</span><span>\"0:0\"</span> <span>~</span><span>from</span><span>:</span><span>\"opam-legacy\"</span> <span>~</span><span>src</span><span>:</span><span>[</span><span>\"/\"</span><span>]</span> <span>~</span><span>dst</span><span>:</span><span>\"/legacy\"</span> <span>()</span> <span>@@</span>\n <span>copy</span> <span>~</span><span>chown</span><span>:</span><span>\"0:0\"</span> <span>~</span><span>from</span><span>:</span><span>\"archive\"</span> 
<span>~</span><span>src</span><span>:</span><span>[</span><span>\"/home/opam/opam-repository/cache\"</span><span>]</span> <span>~</span><span>dst</span><span>:</span><span>\"/cache\"</span> <span>()</span>\n</code></pre></div></div>\n\n<p>Finally, we need to update <a href=\"https://github.com/ocaml-opam/opam2web\">opam2web</a> to use <code>ocaml/opam:archive</code> as the base layer rather than <code>caddy:2.8.4</code>, resulting in the final part of the <code>Dockerfile</code> looking like this.</p>\n\n<div><div><pre><code><span>FROM</span><span> ocaml/opam:archive</span>\n<span>RUN </span>apk add <span>--update</span> git curl rsync libstdc++ rdfind caddy\n<span>COPY</span><span> --from=build-opam2web /opt/opam2web /usr/local</span>\n<span>COPY</span><span> --from=build-opam-doc /usr/bin/opam-dev /usr/local/bin/opam</span>\n<span>COPY</span><span> --from=build-opam-doc /opt/opam/doc /usr/local/share/opam2web/content/doc</span>\n<span>COPY</span><span> ext/key/opam-dev-team.pgp /www/opam-dev-pubkey.pgp</span>\n<span>ADD</span><span> bin/opam-web.sh /usr/local/bin</span>\n<span>ARG</span><span> DOMAIN=opam.ocaml.org</span>\n<span>ARG</span><span> OPAM_REPO_GIT_SHA=master</span>\n<span>ARG</span><span> BLOG_GIT_SHA=master</span>\n<span>RUN </span><span>echo</span> <span>${</span><span>OPAM_REPO_GIT_SHA</span><span>}</span> <span>>></span> /www/opam_git_sha\n<span>RUN </span><span>echo</span> <span>${</span><span>BLOG_GIT_SHA</span><span>}</span> <span>>></span> /www/blog_git_sha\n<span>RUN </span>/usr/local/bin/opam-web.sh <span>${</span><span>DOMAIN</span><span>}</span> <span>${</span><span>OPAM_REPO_GIT_SHA</span><span>}</span> <span>${</span><span>BLOG_GIT_SHA</span><span>}</span>\n<span>WORKDIR</span><span> /srv</span>\n<span>COPY</span><span> Caddyfile /etc/caddy/Caddyfile</span>\n<span>ENTRYPOINT</span><span> [\"caddy\", \"run\", \"--config\", \"/etc/caddy/Caddyfile\", \"--adapter\", \"caddyfile\"]</span>\n</code></pre></div></div>\n\n<p>I acknowledge that this final image now contains some extra unneeded packages such as <code>git</code>, <code>curl</code>, etc, but this seems a minor inconvenience.</p>\n\n<p>The <code>Caddyfile</code> can be adjusted to make everything still appear to be in the same place:</p>\n\n<div><div><pre><code>:80 {\n\tredir /install.sh https://raw.githubusercontent.com/ocaml/opam/master/shell/install.sh\n\tredir /install.ps1 https://raw.githubusercontent.com/ocaml/opam/master/shell/install.ps1\n\n\t@version_paths path /1.1/* /1.2.0/* /1.2.2/*\n\thandle @version_paths {\n\t\troot * /legacy\n\t\tfile_server\n\t}\n\n\thandle /cache/* {\n\t\troot * /\n\t\tfile_server\n\t}\n\n\thandle {\n\t\troot * /www\n\t\tfile_server\n\t}\n}\n</code></pre></div></div>\n\n<p>In this configuration, the Docker <em>push</em> is only 650MB rather than 25GB.</p>\n\n<p>The changes to opam2web are in <a href=\"https://github.com/ocaml-opam/opam2web/pull/245\">PR#245</a></p>\n\n<p>Test with some external URLs:</p>\n\n<ul>\n <li><a href=\"https://staging.opam.ocaml.org/index.tar.gz\">https://staging.opam.ocaml.org/index.tar.gz</a></li>\n <li><a href=\"https://staging.opam.ocaml.org/archives/0install.2.18/0install-2.18.tbz\">https://staging.opam.ocaml.org/archives/0install.2.18/0install-2.18.tbz</a></li>\n <li><a href=\"https://staging.opam.ocaml.org/cache/0install.2.18/0install-2.18.tbz\">https://staging.opam.ocaml.org/cache/0install.2.18/0install-2.18.tbz</a></li>\n <li><a 
href=\"https://staging.opam.ocaml.org/1.2.2/archives/0install.2.12.3+opam.tar.gz\">https://staging.opam.ocaml.org/1.2.2/archives/0install.2.12.3+opam.tar.gz</a></li>\n <li><a href=\"https://staging.opam.ocaml.org/1.2.0/archives/0install.2.12.1+opam.tar.gz\">https://staging.opam.ocaml.org/1.2.0/archives/0install.2.12.1+opam.tar.gz</a></li>\n <li><a href=\"https://staging.opam.ocaml.org/1.1/archives/0install.2.10+opam.tar.gz\">https://staging.opam.ocaml.org/1.1/archives/0install.2.10+opam.tar.gz</a></li>\n <li><a href=\"https://staging.opam.ocaml.org/opam_git_sha\">https://staging.opam.ocaml.org/opam_git_sha</a></li>\n <li><a href=\"https://staging.opam.ocaml.org/blog_git_sha\">https://staging.opam.ocaml.org/blog_git_sha</a></li>\n <li><a href=\"https://staging.opam.ocaml.org/opam-dev-pubkey.pgp\">https://staging.opam.ocaml.org/opam-dev-pubkey.pgp</a></li>\n</ul>",
+20
mte/2025_06_27_windows-containerd-3.json
+20
mte/2025_06_27_windows-containerd-3.json
···+"summary": "Everything was going fine until I ran out of disk space. My NVMe, C: drive, is only 256GB, but I have a large, 1.7TB SSD available as D:. How trivial, change a few paths and carry on, but it wasn\u2019t that simple, or was it?",+"content": "<p>Everything was going fine until I ran out of disk space. My NVMe, <code>C:</code> drive, is only 256GB, but I have a large, 1.7TB SSD available as <code>D:</code>. How trivial, change a few paths and carry on, but it wasn\u2019t that simple, or was it?</p>\n\n<p>Distilling the problem down to the minimum and excluding all code written by me, the following command fails, but changing <code>src=d:\\cache\\opam</code> to <code>src=c:\\cache\\opam</code> works. It\u2019s not the content, as it\u2019s just an empty folder.</p>\n\n<pre><code>ctr run --rm --cni -user ContainerAdministrator -mount type=bind,src=d:\\cache\\opam,dst=c:\\Users\\ContainerAdministrator\\AppData\\Local\\opam mcr.microsoft.com/windows/servercore:ltsc2022 my-container cmd /c \"curl.exe -L -o c:\\Windows\\opam.exe https://github.com/ocaml/opam/releases/download/2.3.0/opam-2.3.0-x86_64-windows.exe && opam.exe init --debug-level=3 -y\"\n</code></pre>\n\n<p>The failure point is the ability to create the lock file <code>config.lock</code>. Checking the code, the log entry is written before the lock is acquired. If <code>c:\\Users\\ContainerAdministrator\\AppData\\Local\\opam</code> is not a bind mount, or the bind mount is on <code>C:</code>, then it works.</p>\n\n<div><div><pre><code>01:26.722 CLIENT updating repository state\n01:26.722 GSTATE LOAD-GLOBAL-STATE @ C:\\Users\\ContainerAdministrator\\AppData\\Local\\opam\n01:26.723 SYSTEM LOCK C:\\Users\\ContainerAdministrator\\AppData\\Local\\opam\\lock (none => read)\n01:26.723 SYSTEM LOCK C:\\Users\\ContainerAdministrator\\AppData\\Local\\opam\\config.lock (none => write)\n</code></pre></div></div>\n\n<p>Suffice it to say, I spent a long time trying to resolve this. I\u2019ll mention a couple of interesting points that appeared along the way. 
Firstly, files created on <code>D:</code> effectively appear as hard links, and the Update Sequence Number, USN, is 0.</p>\n\n<div><div><pre><code><span>C:\\</span><span>></span><span> </span><span>fsutil</span><span> </span><span>file</span><span> </span><span>layout</span><span> </span><span>d:\\cache\\opam\\lock</span><span>\n\n</span><span>*********</span><span> </span><span>File</span><span> </span><span>0x000400000001d251</span><span> </span><span>*********</span><span>\n</span><span>File</span><span> </span><span>reference</span><span> </span><span>number</span><span> </span><span>:</span><span> </span><span>0x000400000001d251</span><span>\n</span><span>File</span><span> </span><span>attributes</span><span> </span><span>:</span><span> </span><span>0x00000020:</span><span> </span><span>Archive</span><span>\n</span><span>File</span><span> </span><span>entry</span><span> </span><span>flags</span><span> </span><span>:</span><span> </span><span>0x00000000</span><span>\n</span><span>Link</span><span> </span><span>(</span><span>ParentID:</span><span> </span><span>Name</span><span>)</span><span> </span><span>:</span><span> </span><span>0</span><span>x000c00000000002d:</span><span> </span><span>HLINK</span><span> </span><span>Name</span><span> </span><span>:</span><span> </span><span>\\cache\\opam\\lock</span><span>\n</span><span>...</span><span>\n</span><span>LastUsn</span><span> </span><span>:</span><span> </span><span>0</span><span>\n</span><span>...</span><span>\n</span></code></pre></div></div>\n\n<p>The reason behind this is down to Windows defaults:</p>\n\n<ol>\n <li>Windows still likes to create the legacy 8.3 MS-DOS file names on the system volume, <code>C:</code>, which explains the difference between <code>HLINK</code> and <code>NTFS+DOS</code>. Running <code>fsutil 8dot3name set d: 0</code> will enable the creation of the old-style file names.</li>\n <li>Drive <code>C:</code> has a USN journal created automatically, as it\u2019s required for Windows to operate, but it isn\u2019t created by default on other drives. 
Running <code>fsutil usn createjournal d: m=32000000 a=8000000</code> will create the journal.</li>\n</ol>\n\n<div><div><pre><code><span>C:\\</span><span>></span><span> </span><span>fsutil</span><span> </span><span>file</span><span> </span><span>layout</span><span> </span><span>c:\\cache\\opam\\lock</span><span>\n\n</span><span>*********</span><span> </span><span>File</span><span> </span><span>0x000300000002f382</span><span> </span><span>*********</span><span>\n</span><span>File</span><span> </span><span>reference</span><span> </span><span>number</span><span> </span><span>:</span><span> </span><span>0x000300000002f382</span><span>\n</span><span>File</span><span> </span><span>attributes</span><span> </span><span>:</span><span> </span><span>0x00000020:</span><span> </span><span>Archive</span><span>\n</span><span>File</span><span> </span><span>entry</span><span> </span><span>flags</span><span> </span><span>:</span><span> </span><span>0x00000000</span><span>\n</span><span>Link</span><span> </span><span>(</span><span>ParentID:</span><span> </span><span>Name</span><span>)</span><span> </span><span>:</span><span> </span><span>0</span><span>x000b0000000271d1:</span><span> </span><span>NTFS</span><span>+</span><span>DOS</span><span> </span><span>Name:</span><span> </span><span>\\cache\\opam\\lock</span><span>\n</span><span>...</span><span>\n</span><span>LastUsn</span><span> </span><span>:</span><span> </span><span>16</span><span>,</span><span>897</span><span>,</span><span>595</span><span>,</span><span>224</span><span>\n</span><span>...</span><span>\n</span></code></pre></div></div>\n\n<p>Sadly, neither of these insights makes any difference to my problem. I did notice that <code>containerd</code> 2.1.3 had been released, where I had been using 2.1.1. Upgrading didn\u2019t fix the issue, but it did affect how the network namespaces were created. More later.</p>\n\n<p>I decided to both ignore the problem and try it on another machine. After all, this problem was only a problem because <em>my</em> <code>C:</code> was too small. I created a QEMU VM with a 40GB <code>C:</code> and a 1TB <code>D:</code> and installed everything, and it worked fine with the bind mount on <code>D:</code> even <em>without</em> any of the above tuning and even with <code>D:</code> formatted using ReFS, rather than NTFS.</p>\n\n<p>Trying on another physical machine with a single large spinning disk as <code>C:</code> also worked as anticipated.</p>\n\n<p>In both of these new installations, I used <code>containerd</code> 2.1.3 and noticed that the behaviour I had come to rely upon seemed to have changed. If you recall, in this <a href=\"https://www.tunbury.org/2025/06/14/windows-containerd-2/\">post</a>, I <em>found</em> the network namespace GUID by running <code>ctr run</code> on a standard Windows container and then <code>ctr container info</code> in another window. This no longer worked reliably, as the namespace was removed when the container exited. Perhaps it always should have been?</p>\n\n<p>I need to find out how to create these namespaces. PowerShell has a cmdlet <code>Get-HnsNetwork</code>, but none of the GUID values there match the currently running namespaces I observe from <code>ctr container info</code>. The source code of <a href=\"https://github.com/containerd/containerd\">containerd</a> is on GitHub..</p>\n\n<p>When you pass <code>--cni</code> to the <code>ctr</code> command, it populates the network namespace from <code>NetNewNS</code>. 
Snippet from <code>cmd/ctr/commands/run/run_windows.go</code></p>\n\n<div><div><pre><code> <span>if</span> <span>cliContext</span><span>.</span><span>Bool</span><span>(</span><span>\"cni\"</span><span>)</span> <span>{</span>\n <span>ns</span><span>,</span> <span>err</span> <span>:=</span> <span>netns</span><span>.</span><span>NewNetNS</span><span>(</span><span>\"\"</span><span>)</span>\n <span>if</span> <span>err</span> <span>!=</span> <span>nil</span> <span>{</span>\n <span>return</span> <span>nil</span><span>,</span> <span>err</span>\n <span>}</span>\n <span>opts</span> <span>=</span> <span>append</span><span>(</span><span>opts</span><span>,</span> <span>oci</span><span>.</span><span>WithWindowsNetworkNamespace</span><span>(</span><span>ns</span><span>.</span><span>GetPath</span><span>()))</span>\n <span>}</span>\n</code></pre></div></div>\n\n<p><code>NewNetNS</code> is defined in <code>pkg/netns/netns_windows.go</code></p>\n\n<div><div><pre><code><span>// NetNS holds network namespace for sandbox</span>\n<span>type</span> <span>NetNS</span> <span>struct</span> <span>{</span>\n <span>path</span> <span>string</span>\n<span>}</span>\n\n<span>// NewNetNS creates a network namespace for the sandbox.</span>\n<span>func</span> <span>NewNetNS</span><span>(</span><span>baseDir</span> <span>string</span><span>)</span> <span>(</span><span>*</span><span>NetNS</span><span>,</span> <span>error</span><span>)</span> <span>{</span>\n <span>temp</span> <span>:=</span> <span>hcn</span><span>.</span><span>HostComputeNamespace</span><span>{}</span>\n <span>hcnNamespace</span><span>,</span> <span>err</span> <span>:=</span> <span>temp</span><span>.</span><span>Create</span><span>()</span>\n <span>if</span> <span>err</span> <span>!=</span> <span>nil</span> <span>{</span>\n <span>return</span> <span>nil</span><span>,</span> <span>err</span>\n <span>}</span>\n\n <span>return</span> <span>&</span><span>NetNS</span><span>{</span><span>path</span><span>:</span> <span>hcnNamespace</span><span>.</span><span>Id</span><span>},</span> <span>nil</span>\n<span>}</span>\n</code></pre></div></div>\n\n<p>Following the thread, and cutting out a few steps in the interest of brevity, we end up in <code>vendor/github.com/Microsoft/hcsshim/hcn/zsyscall_windows.go</code> which calls a Win32 API.</p>\n\n<div><div><pre><code><span>func</span> <span>_hcnCreateNamespace</span><span>(</span><span>id</span> <span>*</span><span>_guid</span><span>,</span> <span>settings</span> <span>*</span><span>uint16</span><span>,</span> <span>namespace</span> <span>*</span><span>hcnNamespace</span><span>,</span> <span>result</span> <span>**</span><span>uint16</span><span>)</span> <span>(</span><span>hr</span> <span>error</span><span>)</span> <span>{</span>\n <span>hr</span> <span>=</span> <span>procHcnCreateNamespace</span><span>.</span><span>Find</span><span>()</span>\n <span>if</span> <span>hr</span> <span>!=</span> <span>nil</span> <span>{</span>\n <span>return</span>\n <span>}</span>\n <span>r0</span><span>,</span> <span>_</span><span>,</span> <span>_</span> <span>:=</span> <span>syscall</span><span>.</span><span>SyscallN</span><span>(</span><span>procHcnCreateNamespace</span><span>.</span><span>Addr</span><span>(),</span> <span>uintptr</span><span>(</span><span>unsafe</span><span>.</span><span>Pointer</span><span>(</span><span>id</span><span>)),</span> <span>uintptr</span><span>(</span><span>unsafe</span><span>.</span><span>Pointer</span><span>(</span><span>settings</span><span>)),</span> 
<span>uintptr</span><span>(</span><span>unsafe</span><span>.</span><span>Pointer</span><span>(</span><span>namespace</span><span>)),</span> <span>uintptr</span><span>(</span><span>unsafe</span><span>.</span><span>Pointer</span><span>(</span><span>result</span><span>)))</span>\n <span>if</span> <span>int32</span><span>(</span><span>r0</span><span>)</span> <span><</span> <span>0</span> <span>{</span>\n <span>if</span> <span>r0</span><span>&</span><span>0x1fff0000</span> <span>==</span> <span>0x00070000</span> <span>{</span>\n <span>r0</span> <span>&=</span> <span>0xffff</span>\n <span>}</span>\n <span>hr</span> <span>=</span> <span>syscall</span><span>.</span><span>Errno</span><span>(</span><span>r0</span><span>)</span>\n <span>}</span>\n <span>return</span>\n<span>}</span>\n</code></pre></div></div>\n\n<p>PowerShell provides <code>Get-HnsNamespace</code> to list available namespaces. These <em>are</em> the droids values I\u2019ve been looking for to put in <code>config.json</code>! However, by default there are no cmdlets to create them. The installation PowerShell <a href=\"https://github.com/microsoft/Windows-Containers/blob/Main/helpful_tools/Install-ContainerdRuntime/install-containerd-runtime.ps1\">script</a> for <code>containerd</code> pulls in <a href=\"https://github.com/microsoft/SDN/blob/master/Kubernetes/windows/hns.psm1\">hns.psm1</a> for <code>containerd</code>, has a lot of interesting cmdlets, such as <code>New-HnsNetwork</code>, but not a cmdlet to create a namespace. There is also <a href=\"https://github.com/microsoft/SDN/blob/master/Kubernetes/windows/hns.v2.psm1\">hns.v2.psm1</a>, which does have <code>New-HnsNamespace</code>.</p>\n\n<div><div><pre><code><span>PS</span><span> </span><span>C:\\Users\\Administrator</span><span>></span><span> </span><span>curl.exe</span><span> </span><span>-o</span><span> </span><span>hns.v2.psm1</span><span> </span><span>-L</span><span> </span><span>https://raw.githubusercontent.com/microsoft/SDN/refs/heads/master/Kubernetes/windows/hns.v2.psm1</span><span>\n </span><span>%</span><span> </span><span>Total</span><span> </span><span>%</span><span> </span><span>Received</span><span> </span><span>%</span><span> </span><span>Xferd</span><span> </span><span>Average</span><span> </span><span>Speed</span><span> </span><span>Time</span><span> </span><span>Time</span><span> </span><span>Time</span><span> </span><span>Current</span><span>\n </span><span>Dload</span><span> </span><span>Upload</span><span> </span><span>Total</span><span> </span><span>Spent</span><span> </span><span>Left</span><span> </span><span>Speed</span><span>\n</span><span>100</span><span> </span><span>89329</span><span> </span><span>100</span><span> </span><span>89329</span><span> </span><span>0</span><span> </span><span>0</span><span> </span><span>349</span><span>k</span><span> </span><span>0</span><span> </span><span>--</span><span>:</span><span>--</span><span>:</span><span>--</span><span> </span><span>--</span><span>:</span><span>--</span><span>:</span><span>--</span><span> </span><span>--</span><span>:</span><span>--</span><span>:</span><span>--</span><span> </span><span>353k</span><span>\n\n</span><span>PS</span><span> </span><span>C:\\Users\\Administrator</span><span>></span><span> </span><span>Import-Module</span><span> </span><span>.</span><span>\\hns.v2.psm1</span><span>\n</span><span>WARNING:</span><span> </span><span>The</span><span> </span><span>names</span><span> </span><span>of</span><span> </span><span>some</span><span> </span><span>imported</span><span> 
</span><span>commands</span><span> </span><span>from</span><span> </span><span>the</span><span> </span><span>module</span><span> </span><span>'hns.v2'</span><span> </span><span>include</span><span> </span><span>unapproved</span><span> </span><span>verbs</span><span> </span><span>that</span><span> </span><span>might</span><span> </span><span>make</span><span> </span><span>them</span><span> </span><span>less</span><span> </span><span>discoverable.</span><span> </span><span>To</span><span> </span><span>find</span><span> </span><span>the</span><span> </span><span>commands</span><span> </span><span>with</span><span> </span><span>unapproved</span><span> </span><span>verbs</span><span>,</span><span> </span><span>run</span><span> </span><span>the</span><span> </span><span>Import-Module</span><span> </span><span>command</span><span> </span><span>again</span><span> </span><span>with</span><span> </span><span>the</span><span> </span><span>Verbose</span><span> </span><span>parameter.</span><span> </span><span>For</span><span> </span><span>a</span><span> </span><span>list</span><span> </span><span>of</span><span> </span><span>approved</span><span> </span><span>verbs</span><span>,</span><span> </span><span>type</span><span> </span><span>Get-Verb.</span><span>\n\n</span><span>PS</span><span> </span><span>C:\\Users\\Administrator</span><span>></span><span> </span><span>New-HnsNamespace</span><span>\n</span><span>HcnCreateNamespace</span><span> </span><span>--</span><span> </span><span>HRESULT:</span><span> </span><span>2151350299.</span><span> </span><span>Result:</span><span> </span><span>{</span><span>\"Success\"</span><span>:</span><span>false</span><span>,</span><span>\"Error\"</span><span>:</span><span>\"Invalid JSON document string. &#123;&#123;CreateWithCompartment,UnknownField}}\"</span><span>,</span><span>\"ErrorCode\"</span><span>:</span><span>2151350299</span><span>}</span><span>\n</span><span>At</span><span> </span><span>C:\\Users\\Administrator\\hns.v2.psm1:2392</span><span> </span><span>char:13</span><span>\n</span><span>+</span><span> </span><span>throw</span><span> </span><span>$errString</span><span>\n</span><span>+</span><span> </span><span>~~~~~~~~~~~~~~~~</span><span>\n </span><span>+</span><span> </span><span>CategoryInfo</span><span> </span><span>:</span><span> </span><span>OperationStopped:</span><span> </span><span>(</span><span>HcnCreateNamesp...de</span><span>\":2151350299}:String) [], RuntimeException\n + FullyQualifiedErrorId : HcnCreateNamespace -- HRESULT: 2151350299. Result: {\"</span><span>Success</span><span>\":false,\"</span><span>Error</span><span>\":\"</span><span>Invalid</span><span> </span><span>JSON</span><span> </span><span>document</span><span> </span><span>string.</span><span> </span><span>&</span><span>#123;&#123;CreateWithCompartment,UnknownField}}\",\"ErrorCode\":2151350299}</span><span>\n</span></code></pre></div></div>\n\n<p>With a lot of frustration, I decided to have a go at calling the Win32 API from OCaml. This resulted in <a href=\"https://github.com/mtelvers/hcn-namespace\">mtelvers/hcn-namespace</a>, which allows me to create the namespaces by running <code>hcn-namespace create</code>. These namespaces appear in the output from <code>Get-HnsNamespace</code> and work correctly in <code>config.json</code>.</p>\n\n<p>Run <code>hcn-namespace.exe create</code>, and then populate <code>\"networkNamespace\": \"<GUID>\"</code> with the GUID provided and run with <code>ctr run --rm -cni --config config.json</code>.</p>",
+20
mte/2025_07_01_ocaml-functors.json
···+"summary": "In my OCaml project, I\u2019d like to abstract away the details of running containers into specific modules based on the OS. Currently, I have working container setups for Windows and Linux, and I\u2019ve haphazardly peppered if Sys.win32 then where I need differentiation, but this is OCaml, so let us use functors!",+"content": "<p>In my OCaml project, I\u2019d like to abstract away the details of running containers into specific modules based on the OS. Currently, I have working container setups for Windows and Linux, and I\u2019ve haphazardly peppered <code>if Sys.win32 then</code> where I need differentiation, but this is OCaml, so let us use <em>functors</em>!</p>\n\n<p>I started by fleshing out the bare bones in a new project. After <code>dune init project functor</code>, I created <code>bin/s.ml</code> containing the signature of the module <code>CONTAINER</code>.</p>\n\n<div><div><pre><code><span>module</span> <span>type</span> <span>CONTAINER</span> <span>=</span> <span>sig</span>\n <span>val</span> <span>run</span> <span>:</span> <span>string</span> <span>-></span> <span>unit</span>\n<span>end</span>\n</code></pre></div></div>\n\n<p>Then a trivial <code>bin/linux.ml</code>.</p>\n\n<div><div><pre><code><span>let</span> <span>run</span> <span>s</span> <span>=</span> <span>Printf</span><span>.</span><span>printf</span> <span>\"Linux container '%s'</span><span>\\n</span><span>\"</span> <span>s</span>\n</code></pre></div></div>\n\n<p>And <code>bin/windows.ml</code>.</p>\n\n<div><div><pre><code><span>let</span> <span>run</span> <span>s</span> <span>=</span> <span>Printf</span><span>.</span><span>printf</span> <span>\"Windows container '%s'</span><span>\\n</span><span>\"</span> <span>s</span>\n</code></pre></div></div>\n\n<p>Then in <code>bin/main.ml</code>, I can select the container system once and from then on use <code>Container.foo</code> to run the appropriate OS specific function.</p>\n\n<div><div><pre><code><span>let</span> <span>container</span> <span>=</span> <span>if</span> <span>Sys</span><span>.</span><span>win32</span> <span>then</span> <span>(</span><span>module</span> <span>Windows</span> <span>:</span> <span>S</span><span>.</span><span>CONTAINER</span><span>)</span> <span>else</span> <span>(</span><span>module</span> <span>Linux</span> <span>:</span> <span>S</span><span>.</span><span>CONTAINER</span><span>)</span>\n\n<span>module</span> <span>Container</span> <span>=</span> <span>(</span><span>val</span> <span>container</span><span>)</span>\n\n<span>let</span> <span>()</span> <span>=</span> <span>Container</span><span>.</span><span>run</span> <span>\"Hello, World!\"</span>\n</code></pre></div></div>\n\n<p>You can additionally create <code>windows.mli</code> and <code>linux.mli</code> containing simply <code>include S.CONTAINER</code>.</p>\n\n<p>Now, let\u2019s imagine that we needed to have some specific configuration options depending upon whether we are running on Windows or Linux. For demonstration purposes, let\u2019s use the user account. 
On Windows, this is a string, typically <code>ContainerAdministrator</code>, whereas on Linux, it\u2019s an integer UID of value 0.</p>\n\n<p>We can update the module type in <code>bin/s.ml</code> to include the type <code>t</code>, and add an <code>init</code> function to return a <code>t</code> and add <code>t</code> as a parameter to <code>run</code>.</p>\n\n<div><div><pre><code><span>module</span> <span>type</span> <span>CONTAINER</span> <span>=</span> <span>sig</span>\n <span>type</span> <span>t</span>\n\n <span>val</span> <span>init</span> <span>:</span> <span>unit</span> <span>-></span> <span>t</span>\n <span>val</span> <span>run</span> <span>:</span> <span>t</span> <span>-></span> <span>string</span> <span>-></span> <span>unit</span>\n<span>end</span>\n</code></pre></div></div>\n\n<p>In <code>bin/linux.ml</code>, we can add the type and define <code>uid</code> as an integer, then add the <code>init</code> function to return the populated structure. <code>run</code> now accepts <code>t</code> as the first parameter.</p>\n\n<div><div><pre><code><span>type</span> <span>t</span> <span>=</span> <span>{</span>\n <span>uid</span> <span>:</span> <span>int</span><span>;</span>\n<span>}</span>\n\n<span>let</span> <span>init</span> <span>()</span> <span>=</span> <span>{</span> <span>uid</span> <span>=</span> <span>0</span> <span>}</span>\n\n<span>let</span> <span>run</span> <span>t</span> <span>s</span> <span>=</span> <span>Printf</span><span>.</span><span>printf</span> <span>\"Linux container user id %i says '%s'</span><span>\\n</span><span>\"</span> <span>t</span><span>.</span><span>uid</span> <span>s</span>\n</code></pre></div></div>\n\n<p>In a similar vein, <code>bin/windows.ml</code> is updated like this</p>\n\n<div><div><pre><code><span>type</span> <span>t</span> <span>=</span> <span>{</span>\n <span>username</span> <span>:</span> <span>string</span><span>;</span>\n<span>}</span>\n\n<span>let</span> <span>init</span> <span>()</span> <span>=</span> <span>{</span> <span>username</span> <span>=</span> <span>\"ContainerAdministrator\"</span> <span>}</span>\n\n<span>let</span> <span>run</span> <span>t</span> <span>s</span> <span>=</span> <span>Printf</span><span>.</span><span>printf</span> <span>\"Windows container user name %s says '%s'</span><span>\\n</span><span>\"</span> <span>t</span><span>.</span><span>username</span> <span>s</span>\n</code></pre></div></div>\n\n<p>And finally, in <code>bin/main.ml</code> we run <code>Container.init ()</code> and use the returned type as a parameter to <code>Container.run</code>.</p>\n\n<div><div><pre><code><span>let</span> <span>container</span> <span>=</span> <span>if</span> <span>Sys</span><span>.</span><span>win32</span> <span>then</span> <span>(</span><span>module</span> <span>Windows</span> <span>:</span> <span>S</span><span>.</span><span>CONTAINER</span><span>)</span> <span>else</span> <span>(</span><span>module</span> <span>Linux</span> <span>:</span> <span>S</span><span>.</span><span>CONTAINER</span><span>)</span>\n\n<span>module</span> <span>Container</span> <span>=</span> <span>(</span><span>val</span> <span>container</span><span>)</span>\n\n<span>let</span> <span>c</span> <span>=</span> <span>Container</span><span>.</span><span>init</span> <span>()</span>\n<span>let</span> <span>()</span> <span>=</span> <span>Container</span><span>.</span><span>run</span> <span>c</span> <span>\"Hello, World!\"</span>\n</code></pre></div></div>",
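<p>As a follow-on, the same signature also works with a functor proper, which is convenient when several components need to be built from the selected module. This is only a sketch layered on the <code>S.CONTAINER</code> signature and the <code>Container</code> module above; the <code>Make</code> functor and <code>greet</code> function are illustrative names, not part of the project.</p>

<div><div><pre><code>(* A functor parameterised over any CONTAINER implementation. *)
module Make (C : S.CONTAINER) = struct
  let greet msg =
    let t = C.init () in
    C.run t msg
end

(* Apply it to the module selected at runtime in main.ml. *)
module App = Make (Container)

let () = App.greet "Hello from a functor!"
</code></pre></div></div>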
+20
mte/2025_07_02_bon-in-a-box.json
···+"summary": "On a suggestion from Michael, I have had a quick look at BON in a Box, which is a web-based biodiversity analysis platform using Docker containerised pipelines running R, Julia, and Python scripts.",+"content": "<p>On a suggestion from Michael, I have had a quick look at <a href=\"https://geo-bon.github.io/bon-in-a-box-pipeline-engine/\">BON in a Box</a>, which is a web-based biodiversity analysis platform using Docker containerised pipelines running R, Julia, and Python scripts.</p>\n\n<p>It couldn\u2019t be easier to get started. Install Docker and Docker Compose, and make sure you can access GitHub via SSH using a public key. [Run <code>ssh-keygen -t ed25519</code> and then publish the resulting <code>~/.ssh/id_ed25519.pub</code> to your GitHub account.]</p>\n\n<div><div><pre><code>apt <span>install </span>docker.io docker-compose-v2\n</code></pre></div></div>\n\n<p>Clone the GEO-BON\u2019s repository and make a working copy of the <code>runner.env</code> file. This file can be edit to add API keys of datasets, but I don\u2019t have any so the default file is fine.</p>\n\n<div><div><pre><code>git clone git@github.com:GEO-BON/bon-in-a-box-pipelines.git\n<span>cd </span>bon-in-a-box\n<span>cp </span>runner-sample.env runner.env\n</code></pre></div></div>\n\n<p>To start the server run <code>./server-up.sh</code>. There is also <code>./server-down.sh</code> to stop the server.</p>\n\n<p>The first run downloads the required Docker containers so takes a few minutes. Once complete visit <a href=\"http://localhost\">http://localhost</a> to see the web GUI.</p>\n\n<p>I ran the \u201cGet Country Polygon\u201d script, creating a nice Colombia polygon.</p>\n\n<p>There is a drag and drop pipeline editor which felt a lot like Microsoft Access.</p>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/geobon-pipeline.png\"></p>\n\n<p>I followed along with the tutorial and created an R script and a YAML file of the same name in the <code>/scripts</code> directory. These appeared in the GUI, allowing me to run them and use them in the pipeline editor. 
Annoyingly, the dataset was not provided in the tutorial, so I couldn\u2019t run the code.</p>\n\n<p><code>TestScript.R</code></p>\n\n<p>The <code>biab</code> functions are how the script interacts with the BON in a Box system.</p>\n\n<div><div><pre><code><span>library</span><span>(</span><span>rjson</span><span>)</span><span>\n</span><span>library</span><span>(</span><span>sf</span><span>)</span><span>\n</span><span>library</span><span>(</span><span>terra</span><span>)</span><span>\n</span><span>library</span><span>(</span><span>dplyr</span><span>)</span><span>\n</span><span>library</span><span>(</span><span>ggplot2</span><span>)</span><span>\n\n</span><span>input</span><span> </span><span><-</span><span> </span><span>biab_inputs</span><span>()</span><span>\n\n</span><span>dat</span><span> </span><span><-</span><span> </span><span>st_read</span><span>(</span><span>input</span><span>$</span><span>country_polygon</span><span>)</span><span>\n\n</span><span>if</span><span> </span><span>(</span><span>nrow</span><span>(</span><span>dat</span><span>)</span><span>==</span><span>0</span><span>)</span><span> </span><span>{</span><span>\n </span><span>biab_error_stop</span><span>(</span><span>\"Country polygon does not exist\"</span><span>)</span><span>\n</span><span>}</span><span> \n \n</span><span>dat.transformed</span><span> </span><span><-</span><span> </span><span>st_transform</span><span>(</span><span>dat</span><span>,</span><span> </span><span>crs</span><span>=</span><span>input</span><span>$</span><span>crs</span><span>)</span><span>\n\n</span><span>rasters</span><span> </span><span><-</span><span> </span><span>terra</span><span>::</span><span>rast</span><span>(</span><span>c</span><span>(</span><span>input</span><span>$</span><span>rasters</span><span>,</span><span> </span><span>crs</span><span>=</span><span>intput</span><span>$</span><span>crs</span><span>))</span><span>\n\n</span><span>country_vect</span><span> </span><span><-</span><span> </span><span>vect</span><span>(</span><span>dat.transformed</span><span>)</span><span>\n \n</span><span>raster.cropped</span><span> </span><span><-</span><span> </span><span>mask</span><span>(</span><span>rasters</span><span>,</span><span> </span><span>country_vect</span><span>)</span><span> \n \n</span><span>raster_change</span><span> </span><span><-</span><span> </span><span>rasters</span><span>[[</span><span>1</span><span>]]</span><span>-</span><span>rasters</span><span>[[</span><span>2</span><span>]]</span><span>\n\n</span><span>raster_change_path</span><span> </span><span><-</span><span> </span><span>file.path</span><span>(</span><span>outputFolder</span><span>,</span><span> </span><span>\"raster_change.tif\"</span><span>)</span><span>\n</span><span>writeRaster</span><span>(</span><span>raster_change</span><span>,</span><span> </span><span>raster_change_path</span><span>)</span><span>\n\n</span><span>biab_output</span><span>(</span><span>\"raster_change\"</span><span>,</span><span> </span><span>raster_change_path</span><span>)</span><span>\n\n</span><span>layer_means</span><span> </span><span><-</span><span> </span><span>global</span><span>(</span><span>rasters.cropped</span><span>,</span><span> </span><span>fun</span><span>=</span><span>\"mean\"</span><span>,</span><span> </span><span>na.rm</span><span>=</span><span>TRUE</span><span>)</span><span>\n</span><span>layer_means</span><span>$</span><span>name</span><span> </span><span><-</span><span> </span><span>names</span><span>(</span><span>rasters.cropped</span><span>)</span><span>\n 
\n</span><span>means_plot</span><span> </span><span><-</span><span> </span><span>ggplot</span><span>(</span><span>layer_means</span><span>,</span><span> </span><span>aes</span><span>(</span><span>x</span><span>=</span><span>name</span><span>,</span><span> </span><span>y</span><span>=</span><span>mean</span><span>))</span><span> </span><span>+</span><span> </span><span>geom_point</span><span>()</span><span>\n \n</span><span>means_plot_path</span><span> </span><span><-</span><span> </span><span>file.path</span><span>(</span><span>outputFolder</span><span>,</span><span> </span><span>\"means_plot.png\"</span><span>)</span><span>\n</span><span>ggsave</span><span>(</span><span>means_plot_path</span><span>,</span><span> </span><span>means_plot</span><span>)</span><span>\n \n</span><span>biab_output</span><span>(</span><span>\"means_plot\"</span><span>,</span><span> </span><span>means_plot_path</span><span>)</span><span>\n</span></code></pre></div></div>\n\n<p><code>TestScript.yaml</code></p>\n\n<p>The <code>inputs</code> and <code>outputs</code> section defines the inputs and outputs, where the names must match the names in the script above. The environment is set up using conda. A specific version can be specified like this: <code>r-terra=0.9-12</code></p>\n\n<div><div><pre><code><span>script</span><span>:</span> <span>TestScript.R</span>\n<span>name</span><span>:</span> <span>Test script</span>\n<span>description</span><span>:</span> <span>Demo script</span>\n<span>author</span><span>:</span>\n <span>-</span> <span>name</span><span>:</span> <span>ME</span>\n<span>inputs</span><span>:</span>\n <span>country_ploygon</span><span>:</span>\n <span>label</span><span>:</span> <span>Country Polygon</span>\n <span>description</span><span>:</span> <span>Polygon of the country of interest</span>\n <span>type</span><span>:</span> <span>application/geo+json</span>\n <span>example</span><span>:</span> <span>null</span>\n <span>crs</span><span>:</span>\n <span>label</span><span>:</span> <span>Coordinate reference system</span>\n <span>description</span><span>:</span> <span>Coordinate reference system</span>\n <span>type</span><span>:</span> <span>text</span>\n <span>example</span><span>:</span> <span>\"</span><span>EPSG:3857\"</span>\n <span>rasters</span><span>:</span>\n <span>label</span><span>:</span> <span>Rasters</span>\n <span>description</span><span>:</span> <span>Raster layers of variable of interest</span>\n <span>type</span><span>:</span> <span>image/tiff;application=geotiff[]</span>\n <span>example</span><span>:</span> <span>null</span> \n<span>outputs</span><span>:</span>\n <span>raster_change</span><span>:</span>\n <span>label</span><span>:</span> <span>Rasters</span>\n <span>description</span><span>:</span> <span>Differences between raster values</span>\n <span>type</span><span>:</span> <span>image/tiff;application=geotiff</span>\n <span>means_plot</span><span>:</span>\n <span>label</span><span>:</span> <span>Plot of raster means</span>\n <span>description</span><span>:</span> <span>Plot of means of raster layers</span>\n <span>type</span><span>:</span> <span>image/png</span>\n<span>conda</span><span>:</span>\n <span>channels</span><span>:</span>\n <span>-</span> <span>conda-forge</span>\n <span>-</span> <span>r</span>\n <span>dependencies</span><span>:</span>\n <span>-</span> <span>r-rjson</span>\n <span>-</span> <span>r-sf</span>\n <span>-</span> <span>r-dplyr</span>\n <span>-</span> <span>r-terra</span>\n <span>-</span> <span>r-ggplot2</span>\n</code></pre></div></div>\n\n<p>The architecture 
appears to be designed as a single-server instance without built-in job queuing or concurrent execution limits.</p>",
+20
mte/2025_07_07_refs-monteverde.json
···+"summary": "In addition to the post from last week covering BON in a Box and OCaml Functors, below are some additional notes.",+"content": "<p>In addition to the post from last week covering <a href=\"https://www.tunbury.org/2025/07/02/bon-in-a-box/\">BON in a Box</a> and <a href=\"https://www.tunbury.org/2025/07/01/ocaml-functors/\">OCaml Functors</a>, below are some additional notes.</p>\n\n<h1>Resilient File System, ReFS</h1>\n\n<p>I have previously stated that <a href=\"https://www.tunbury.org/windows-reflinks\">ReFS</a> supports 1 million hard links per file; however, this is not the case. The maximum is considerably lower at 8191. That\u2019s eight times more than NTFS, but still not very many.</p>\n\n<div><div><pre><code><span>PS</span><span> </span><span>D:\\</span><span>></span><span> </span><span>touch</span><span> </span><span>foo</span><span>\n</span><span>PS</span><span> </span><span>D:\\</span><span>></span><span> </span><span>foreach</span><span> </span><span>(</span><span>$i</span><span> </span><span>in</span><span> </span><span>1</span><span>..</span><span>8192</span><span>)</span><span> </span><span>{</span><span>\n</span><span>>></span><span> </span><span>New-Item</span><span> </span><span>-ItemType</span><span> </span><span>HardLink</span><span> </span><span>-Path</span><span> </span><span>\"foo-</span><span>$i</span><span>\"</span><span> </span><span>-Target</span><span> </span><span>\"foo\"</span><span>\n</span><span>>></span><span> </span><span>}</span><span>\n\n\n </span><span>Directory:</span><span> </span><span>D:\\</span><span>\n\n\n</span><span>Mode</span><span> </span><span>LastWriteTime</span><span> </span><span>Length</span><span> </span><span>Name</span><span>\n</span><span>----</span><span> </span><span>-------------</span><span> </span><span>------</span><span> </span><span>----</span><span>\n</span><span>-a</span><span>----</span><span> </span><span>07</span><span>/07/2025</span><span> </span><span>01:00</span><span> </span><span>0</span><span> </span><span>foo-1</span><span>\n</span><span>-a</span><span>----</span><span> </span><span>07</span><span>/07/2025</span><span> </span><span>01:00</span><span> </span><span>0</span><span> </span><span>foo-2</span><span>\n</span><span>-a</span><span>----</span><span> </span><span>07</span><span>/07/2025</span><span> </span><span>01:00</span><span> </span><span>0</span><span> </span><span>foo-3</span><span>\n</span><span>-a</span><span>----</span><span> </span><span>07</span><span>/07/2025</span><span> </span><span>01:00</span><span> </span><span>0</span><span> </span><span>foo-4</span><span>\n</span><span>...</span><span>\n</span><span>-a</span><span>----</span><span> </span><span>07</span><span>/07/2025</span><span> </span><span>01:00</span><span> </span><span>0</span><span> </span><span>foo-8190</span><span>\n</span><span>-a</span><span>----</span><span> </span><span>07</span><span>/07/2025</span><span> </span><span>01:00</span><span> </span><span>0</span><span> </span><span>foo-8191</span><span>\n</span><span>New-Item</span><span> </span><span>:</span><span> </span><span>An</span><span> </span><span>attempt</span><span> </span><span>was</span><span> </span><span>made</span><span> </span><span>to</span><span> </span><span>create</span><span> </span><span>more</span><span> </span><span>links</span><span> </span><span>on</span><span> </span><span>a</span><span> </span><span>file</span><span> </span><span>than</span><span> </span><span>the</span><span> </span><span>file</span><span> 
</span><span>system</span><span> </span><span>supports</span><span>\n</span><span>At</span><span> </span><span>line:2</span><span> </span><span>char:5</span><span>\n</span><span>+</span><span> </span><span>New-Item</span><span> </span><span>-ItemType</span><span> </span><span>HardLink</span><span> </span><span>-Path</span><span> </span><span>\"foo-</span><span>$i</span><span>\"</span><span> </span><span>-Target</span><span> </span><span>\"foo\"</span><span>\n</span><span>+</span><span> </span><span>~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~</span><span>\n </span><span>+</span><span> </span><span>CategoryInfo</span><span> </span><span>:</span><span> </span><span>NotSpecified:</span><span> </span><span>(:)</span><span> </span><span>[</span><span>New</span><span>-Item</span><span>],</span><span> </span><span>Win32Exception</span><span>\n </span><span>+</span><span> </span><span>FullyQualifiedErrorId</span><span> </span><span>:</span><span> </span><span>System.ComponentModel.Win32Exception</span><span>,</span><span>Microsoft.PowerShell.Commands.NewItemCommand</span><span>\n</span></code></pre></div></div>\n\n<p>I had also investigated ReFS block cloning, which removed the requirement to create hard links, and wrote a <a href=\"https://github.com/mtelvers/ReFS-Clone\">ReFS-clone</a> tool for Windows Server 2022. This works well until containerd is used to bind mount a directory on the volume. Once this has happened, attempts to create a block clone fail. To exclude my code as the root cause, I have tried Windows Server 2025, where commands such as <code>copy</code> and <code>robocopy</code> automatically perform block clones. Block cloning can be restored by rebooting the machine. I note that restarting containerd is not sufficient.</p>\n\n<p>Removing files and folders on ReFS is impressively fast; however, this comes at a cost: freeing the blocks is a background activity that may take some time to be scheduled.</p>\n\n<h1>File system performance with a focus on ZFS</h1>\n\n<p>Several EEG interns started last week with this <a href=\"https://anil.recoil.org/ideas/zfs-filesystem-perf\">project</a> under my supervision. In brief, we will examine file system performance on the filesystems supported by <a href=\"https://github.com/ocurrent/obuilder\">OBuilder</a> before conducting more detailed investigations into factors affecting ZFS performance.</p>\n\n<h1>Monteverde</h1>\n\n<p>monteverde.cl.cam.ac.uk, has been installed in the rack. It has two AMD EPYC 9965 192-Core Processors, giving a total of 384 cores and 768 threads and 3TB of RAM.</p>\n\n<p><img alt=\"\" src=\"https://www.tunbury.org/images/monteverde.jpg\"></p>\n\n<p>From the logs, there are still some teething issues:</p>\n\n<div><div><pre><code>[130451.620482] Large kmem_alloc(98304, 0x1000), please file an issue at:\n https://github.com/openzfs/zfs/issues/new\n[130451.620486] CPU: 51 UID: 0 PID: 8594 Comm: txg_sync Tainted: P O 6.14.0-23-generic #23-Ubuntu\n[130451.620488] Tainted: [P]=PROPRIETARY_MODULE, [O]=OOT_MODULE\n[130451.620489] Hardware name: Dell Inc. 
PowerEdge R7725/0KRFPX, BIOS 1.1.3 02/25/2025\n[130451.620490] Call Trace:\n[130451.620490] <TASK>\n[130451.620492] show_stack+0x49/0x60\n[130451.620493] dump_stack_lvl+0x5f/0x90\n[130451.620495] dump_stack+0x10/0x18\n[130451.620497] spl_kmem_alloc_impl.cold+0x17/0x1c [spl]\n[130451.620503] spl_kmem_zalloc+0x19/0x30 [spl]\n[130451.620508] multilist_create_impl+0x3f/0xc0 [zfs]\n[130451.620586] multilist_create+0x31/0x50 [zfs]\n[130451.620650] dmu_objset_sync+0x4c4/0x4d0 [zfs]\n[130451.620741] dsl_pool_sync_mos+0x34/0xc0 [zfs]\n[130451.620832] dsl_pool_sync+0x3c1/0x420 [zfs]\n[130451.620910] spa_sync_iterate_to_convergence+0xda/0x220 [zfs]\n[130451.620990] spa_sync+0x333/0x660 [zfs]\n[130451.621056] txg_sync_thread+0x1f5/0x270 [zfs]\n[130451.621137] ? __pfx_txg_sync_thread+0x10/0x10 [zfs]\n[130451.621207] ? __pfx_thread_generic_wrapper+0x10/0x10 [spl]\n[130451.621213] thread_generic_wrapper+0x5b/0x70 [spl]\n[130451.621217] kthread+0xf9/0x230\n[130451.621219] ? __pfx_kthread+0x10/0x10\n[130451.621221] ret_from_fork+0x44/0x70\n[130451.621223] ? __pfx_kthread+0x10/0x10\n[130451.621224] ret_from_fork_asm+0x1a/0x30\n[130451.621226] </TASK>\n</code></pre></div></div>",
+20
mte/2025_07_08_unix-or-sys.json
···+"summary": "When you recursively scan a massive directory tree, would you use Sys.readdir or Unix.readdir? My inclination is that Sys.readdir feels more convenient to use, and thus the lower-level Unix.readdir would have the performance edge. Is it significant enough to bother with?",+"content": "<p>When you recursively scan a massive directory tree, would you use <code>Sys.readdir</code> or <code>Unix.readdir</code>? My inclination is that <code>Sys.readdir</code> feels more convenient to use, and thus the lower-level <code>Unix.readdir</code> would have the performance edge. Is it significant enough to bother with?</p>\n\n<p>Quickly coding up the two different options for comparison. Here\u2019s the <code>Unix.readdir</code> version, running <code>Unix.opendir</code> then recursively calling <code>Unix.readdir</code> until the <code>End_of_file</code> exception is raised.</p>\n\n<div><div><pre><code><span>let</span> <span>rec</span> <span>traverse_directory_unix</span> <span>path</span> <span>x</span> <span>=</span>\n <span>let</span> <span>stats</span> <span>=</span> <span>Unix</span><span>.</span><span>lstat</span> <span>path</span> <span>in</span>\n <span>match</span> <span>stats</span><span>.</span><span>st_kind</span> <span>with</span>\n <span>|</span> <span>Unix</span><span>.</span><span>S_REG</span> <span>-></span> <span>x</span> <span>+</span> <span>1</span>\n <span>|</span> <span>S_LNK</span> <span>|</span> <span>S_CHR</span> <span>|</span> <span>S_BLK</span> <span>|</span> <span>S_FIFO</span> <span>|</span> <span>S_SOCK</span> <span>-></span> <span>x</span>\n <span>|</span> <span>S_DIR</span> <span>-></span>\n <span>try</span>\n <span>let</span> <span>dir_handle</span> <span>=</span> <span>Unix</span><span>.</span><span>opendir</span> <span>path</span> <span>in</span>\n <span>let</span> <span>rec</span> <span>read_entries</span> <span>acc</span> <span>=</span>\n <span>try</span>\n <span>match</span> <span>Unix</span><span>.</span><span>readdir</span> <span>dir_handle</span> <span>with</span>\n <span>|</span> <span>\".\"</span> <span>|</span> <span>\"..\"</span> <span>-></span> <span>read_entries</span> <span>acc</span>\n <span>|</span> <span>entry</span> <span>-></span>\n <span>let</span> <span>full_path</span> <span>=</span> <span>Filename</span><span>.</span><span>concat</span> <span>path</span> <span>entry</span> <span>in</span>\n <span>read_entries</span> <span>(</span><span>traverse_directory_unix</span> <span>full_path</span> <span>acc</span><span>)</span>\n <span>with</span> <span>End_of_file</span> <span>-></span>\n <span>Unix</span><span>.</span><span>closedir</span> <span>dir_handle</span><span>;</span>\n <span>acc</span>\n <span>in</span>\n <span>read_entries</span> <span>x</span>\n <span>with</span> <span>_</span> <span>-></span> <span>x</span>\n</code></pre></div></div>\n\n<p>The <code>Sys.readdir</code> version nicely gives us an array so we can idiomatically use <code>Array.fold_left</code>.</p>\n\n<div><div><pre><code><span>let</span> <span>traverse_directory_sys</span> <span>source</span> <span>=</span>\n <span>let</span> <span>rec</span> <span>process_directory</span> <span>s</span> <span>current_source</span> <span>=</span>\n <span>let</span> <span>entries</span> <span>=</span> <span>Sys</span><span>.</span><span>readdir</span> <span>current_source</span> <span>in</span>\n <span>Array</span><span>.</span><span>fold_left</span>\n <span>(</span><span>fun</span> <span>acc</span> <span>entry</span> <span>-></span>\n <span>let</span> <span>source</span> 
<span>=</span> <span>Filename</span><span>.</span><span>concat</span> <span>current_source</span> <span>entry</span> <span>in</span>\n <span>try</span>\n <span>let</span> <span>stat</span> <span>=</span> <span>Unix</span><span>.</span><span>lstat</span> <span>source</span> <span>in</span>\n <span>match</span> <span>stat</span><span>.</span><span>st_kind</span> <span>with</span>\n <span>|</span> <span>Unix</span><span>.</span><span>S_REG</span> <span>-></span> <span>acc</span> <span>+</span> <span>1</span>\n <span>|</span> <span>Unix</span><span>.</span><span>S_DIR</span> <span>-></span> <span>process_directory</span> <span>acc</span> <span>source</span>\n <span>|</span> <span>S_LNK</span> <span>|</span> <span>S_CHR</span> <span>|</span> <span>S_BLK</span> <span>|</span> <span>S_FIFO</span> <span>|</span> <span>S_SOCK</span> <span>-></span> <span>acc</span>\n <span>with</span> <span>Unix</span><span>.</span><span>Unix_error</span> <span>_</span> <span>-></span> <span>acc</span><span>)</span>\n <span>s</span> <span>entries</span>\n <span>in</span>\n <span>process_directory</span> <span>0</span> <span>source</span>\n</code></pre></div></div>\n\n<p>The file system may have a big impact, so I tested NTFS, ReFS, and ext4, running each a couple of times to ensure the cache was primed.</p>\n\n<p><code>Sys.readdir</code> was quicker in my test cases up to 500,000 files. Reaching 750,000 files, <code>Unix.readdir</code> edged ahead. I was surprised by the outcome and wondered whether it was my code rather than the module I used.</p>\n\n<p>Pushing for the result I expected/wanted, I rewrote the function so it more closely mirrors the <code>Sys.readdir</code> version.</p>\n\n<div><div><pre><code><span>let</span> <span>traverse_directory_unix_2</span> <span>path</span> <span>=</span>\n <span>let</span> <span>rec</span> <span>process_directory</span> <span>s</span> <span>path</span> <span>=</span>\n <span>try</span>\n <span>let</span> <span>dir_handle</span> <span>=</span> <span>Unix</span><span>.</span><span>opendir</span> <span>path</span> <span>in</span>\n <span>let</span> <span>rec</span> <span>read_entries</span> <span>acc</span> <span>=</span>\n <span>try</span>\n <span>let</span> <span>entry</span> <span>=</span> <span>Unix</span><span>.</span><span>readdir</span> <span>dir_handle</span> <span>in</span>\n <span>match</span> <span>entry</span> <span>with</span>\n <span>|</span> <span>\".\"</span> <span>|</span> <span>\"..\"</span> <span>-></span> <span>read_entries</span> <span>acc</span>\n <span>|</span> <span>entry</span> <span>-></span>\n <span>let</span> <span>full_path</span> <span>=</span> <span>Filename</span><span>.</span><span>concat</span> <span>path</span> <span>entry</span> <span>in</span>\n <span>let</span> <span>stats</span> <span>=</span> <span>Unix</span><span>.</span><span>lstat</span> <span>full_path</span> <span>in</span>\n <span>match</span> <span>stats</span><span>.</span><span>st_kind</span> <span>with</span>\n <span>|</span> <span>Unix</span><span>.</span><span>S_REG</span> <span>-></span> <span>read_entries</span> <span>(</span><span>acc</span> <span>+</span> <span>1</span><span>)</span>\n <span>|</span> <span>S_LNK</span> <span>|</span> <span>S_CHR</span> <span>|</span> <span>S_BLK</span> <span>|</span> <span>S_FIFO</span> <span>|</span> <span>S_SOCK</span> <span>-></span> <span>read_entries</span> <span>acc</span>\n <span>|</span> <span>S_DIR</span> <span>-></span> <span>read_entries</span> <span>(</span><span>process_directory</span> <span>acc</span> 
<span>full_path</span><span>)</span>\n <span>with</span> <span>End_of_file</span> <span>-></span>\n <span>Unix</span><span>.</span><span>closedir</span> <span>dir_handle</span><span>;</span>\n <span>acc</span>\n <span>in</span>\n <span>read_entries</span> <span>s</span>\n <span>with</span> <span>_</span> <span>-></span> <span>s</span>\n <span>in</span>\n <span>process_directory</span> <span>0</span> <span>path</span>\n</code></pre></div></div>\n\n<p>This version is indeed faster than <code>Sys.readdir</code> in all cases. However, at 750,000 files the speed up was < 0.5%.</p>",
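<p>A simple harness along these lines is enough to drive the comparison. This is a sketch rather than the exact benchmark behind the timings above: it assumes the three traversal functions are in scope and times a single pass over a directory given on the command line.</p>

<div><div><pre><code>(* Illustrative timing wrapper for the traversal functions above. *)
let time name f x =
  let t0 = Unix.gettimeofday () in
  let files = f x in
  Printf.printf "%-20s %8d files %6.2fs\n" name files (Unix.gettimeofday () -. t0)

let () =
  let root = Sys.argv.(1) in
  time "Unix.readdir" (fun p -> traverse_directory_unix p 0) root;
  time "Sys.readdir" traverse_directory_sys root;
  time "Unix.readdir (v2)" traverse_directory_unix_2 root
</code></pre></div></div>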
+19
mte/2025_07_09_jupyter.json
···+"content": "<p>Brief notes on publishing a Jupyter notebook as a Docker container.</p>\n\n<p>My starting point is a GitHub <a href=\"https://github.com/ucam-eo/tessera-interactive-map\">repo</a> containing a Jupyter notebook and a <code>requirements.txt</code>.</p>\n\n<div><div><pre><code>git clone https://github.com/ucam-eo/tessera-interactive-map\n<span>cd </span>tessera-interactive-map\n</code></pre></div></div>\n\n<p>I created a <code>Dockerfile</code> which pulls in a standard Python container. I used 3.11 as that is the minimum version support for <a href=\"https://github.com/ucam-eo/geotessera.git\">https://github.com/ucam-eo/geotessera.git</a></p>\n\n<p><code>pip</code> installs the packages listed in <code>requirements.txt</code> plus the additional <a href=\"https://github.com/ucam-eo/geotessera.git\">geotessera</a> library. The extra library is noted in the <a href=\"https://github.com/ucam-eo/tessera-interactive-map/blob/main/README.md\">README.md</a>.</p>\n\n<div><div><pre><code>FROM python:3.11\nWORKDIR /app\nCOPY <span>.</span> /app\nRUN pip <span>install</span> <span>--no-cache-dir</span> <span>-r</span> requirements.txt\nRUN pip <span>install </span>git+https://github.com/ucam-eo/geotessera.git\nRUN pip <span>install </span>jupyter\nEXPOSE 8888\nENV NAME World\nCMD <span>[</span><span>\"jupyter\"</span>, <span>\"notebook\"</span>, <span>\"--ip=0.0.0.0\"</span>, <span>\"--port=8888\"</span>, <span>\"--no-browser\"</span>, <span>\"--allow-root\"</span><span>]</span>\n</code></pre></div></div>\n\n<p>Build the Docker image.</p>\n\n<div><div><pre><code>docker build <span>-t</span> my-jupyter <span>.</span>\n</code></pre></div></div>\n\n<p>And run the container.</p>\n\n<div><div><pre><code><span># docker run --rm -it -p 8888:8888 my-jupyter</span>\n<span>[</span>I 2025-07-09 16:11:37.739 ServerApp] jupyter_lsp | extension was successfully linked.\n<span>[</span>I 2025-07-09 16:11:37.743 ServerApp] jupyter_server_terminals | extension was successfully linked.\n<span>[</span>I 2025-07-09 16:11:37.746 ServerApp] jupyterlab | extension was successfully linked.\n<span>[</span>I 2025-07-09 16:11:37.749 ServerApp] notebook | extension was successfully linked.\n<span>[</span>I 2025-07-09 16:11:37.751 ServerApp] Writing Jupyter server cookie secret to /root/.local/share/jupyter/runtime/jupyter_cookie_secret\n<span>[</span>I 2025-07-09 16:11:38.089 ServerApp] notebook_shim | extension was successfully linked.\n<span>[</span>I 2025-07-09 16:11:38.102 ServerApp] notebook_shim | extension was successfully loaded.\n<span>[</span>I 2025-07-09 16:11:38.104 ServerApp] jupyter_lsp | extension was successfully loaded.\n<span>[</span>I 2025-07-09 16:11:38.105 ServerApp] jupyter_server_terminals | extension was successfully loaded.\n<span>[</span>I 2025-07-09 16:11:38.107 LabApp] JupyterLab extension loaded from /usr/local/lib/python3.11/site-packages/jupyterlab\n<span>[</span>I 2025-07-09 16:11:38.107 LabApp] JupyterLab application directory is /usr/local/share/jupyter/lab\n<span>[</span>I 2025-07-09 16:11:38.107 LabApp] Extension Manager is <span>'pypi'</span><span>.</span>\n<span>[</span>I 2025-07-09 16:11:38.156 ServerApp] jupyterlab | extension was successfully loaded.\n<span>[</span>I 2025-07-09 16:11:38.159 ServerApp] notebook | extension was successfully loaded.\n<span>[</span>I 2025-07-09 16:11:38.160 ServerApp] Serving notebooks from <span>local </span>directory: /app\n<span>[</span>I 2025-07-09 16:11:38.160 ServerApp] Jupyter Server 2.16.0 is running at:\n<span>[</span>I 2025-07-09 
16:11:38.160 ServerApp] http://0ad4fce9b94e:8888/tree?token<span>=</span>c11c0f007dd99a785ff67331514fb44e87269055952a253b\n<span>[</span>I 2025-07-09 16:11:38.160 ServerApp] http://127.0.0.1:8888/tree?token<span>=</span>c11c0f007dd99a785ff67331514fb44e87269055952a253b\n</code></pre></div></div>\n\n<p>Note the URL in the log output and open it in the browser. You are prompted to enter the token if you don\u2019t specify the token as part of the URL.</p>",
+19
mte/2025_07_10_dune-unfmt.json
···+"summary": "When working across machines, it\u2019s easy to make changes and reconcile them using git. However, I made a mistake and inadvertently ran dune fmt and now my git diff is a total mess.",+"content": "<p>When working across machines, it\u2019s easy to make changes and reconcile them using git. However, I made a mistake and inadvertently ran <code>dune fmt</code> and now my <code>git diff</code> is a total mess.</p>\n\n<p>My thought, to get myself out of this situation, is to go back to the previous commit and create a new branch with no changes other than a <code>dune fmt</code>. I can then cherry-pick my latest work on to that branch which should then give me a clean diff.</p>\n\n<div><div><pre><code>git commit <span>-am</span> <span>'inadvertent reformatted version'</span>\n</code></pre></div></div>\n\n<p>Run <code>git log</code> to find the commit that was just made and the previous one.</p>\n\n<p>Checkout the previous commit and make a new branch, in my case called <code>pre-fmt</code>.</p>\n\n<div><div><pre><code>git checkout <previous commit>\ngit switch <span>-c</span> pre-fmt\n</code></pre></div></div>\n\n<p>Format the code in this branch and commit that version.</p>\n\n<div><div><pre><code>dune <span>fmt\n</span>git commit <span>-am</span> <span>'dune fmt'</span>\n</code></pre></div></div>\n\n<p>Now cherry-pick the original commit.</p>\n\n<div><div><pre><code>git cherry-pick <latest commit>\n</code></pre></div></div>\n\n<p>The cherry-pick reports lots of merge conflicts; however, these should be trivial to resolve but it is a manual process. Once done, add the changed files and finish the cherry-pick.</p>\n\n<div><div><pre><code>git add bin/<span>*</span>.ml\ngit cherry-pick <span>--continue</span>\n</code></pre></div></div>\n\n<p><code>git diff</code> now shows just the actual changes rather than the code formatting changes. Do you have any suggestions on a better workflow?</p>",
+19
mte/2025_07_14_tessera-workshop.json
···+"summary": "I wrote previously about setting up a Jupyter notebook in a Docker container. This worked well for a single user, but we intend to hold a workshop and so need a multi-user setup.",+"content": "<p>I wrote previously about setting up a <a href=\"https://www.tunbury.org/2025/07/09/jupyter/\">Jupyter notebook in a Docker container</a>. This worked well for a single user, but we intend to hold a workshop and so need a multi-user setup.</p>\n\n<p>We would prefer that as much of the per-user setup as possible be completed automatically so participants don\u2019t need to waste time setting up the environment.</p>\n\n<p>There is a great resource at <a href=\"https://github.com/jupyterhub/jupyterhub-the-hard-way/blob/HEAD/docs/installation-guide-hard.md\">jupyterhub/jupyterhub-the-hard-way</a> walking you through the manual setup.</p>\n\n<p>However, there are many Docker containers that we can use as the base, including <code>python:3.11</code>, but I have decided to use <code>jupyter/data science:latest</code>. The containers are expected to be customised with a <code>Dockerfile</code>.</p>\n\n<p>In my <code>Dockerfile</code>, I first installed JupyterLab and the other dependencies to avoid users needing to install these manually later.</p>\n\n<div><div><pre><code>RUN pip install --no-cache-dir \\\n jupyterhub \\\n jupyterlab \\\n notebook \\\n numpy \\\n matplotlib \\\n scikit-learn \\\n ipyleaflet \\\n ipywidgets \\\n ipykernel\n</code></pre></div></div>\n\n<p>Then the system dependencies. A selection of editors and <code>git</code> which is needed for <code>pip install git+https</code>.</p>\n\n<div><div><pre><code>USER root\nRUN apt-get update && apt-get install -y \\\n curl git vim nano \\\n && rm -rf /var/lib/apt/lists/*\n</code></pre></div></div>\n\n<p>Then our custom package from GitHub.</p>\n\n<div><div><pre><code>RUN pip install git+https://github.com/ucam-eo/geotessera.git\n</code></pre></div></div>\n\n<p>The default user database is PAM, so create UNIX users for the workshop participants without a disabled password.</p>\n\n<div><div><pre><code>RUN for user in user1 user2 user3; do \\\n adduser --disabled-password --gecos '' $user; \\\n done\n</code></pre></div></div>\n\n<p>Finally, set the entrypoint for the container:</p>\n\n<div><div><pre><code>CMD [\"jupyterhub\", \"-f\", \"/srv/jupyterhub/jupyterhub_config.py\"]\n</code></pre></div></div>\n\n<p>Next, I created the <code>jupyterhub_config.py</code>. I think most of these lines are self-explanatory. The password is the same for everyone to sign in. 
Global environment variables can be set using <code>c.Spawner.environment</code>.</p>\n\n<div><div><pre><code>from jupyterhub.auth import DummyAuthenticator\n\nc.JupyterHub.authenticator_class = DummyAuthenticator\nc.DummyAuthenticator.password = \"Workshop\"\n\n# Allow all users\nc.Authenticator.allow_all = True\n\n# Use JupyterLab by default\nc.Spawner.default_url = '/lab'\n\n# Set timeouts\nc.Spawner.start_timeout = 300\nc.Spawner.http_timeout = 120\nc.Spawner.environment = {\n 'TESSERA_DATA_DIR': '/tessera'\n}\n\n# Basic configuration\nc.JupyterHub.ip = '0.0.0.0'\nc.JupyterHub.port = 8000\n</code></pre></div></div>\n\n<p>I\u2019m going to use Caddy as a reverse proxy for this setup, for this I need a <code>Caddyfile</code> containing the public FQDN and the Docker container name and port:</p>\n\n<div><div><pre><code>workshop.cam.ac.uk {\n\treverse_proxy jupyterhub:8000\n}\n</code></pre></div></div>\n\n<p>The services are defined in <code>docker-compose.yml</code>; Caddy and the associated volumes to preserve SSL certificates between restarts, <code>jupyterhub</code> with volumes for home directories so they are preserved and a mapping for our shared dataset.</p>\n\n<div><div><pre><code>services:\n caddy:\n image: caddy:latest\n ports:\n - \"80:80\"\n - \"443:443\"\n volumes:\n - ./Caddyfile:/etc/caddy/Caddyfile\n - caddy_data:/data\n - caddy_config:/config\n\n jupyterhub:\n build: .\n volumes:\n - ./jupyterhub_config.py:/srv/jupyterhub/jupyterhub_config.py\n - jupyter_home:/home\n - tessera_data:/tessera\n\nvolumes:\n caddy_data:\n caddy_config:\n jupyter_home:\n tessera_data:\n</code></pre></div></div>\n\n<p>Reset UFW to defaults</p>\n\n<div><div><pre><code>ufw <span>--force</span> reset\n</code></pre></div></div>\n\n<p>Set default policies</p>\n\n<div><div><pre><code>ufw default deny incoming\nufw default allow outgoing\n</code></pre></div></div>\n\n<p>Allow SSH and HTTP(S) services</p>\n\n<div><div><pre><code>ufw allow ssh\nufw allow http\nufw allow https\n</code></pre></div></div>\n\n<p>Enable UFW</p>\n\n<div><div><pre><code>ufw <span>enable</span>\n</code></pre></div></div>\n\n<p>Check status</p>\n\n<div><div><pre><code>ufw status verbose\n</code></pre></div></div>",
+19
mte/2025_07_15_reflink-copy.json
···+"summary": "I hadn\u2019t intended to write another post about traversing a directory structure or even thinking about it again, but weirdly, it just kept coming up again!",+"content": "<p>I hadn\u2019t intended to write another <a href=\"https://www.tunbury.org/2025/07/08/unix-or-sys/\">post</a> about traversing a directory structure or even thinking about it again, but weirdly, it just kept coming up again!</p>\n\n<p>Firstly, Patrick mentioned <code>Eio.Path.read_dir</code> and Anil mentioned <a href=\"https://tavianator.com/2023/bfs_3.0.html\">bfs</a>. Then Becky commented about XFS reflink performance, and I commented that the single-threaded nature of <code>cp -r --reflink=always</code> was probably hurting our <a href=\"https://github.com/ocurrent/obuilder\">obuilder</a> performance tests.</p>\n\n<p>Obuilder is written in LWT, which has <code>Lwt_unix.readdir</code>. What if we had a pool of threads that would traverse the directory structure in parallel and create a reflinked copy?</p>\n\n<p>Creating a reflink couldn\u2019t be easier. There\u2019s an <code>ioctl</code> call that <em>just</em> does it. Such a contrast to the ReFS copy-on-write implementation on Windows!</p>\n\n<div><div><pre><code><span>#include</span> <span><caml/mlvalues.h></span><span>\n#include</span> <span><caml/memory.h></span><span>\n#include</span> <span><caml/unixsupport.h></span><span>\n#include</span> <span><sys/ioctl.h></span><span>\n#include</span> <span><errno.h></span><span>\n</span>\n<span>#ifndef FICLONE\n#define FICLONE 0x40049409\n#endif\n</span>\n<span>value</span> <span>caml_ioctl_ficlone</span><span>(</span><span>value</span> <span>dst_fd</span><span>,</span> <span>value</span> <span>src_fd</span><span>)</span> <span>{</span>\n <span>CAMLparam2</span><span>(</span><span>dst_fd</span><span>,</span> <span>src_fd</span><span>);</span>\n <span>int</span> <span>result</span><span>;</span>\n\n <span>result</span> <span>=</span> <span>ioctl</span><span>(</span><span>Int_val</span><span>(</span><span>dst_fd</span><span>),</span> <span>FICLONE</span><span>,</span> <span>Int_val</span><span>(</span><span>src_fd</span><span>));</span>\n\n <span>if</span> <span>(</span><span>result</span> <span>==</span> <span>-</span><span>1</span><span>)</span> <span>{</span>\n <span>uerror</span><span>(</span><span>\"ioctl_ficlone\"</span><span>,</span> <span>Nothing</span><span>);</span>\n <span>}</span>\n\n <span>CAMLreturn</span><span>(</span><span>Val_int</span><span>(</span><span>result</span><span>));</span>\n<span>}</span>\n</code></pre></div></div>\n\n<p>We can write a reflink copy function as shown below. (Excuse my error handling.) 
Interestingly, points to note: the permissions set via <code>Unix.openfile</code> are filtered through umask, and you need to <code>Unix.fchown</code> before <code>Unix.fchmod</code> if you want to set the suid bit set.</p>\n\n<div><div><pre><code><span>external</span> <span>ioctl_ficlone</span> <span>:</span> <span>Unix</span><span>.</span><span>file_descr</span> <span>-></span> <span>Unix</span><span>.</span><span>file_descr</span> <span>-></span> <span>int</span> <span>=</span> <span>\"caml_ioctl_ficlone\"</span>\n\n<span>let</span> <span>copy_file</span> <span>src</span> <span>dst</span> <span>stat</span> <span>=</span>\n <span>let</span> <span>src_fd</span> <span>=</span> <span>Unix</span><span>.</span><span>openfile</span> <span>src</span> <span>[</span><span>O_RDONLY</span><span>]</span> <span>0</span> <span>in</span>\n <span>let</span> <span>dst_fd</span> <span>=</span> <span>Unix</span><span>.</span><span>openfile</span> <span>dst</span> <span>[</span><span>O_WRONLY</span><span>;</span> <span>O_CREAT</span><span>;</span> <span>O_TRUNC</span><span>]</span> <span>0o600</span> <span>in</span>\n <span>let</span> <span>_</span> <span>=</span> <span>ioctl_ficlone</span> <span>dst_fd</span> <span>src_fd</span> <span>in</span>\n <span>Unix</span><span>.</span><span>fchown</span> <span>dst_fd</span> <span>stat</span><span>.</span><span>st_uid</span> <span>stat</span><span>.</span><span>st_gid</span><span>;</span>\n <span>Unix</span><span>.</span><span>fchmod</span> <span>dst_fd</span> <span>stat</span><span>.</span><span>st_perm</span><span>;</span>\n <span>Unix</span><span>.</span><span>close</span> <span>src_fd</span><span>;</span>\n <span>Unix</span><span>.</span><span>close</span> <span>dst_fd</span><span>;</span>\n</code></pre></div></div>\n\n<p>My LWT code created a list of all the files in a directory and then processed the list with <code>Lwt_list.map_s</code> (serially), returning promises for all the file operations and creating threads for new directory operations up to a defined maximum (8). If there was no thread capacity, it just recursed in the current thread. Copying a root filesystem, this gave me threads for <code>var</code>, <code>usr</code>, etc, just as we\u2019d want. Wow! This was slow. Nearly 4 minutes to reflink 1.7GB!</p>\n\n<p>What about using the threads library rather than LWT threads? This appears significantly better, bringing the execution time down to 40 seconds. However, I think a lot of that was down to my (bad) LWT implementation vs my somewhat better threads implementation.</p>\n\n<p>At this point, I should probably note that <code>cp -r --reflink always</code> on 1.7GB, 116,000 files takes 8.5 seconds on my machine using a loopback XFS. A sequential OCaml version, without the overhead of threads or any need to maintain a list of work to do, takes 9.0 seconds.</p>\n\n<p>Giving up and getting on with other things was very tempting, but there was that nagging feeling of not bottoming out the problem.</p>\n\n<p>Using OCaml Multicore, we can write a true multi-threaded version. 
I took a slightly different approach, having a work queue of directories to process, and N worker threads taking work from the queue.</p>\n\n<div><div><pre><code>Main Process: Starts with root directory\n \u2193\nWorkQueue: [process_dir(/root)]\n \u2193\nDomain 1: Takes work \u2192 processes files \u2192 adds subdirs to queue\nDomain 2: Takes work \u2192 processes files \u2192 adds subdirs to queue\nDomain 3: Takes work \u2192 processes files \u2192 adds subdirs to queue\n \u2193\nWorkQueue: [process_dir(/root/usr), process_dir(/root/var), ...]\n</code></pre></div></div>\n\n<p>Below is a table showing the performance when using multiple threads compared to the baseline operation of <code>cp</code> and a sequential copy in OCaml.</p>\n\n\n\n \n \n Copy command\n Duration (sec)\n \n \n \n \n cp -r \u2013reflink=always\n 8.49\n \n \n Sequential\n 8.80\n \n \n 2 domains\n 5.45\n \n \n 4 domains\n 3.28\n \n \n 6 domains\n 3.43\n \n \n 8 domains\n 5.24\n \n \n 10 domains\n 9.07\n \n \n\n\n<p>The code is available on GitHub in <a href=\"https://github.com/mtelvers/reflink\">mtelvers/reflink</a>.</p>",
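<p>For illustration, here is a minimal sketch of that work-queue shape using OCaml 5 domains. It is not the code from <a href="https://github.com/mtelvers/reflink">mtelvers/reflink</a>: <code>process_dir</code> stands in for "reflink the regular files in this directory and return its subdirectories", and the queue and termination handling are deliberately simple.</p>

<div><div><pre><code>(* Shared work queue drained by N domains. [process_dir dir] is assumed
   to copy the files in [dir] and return the subdirectories still to visit. *)
let run ~domains ~process_dir root =
  let queue = Queue.create () in
  let mutex = Mutex.create () in
  let cond = Condition.create () in
  let pending = ref 1 in   (* directories queued or in flight *)
  Queue.push root queue;
  let rec worker () =
    Mutex.lock mutex;
    let rec take () =
      match Queue.take_opt queue with
      | Some dir -> Some dir
      | None -> if !pending = 0 then None else (Condition.wait cond mutex; take ())
    in
    match take () with
    | None ->
        (* Nothing queued and nothing in flight: all workers can stop. *)
        Condition.broadcast cond;
        Mutex.unlock mutex
    | Some dir ->
        Mutex.unlock mutex;
        let subdirs = process_dir dir in
        Mutex.lock mutex;
        List.iter (fun d -> incr pending; Queue.push d queue) subdirs;
        decr pending;
        Condition.broadcast cond;
        Mutex.unlock mutex;
        worker ()
  in
  let workers = List.init (domains - 1) (fun _ -> Domain.spawn worker) in
  worker ();
  List.iter Domain.join workers
</code></pre></div></div>

<p>A call such as <code>run ~domains:4 ~process_dir root</code> would then correspond to the 4-domain configuration in the table above.</p>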
+19
mte/2025_07_18_docker-oxcaml.json
···+"summary": "Jon asked me to make a Docker image that contains OxCaml ready to run without the need to build it from scratch.",+"content": "<p>Jon asked me to make a Docker image that contains <a href=\"https://oxcaml.org\">OxCaml</a> ready to run without the need to build it from scratch.</p>\n\n<p>I have written a simple OCurrent pipeline to periodically poll <a href=\"https://github.com/oxcaml/opam-repository\">oxcaml/opam-repository</a>. If the SHA has changed, it builds a Docker image and pushes it to current/opam-staging:oxcaml.</p>\n\n<p>The resulting image can be run like this:</p>\n\n<div><div><pre><code><span>$ </span>docker run <span>--rm</span> <span>-it</span> ocurrent/opam-staging:oxcaml\nubuntu@146eab4efc18:/<span>$ </span>ocaml\nOCaml version 5.2.0+ox\nEnter\n<span>#help;; for help.</span>\n\n<span>#</span>\n</code></pre></div></div>\n\n<p>The exact content of the image may change depending upon requirements, and we should also pick a better place to put it rather than ocurrent/opam-staging!</p>\n\n<p>The pipeline code is available here <a href=\"https://github.com/mtelvers/docker-oxcaml\">mtelvers/docker-oxcaml</a> and the service is deployed at <a href=\"https://oxcaml.image.ci.dev\">oxcaml.image.ci.dev</a>.</p>",
+19
mte/2025_07_18_q2-summary.json
···+"summary": "I am grateful for Tarides\u2019 sponsorship of my OCaml work. Below is a summary of my activities in Q2 2025.",+"content": "<p>I am grateful for <a href=\"https://tarides.com\">Tarides</a>\u2019 sponsorship of my OCaml work. Below is a summary of my activities in Q2 2025.</p>\n\n<h1>OCaml Infrastructure and Development</h1>\n\n<h2>OCaml Maintenance Activities</h2>\n\n<p>General maintenance work on OCaml\u2019s infrastructure spanned many areas, including <a href=\"https://www.tunbury.org/2025/03/24/recent-ocaml-version/\">updating minimum supported OCaml versions from 4.02 to 4.08</a> and addressing issues with <a href=\"https://www.tunbury.org/2025/04/04/opam-repo-ci/\">opam-repo-ci job timeouts</a>. Platform-specific work included resolving compatibility issues with <a href=\"https://www.tunbury.org/2025/04/22/ocaml-fedora-gcc/\">Fedora 42 and GCC 15</a>, addressing <a href=\"https://www.tunbury.org/2025/05/13/ubuntu-apparmor/\">Ubuntu AppArmor</a> conflicts affecting runc operations, and managing <a href=\"https://www.tunbury.org/2025/05/19/macos-sequoia/\">macOS Sequoia</a> upgrades across the Mac Mini CI workers. Complex build issues were investigated and resolved, including <a href=\"https://www.tunbury.org/2025/06/21/macos-sequoia-include-path/\">C++ header path problems in macOS workers</a> and <a href=\"https://www.tunbury.org/2025/03/26/freebsd-14.2/\">FreeBSD system upgrades</a> for the CI infrastructure.</p>\n\n<h2>OCaml Infrastructure Migration</h2>\n\n<p>Due to the impending sunset of the <a href=\"https://www.tunbury.org/2025/04/23/blade-allocation/\">Equinix Metal platform</a>, the OCaml community services needed to be migrated. Services including <a href=\"https://www.tunbury.org/2025/04/27/ocaml-ci/\">OCaml-CI</a>, <a href=\"https://www.tunbury.org/2025/04/29/equinix-moves/\">opam-repo-ci</a>, and the <a href=\"https://www.tunbury.org/2025/04/29/equinix-moves/\">opam.ocaml.org</a> deployment pipeline were migrated to <a href=\"https://www.tunbury.org/2025/04/25/blade-reallocation/\">new blade servers</a>. The migration work was planned to minimise service disruption, which was kept to just a few minutes. Complete procedures were documented, including Docker volume transfers and rsync strategies.</p>\n\n<h2>opam2web Deployment</h2>\n\n<p>Optimisation work was undertaken on the <a href=\"https://www.tunbury.org/2025/06/24/opam2web/\">deployment pipeline for opam2web</a>, which powers opam.ocaml.org, to address the more than two-hour deployment time. The primary issue was the enormous size of the opam2web Docker image, which exceeded 25GB due to the inclusion of complete opam package archives. The archive was moved to a separate layer, allowing Docker to cache the layer and reducing the deployment time to 20 minutes.</p>\n\n<h2>opam Dependency Graphs</h2>\n\n<p>Algorithms for managing OCaml package dependencies were investigated, including <a href=\"https://www.tunbury.org/2025/03/25/topological-sort/\">topological sorting</a> to determine the optimal package installation order. This work extended to handling complex dependency scenarios, including post-dependencies and optional dependencies. 
Implemented a <a href=\"https://www.tunbury.org/2025/06/23/transitive-reduction/\">transitive reduction algorithm</a> to create a dependency graph with minimal edge counts while preserving the same dependency relationships, enabling more efficient package management and installation processes.</p>\n\n<h2>OCaml Developments under Windows</h2>\n\n<p>Significant work was undertaken to bring <a href=\"https://www.tunbury.org/2025/06/14/windows-containerd-2/\">containerization</a> technologies to OCaml development on Windows. This included implementing a tool to create <a href=\"https://www.tunbury.org/2025/06/27/windows-containerd-3/\">host compute networks</a> via the Windows API, tackling limitations with <a href=\"https://www.tunbury.org/2025/06/18/windows-reflinks/\">NTFS hard links</a>, and implementing copy-on-write <a href=\"https://www.tunbury.org/2025/07/07/refs-monteverde/\">reflink</a> tool for Windows.</p>\n\n<h2>OxCaml Support</h2>\n\n<p>Support for the new OxCaml compiler variant included establishing an <a href=\"https://www.tunbury.org/2025/06/12/oxcaml-repository/\">opam repository</a> and testing which existing <a href=\"https://www.tunbury.org/2025/05/14/opam-health-check-oxcaml/\">OCaml packages</a> successfully built with the new compiler.</p>\n\n<h1>ZFS Storage and Hardware Deployment</h1>\n\n<p>Early in the quarter, a hardware deployment project centred around <a href=\"https://www.tunbury.org/2025/04/11/dell-r640-ubuntu/\">Dell PowerEdge R640</a> servers with a large-scale SSD storage was undertaken. The project involved deploying multiple batches of <a href=\"https://www.tunbury.org/2025/04/03/kingston-drives/\">Kingston 7.68TB SSD drives</a>, creating automated deployments for Ubuntu using network booting with EFI and cloud-init configuration. Experimented with ZFS implementation as a <a href=\"https://www.tunbury.org/2025/04/02/ubuntu-with-zfs-root/\">root filesystem</a>, which was possibly but ultimately discarded and explored <a href=\"https://www.tunbury.org/2025/04/21/ubuntu-dm-cache/\">dm-cache for SSD acceleration</a> of spinning disk arrays. Investigated using ZFS as a distributed storage archive system using an <a href=\"https://www.tunbury.org/2025/05/16/zfs-replcation-ansible/\">Ansible-based deployment</a> strategy based upon a YAML description.</p>\n\n<h2>Talos II Repairs</h2>\n\n<p><a href=\"https://www.tunbury.org/2025/04/29/raptor-talos-ii/\">Significant hardware reliability issues</a> affected two Raptor Computing Talos II POWER9 machines. The first system experienced complete lockups after as little as 20 minutes of operation, while the second began exhibiting similar problems requiring daily power cycling. Working with Raptor Computing support to isolate the issues, upgrading firmware and eventually <a href=\"https://www.tunbury.org/2025/05/27/raptor-talos-ii-update/\">swapping CPUs</a> between the systems resolved the issue. Concurrently, this provided an opportunity to analyse the performance of OBuilder operations on POWER9 systems, comparing <a href=\"https://www.tunbury.org/2025/05/29/overlayfs/\">OverlayFS on TMPFS versus BTRFS on NVMe storage</a>, resulting in optimised build performance.</p>\n\n<h1>EEG Systems Investigations</h1>\n\n<p>Various software solutions and research platforms were explored as part of a broader system evaluation. 
This included investigating <a href=\"https://www.tunbury.org/2025/04/14/slurm-workload-manager/\">Slurm Workload Manager</a> for compute resource scheduling, examining <a href=\"https://www.tunbury.org/2025/04/19/gluster/\">Gluster distributed filesystem</a> capabilities, and implementing <a href=\"https://www.tunbury.org/2025/05/07/otter-wiki-with-raven/\">Otter Wiki with Raven authentication</a> integration for collaborative documentation. Research extended to modern research data management platforms, exploring <a href=\"https://www.tunbury.org/2025/06/03/inveniordm/\">InvenioRDM</a> for scientific data archival and <a href=\"https://www.tunbury.org/2025/07/02/bon-in-a-box/\">BON in a Box</a> for biodiversity analysis workflows. To support the <a href=\"https://www.tunbury.org/2025/07/14/tessera-workshop/\">Teserra workshop</a>, a multi-user Jupyter environment was set up using Docker containerization.</p>\n\n<h1>Miscellaneous Technical Explorations</h1>\n\n<p>Diverse technical explorations included implementing <a href=\"https://www.tunbury.org/2025/03/15/bluesky-pds/\">Bluesky Personal Data Server</a> and developing innovative <a href=\"https://www.tunbury.org/2025/04/25/bluesky-ssh-authentication/\">SSH authentication</a> mechanisms using the ATProto network by extracting SSH public keys from Bluesky profiles. Additional projects included developing OCaml-based API tools for <a href=\"https://www.tunbury.org/2025/04/12/box-diff/\">Box cloud storage</a>, creating <a href=\"https://www.tunbury.org/2025/03/23/real-time-trains/\">Real Time Trains</a> API integrations, and exploring various file synchronisation and <a href=\"https://www.tunbury.org/2025/06/14/borg-backup/\">backup</a> solutions. Investigation of <a href=\"https://www.tunbury.org/2025/07/15/reflink-copy/\">reflink copy</a> mechanisms for efficient file operations using OCaml multicore.</p>",