diff --git a/notes b/notes
index 4997f27..715dd53 100644
--- a/notes
+++ b/notes
@@ -14,7 +14,13 @@ https://fonts.google.com/knowledge/using_type/implementing_open_type_features_on
https://fonts.google.com/knowledge/choosing_type/pairing_typefaces
https://fonts.google.com/knowledge/choosing_type/pairing_typefaces_within_a_family_superfamily
+https://web.dev/articles/optimize-webfont-loading
+
noted fonts:
+Hanuman https://fonts.google.com/specimen/Hanuman
+TT2020 https://www.fontspace.com/tt2020-font-f42044
+"Zilla Slab", Inter, X-LocaleSpecific, sans-serif
+https://blog.mozilla.org/opendesign/zilla-slab-common-language-shared-font/
Bookerly
"Source Sans Pro", "Lucida Sans Unicode", "Helvetica", "Trebuchet MS", sans-serif, "Noto Emoji", "Quivira"
Palatino,"Palatino Linotype","Palatino LT STD",serif
@@ -23,4 +29,4 @@ Courier New', monospace;
line-height: 1.5;
https://fonts.google.com/specimen/Marcellus?preview.size=79&stroke=Serif
https://fonts.google.com/specimen/Tinos?preview.size=79&stroke=Serif
-https://fonts.google.com/noto/specimen/Noto+Serif+Display?classification=Display&stroke=Serif&stylecount=18&preview.text=Hello%20there
+Noto serif: https://fonts.google.com/noto/specimen/Noto+Serif+Display?classification=Display&stroke=Serif&stylecount=18&preview.text=Hello%20there
diff --git a/posts/HackBU2024.md b/posts/HackBU2024.md
new file mode 100644
index 0000000..7a25001
--- /dev/null
+++ b/posts/HackBU2024.md
@@ -0,0 +1,216 @@
+---
+
+title: "Bits bobs and notes from HackBU 2024"
+
+description: "A summary of my experience, lessons and thoughts on the HackBU 2024 hackathon"
+
+date: "2024-02-20"
+
+draft: false
+
+---
+
+# Hi :)
+
+Over the last weekend I went to [HackBU 2024](https://hackbu.org/2024/). This blog post is me writing about it (maybe not fully coherently). As an aside, I went to the 2023 hackathon as well but didn't write about it, oh well.
+
+## A reminder that I can do things quickly
+
+As with last year1 I worked on a project solo. Also like last year I was able to successfully get out a prototype of that project, though unlike last year the prototype didn't completely work. But I'm getting ahead of myself, I should probably describe what I built before going into detail about my disappointments with it.
+
+## What I built
+
+In Binghamton there are 2 bus systems: 1 is provided by the county and the other is provided by the university. The university buses were not in Google Maps, meaning Google Maps wouldn't show routes involving them. As such I was going to build a system to make it easy to get a route using either or both bus systems.
+
+This might seem ambitious at first but it was actually quite simple, all I had to do was reverse engineer 2 live maps to get the data on bus routes from their APIs, use Google's Routes API to get the travel time of the buses through their routes, calculate the best route from point A to point B with the retrieved bus routes and learn Google's Maps API to visualize the data and build a simple frontend to overlay on that for the user to give input into...
+
+I swear it sounds harder than it was.
+
+## Oops a bit too much scope
+
+Did I mention that I was planning on being even more ambitious than what I just described, by using the live bus positions and past history to try and calculate when a bus would arrive at a particular bus stop, and that point A and point B would've been proper addresses if I hadn't cut back on scope?
+
+So yeah, before anything else I wasted an hour or 2 working on setting up an ORM that I didn't use. But after that I got to work on useful stuff.
+
+## Work begins
+
+First things first, I had to get the data. Now you'd think that reverse engineering a bus live map would be hard, but as it turns out it's pretty easy, at least for what I'm doing. It was literally just
+
+1. go to live map website
+
+2. open up the network tab of the browser dev tools
+
+3. refresh the page and search for the words "bus", "route" and "stop" in the requests
+
+4. click on the obvious results and use brain to figure out what json fields like "name" and "stops" and "lat" and "lon" could possibly mean
+
+Conveniently the hackathon can't really prevent prior work that isn't code, so all of the API reverse engineering was done the day before the hackathon, which canceled out the time wasted on ORM stuff.
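+
+To give a sense of step 4, this is roughly what pulling one of those payloads into Rust with SerDe looks like. The field names below are the ones mentioned above; the overall shape is illustrative, not the actual schema.
+
+```rs
+use serde::Deserialize;
+
+// hypothetical shapes based on the json fields mentioned above
+#[derive(Deserialize)]
+struct BusRoute {
+    name: String,
+    stops: Vec<BusStop>,
+}
+
+#[derive(Deserialize)]
+struct BusStop {
+    name: String,
+    lat: f64,
+    lon: f64,
+}
+
+fn parse_route(body: &str) -> serde_json::Result<BusRoute> {
+    serde_json::from_str(body)
+}
+```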
+
+## Why yes I do prefer non-linear story telling
+
+That reminds me, I should probably mention why I was working on this solo, as well as what I was even using to build it. Unlike last year I tried a little harder to get a group together to work on something, but none of the other ideas were interesting and the people I was hoping to group with assumed I'd be fine on my own. They were right, but I'd have liked the help, if only so I could've increased the scope a bit.
+
+But yeah, once it was clear I'd be working on my own I decided to go with a language I was comfortable with and that I knew had all the tools I needed. That language being [Rust](https://www.rust-lang.org/): [Tide]() for the HTTP server backend, [Reqwest]() for making HTTP requests to various APIs, [SerDe]() for serializing and deserializing JSON, and some other libraries which aren't interesting to list out2.
+
+## Corners cut
+
+I'm not going to talk about the overall development process because it's boring and mostly obvious stuff. However due to being solo and only having 24 hours I did need to cut some corners.
+
+First, at the start of actual work I only expected to get an API done but no frontend; however, the main bulk of the API was done before I started really getting tired, so I had plenty of time to get a frontend with Google Maps out.
+
+However I did have to cut many corners for finding an optimal route. Firstly, I didn't do a graph search at all. If the optimal path used more than 2 buses, or had more than 1 bus without a stop at either the university or the Greyhound station, then my system wouldn't find it. That's because my system only checked 3 types of route to get from where you started to your destination (conceptually the enum below): a single bus, a bus to Binghamton University with a transfer to another bus, and a bus to the Greyhound station with a transfer to another bus. Having used the buses myself I know that those 3 patterns work pretty well for getting you from point A to point B, and doing a proper search seemed like a lot of work.
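+
+Something like this, with names made up for the blog (the real thing wasn't this tidy):
+
+```rs
+// an id for a route from either of the two reverse engineered systems
+type RouteId = String;
+
+// the only three trip shapes the search considered
+enum CandidateRoute {
+    SingleBus(RouteId),
+    TransferAtUniversity(RouteId, RouteId),
+    TransferAtGreyhound(RouteId, RouteId),
+}
+```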
+
+Another corner I cut was the heuristic for how good I considered a route. A good heuristic would take into account walking distance, waiting time and bus transit time. Mine was to minimize the Euclidean distance from the starting position to the boarding stop, plus the distance from the stop you get off at to the destination. That leads to both obvious and subtle incorrectness in measuring how good routes are, but it works well enough so whatever.
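+
+As a sketch, the scoring was conceptually just this, treating lat/lon as a flat plane (which is itself part of the subtle incorrectness):
+
+```rs
+struct Point {
+    lat: f64,
+    lon: f64,
+}
+
+fn dist(a: &Point, b: &Point) -> f64 {
+    ((a.lat - b.lat).powi(2) + (a.lon - b.lon).powi(2)).sqrt()
+}
+
+// lower is better: walk to the boarding stop plus walk from the exit stop,
+// ignoring waiting time and time spent on the bus entirely
+fn route_score(start: &Point, board: &Point, exit: &Point, dest: &Point) -> f64 {
+    dist(start, board) + dist(exit, dest)
+}
+```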
+
+An accidental corner I cut was that uuuh, I might have forgotten/run out of time to put in logic to make sure we aren't trying to go backwards along a bus route. 90% of the time this doesn't matter though, so eeeh.
+
+The time estimate for travel is divided by 3; I don't know why Google Routes gave me time estimates that were higher than necessary.
+
+I was going to deploy this with docker/docker-compose instead of messing with CORS, but more on that in the stories.
+
+Broome County buses are visually drawn as straight lines between the bus stops instead of following the road. I'll talk about this a bit more when I get into the stories, but for now all you need to know is that the reverse engineered live map doesn't give me the path, and using Google Routes for it was something I thought I didn't have time for until right now as I'm writing this... Fuck.
+
+## Story time
+
+### The fucking s
+
+I haven't yet run into someone who tries to claim Google is really good at software, but if I do I will bring this up. When I was using the Google Routes API to figure out how long legs would take, I noticed that the time format looked something like "250s". For about a minute I was panicking, thinking "oh god am I going to have to parse out time units and standardize it", but after sending a request for a route from LA to NYC I got back another time ending in s, so it's just seconds. But dammit, Google's documentation doesn't say that.
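+
+Once you know the s always means seconds, handling it boils down to something like this (the idea, not my exact code):
+
+```rs
+// "250s" -> Some(250); anything not ending in s -> None
+fn parse_duration_secs(raw: &str) -> Option<u64> {
+    raw.strip_suffix('s')?.parse().ok()
+}
+```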
+
+### prematurely closes your connection, refuses to elaborate
+
+Here's a docker compose file
+
+```yml
+version: "3.3"
+services:
+ backend:
+ build: .
+ ports:
+ - 9090:80
+ restart: unless-stopped
+ frontend:
+ build: BBB_frontend
+ restart: unless-stopped
+ ports:
+ - 8080:80
+ depends_on:
+ - backend
+```
+and here's an nginx config
+```
+worker_processes 1;
+
+events {
+ worker_connections 1024;
+}
+
+
+http {
+ include mime.types;
+ default_type application/octet-stream;
+ sendfile on;
+ keepalive_timeout 65;
+ server {
+ listen 80;
+ server_name localhost;
+ location / {
+ root /usr/share/nginx/html;
+ index index.html index.htm;
+ }
+ location /api/ {
+ proxy_read_timeout 300s;
+ proxy_connect_timeout 75s;
+ proxy_send_timeout 60s;
+ proxy_pass http://backend/;
+ }
+
+ error_page 500 502 503 504 /50x.html;
+ location = /50x.html {
+ root /usr/share/nginx/html;
+ }
+ }
+}
+```
+
+see any problems? no? Neither do I, no idea why I...
+
+```
+2024/02/18 07:57:20 [error] 30#30: *1 upstream prematurely closed connection while reading response header from upstream, client: 172.31.0.1, server: localhost, request: "GET /api/ HTTP/1.1", upstream: "http://172.31.0.2:80/", host: "localhost:8080"
+```
+
+Oh yeah that's right, I got this error when I tried to set this up in docker-compose and I still have no idea why. I can only guess that something fucked up is going on between Tide and Nginx. Oh well, that wasted crucial time that could've been better spent noticing and fixing the fuckup in the next story.
+
+### The Polyline encoding fuckup
+
+Okay I didn't say this outright before so I'll say it now: Google's documentation sucks. [Here](https://developers.google.com/maps/documentation/utilities/polylinealgorithm)'s the page describing the polyline encoding. In the off chance that's a dead link, here's the part that I read, assuming that the rest was context I didn't need
+```
+The steps for encoding such a signed value are specified below.
+
+ 1. Take the initial signed value:
+ -179.9832104
+ 2. Take the decimal value and multiply it by 1e5, rounding the result:
+ -17998321
+ 3. Convert the decimal value to binary. Note that a negative value must be calculated using its two's complement by inverting the binary value and adding one to the result:
+ 00000001 00010010 10100001 11110001
+ 11111110 11101101 01011110 00001110
+ 11111110 11101101 01011110 00001111
+ 4. Left-shift the binary value one bit:
+ 11111101 11011010 10111100 00011110
+ 5. If the original decimal value is negative, invert this encoding:
+ 00000010 00100101 01000011 11100001
+ 6. Break the binary value out into 5-bit chunks (starting from the right hand side):
+ 00001 00010 01010 10000 11111 00001
+ 7. Place the 5-bit chunks into reverse order:
+ 00001 11111 10000 01010 00010 00001
+ 8. OR each value with 0x20 if another bit chunk follows:
+ 100001 111111 110000 101010 100010 000001
+ 9. Convert each value to decimal:
+ 33 63 48 42 34 1
+ 10. Add 63 to each value:
+ 96 126 111 105 97 64
+ 11. Convert each value to its ASCII equivalent:
+ `~oia@
+```
+
+here's what I wrote trying to implement that
+```rs
+fn enc_float(num:f64)->String{
+ let mut working:i32 = (num*1e5).round() as i32;
+    //hope this does what's needed
+ working<<=1;
+ if num < 0.0 {
+ working = !working;
+ }
+ let mut bits:[bool;30] = [false;30];
+ for i in 0..30{
+ bits[i] = working % 2 == 1;
+ working >>=1;
+ }
+ bits.chunks(5).rev()
+ .map(|bools|{
+ let mut accu:u8 = 0;
+ for i in 0..5{
+ accu += if bools[4-i]{
+ 1
+ } else {0};
+ accu <<=1;
+ }
+ accu |= 0x20;
+ accu +=63;
+ char::from(accu)
+    }).collect::<String>()
+
+}
+```
+
+Nothing about this is obviously wrong, although if you read the instructions I showed (and not the blurbs above and below) carefully, there are two mistakes that I made. First, I didn't encode all 30 bits I needed, I only got 25, and second, I OR'd every bit chunk with 0x20 rather than all but the last one. In my opinion that bit of the documentation is worded badly ("OR each value with 0x20 if another bit chunk follows", compared to "OR all but the last value with 0x20"), but that's not my main complaint. My main complaint is that they have step by step instructions, which I just showed, **in addition** to a critical paragraph block above them, which I skipped, because the convention is that if you have a step by step guide in either documentation or a tutorial then everything that needs to be done is contained within those steps. I've copied the critical paragraph below with the important bit that I messed up bolded.
+
+> The encoding process converts a binary value into a series of character codes for ASCII characters using the familiar base64 encoding scheme: to ensure proper display of these characters, encoded values are summed with 63 (the ASCII character '?') before converting them into ASCII. The algorithm also checks for additional character codes for a given point by checking the least significant bit of each byte group; if this bit is set to 1, the point is not yet fully formed and additional data must follow.
+> Additionally, to conserve space, **points only include the offset from the previous point** (except of course for the first point). All points are encoded in Base64 as signed integers, as latitudes and longitudes are signed values. The encoding format within a polyline needs to represent two coordinates representing latitude and longitude to a reasonable precision. Given a maximum longitude of +/- 180 degrees to a precision of 5 decimal places (180.00000 to -180.00000), this results in the need for a 32 bit signed binary integer value.
+
+It's also bolded on the page itself, but regardless I skipped over it.
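+
+For posterity, here's a sketch of what a fixed single-value encoder could look like. This is me redoing it while writing this, not the hackathon code, and remember that per the bolded bit every point after the first has to be fed in as an offset from the previous point, not as an absolute coordinate.
+
+```rs
+fn enc_value(num: f64) -> String {
+    // steps 1-5: scale by 1e5, round, shift left, invert if negative
+    let mut v: i64 = (num * 1e5).round() as i64;
+    v <<= 1;
+    if num < 0.0 {
+        v = !v;
+    }
+    // steps 6-11: emit 5-bit chunks lowest first, OR-ing 0x20 onto every
+    // chunk except the last, then offsetting each by 63
+    let mut out = String::new();
+    loop {
+        let mut chunk = (v & 0x1f) as u8;
+        v >>= 5;
+        if v != 0 {
+            chunk |= 0x20;
+        }
+        out.push(char::from(chunk + 63));
+        if v == 0 {
+            break;
+        }
+    }
+    out
+}
+```
+
+Running that on the documentation's example value of -179.9832104 gives back `` `~oia@ `` like it should.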
+
+You may be wondering why this matters, why was I implementing polyline encoding at all? The answer is so I could draw on Google Maps, and yeah, surprise, because I did this wrong the Broome County buses don't show up. The reason I didn't fix it was because I wasn't able to find out until about 3 hours before submission, and I didn't notice for the first 1-2 of those hours due to a mixture of sleep deprivation and eating breakfast.
+
+1 - I built a CI system called [Romance]() last year, which has a separate repo with the [frontend](), and it needs even more duct tape and dreams than this year's project if you want it to work properly
+
+2 - [chrono](), [async-std](), and [anyhow]() and I put in and then took out [geo-types](), [tokio]() and [sea-orm]()
diff --git a/posts/Rust_CPP_comp.md b/posts/Rust_CPP_comp.md
new file mode 100644
index 0000000..388ec3f
--- /dev/null
+++ b/posts/Rust_CPP_comp.md
@@ -0,0 +1,33 @@
+---
+
+title: "Comparing Rust and C++"
+
+description: "A post on how I view rust and C++ in relation to each other and my thoughts on them"
+
+date: "2024-04-11"
+
+draft: true
+
+---
+
+# Less black and white than the hype suggests
+
+C++ is a flawed language, but I think the hype around Rust obscures the ways in which it can be decent. So I want to write about that, while also simping for Rust by pointing out that it makes doing this stuff the default, whereas C++ at best offers it as one option among others, and the other options are the ones a noob finds more obvious.
+
+## Move semantics, references and smart pointers
+
+C++ has move semantics and references, and you can use them to write code that does similar things to what Rust does.
+
+```cpp
+auto x = std::make_unique<int>(3);
+// need to be explicit with std::move but still move semantics, if you know rust then unique_ptr is Box
+std::unique_ptr<int> y = std::move(x);
+
+// this is an implicit call to a method, Rust would require that you use String::from
+std::string s = "hi";
+
+// without std::move this would copy over the contents of s which could be slow, Rust would do the move implicitly unless you called clone
+std::string s2 = std::move(s);
+```
+
+Which is worth keeping in mind, considering how much pre-existing C++ code was written before C++ even had smart pointers or move semantics and so can't do stuff like this.
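+
+For contrast, a rough Rust equivalent of the snippet above, where the moves happen by default:
+
+```rs
+fn main() {
+    let x = Box::new(3);        // Box is the unique_ptr equivalent
+    let y = x;                  // implicit move; using x after this is a compile error
+
+    let s = String::from("hi"); // no implicit conversion from the &str literal
+    let s2 = s;                 // also a move by default; s.clone() if you want the copy
+    println!("{y} {s2}");
+}
+```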
diff --git a/posts/_index.md b/posts/_index.md
new file mode 100644
index 0000000..aa9f0c9
--- /dev/null
+++ b/posts/_index.md
@@ -0,0 +1,4 @@
+---
+title: "Blogs"
+description: some blogs I've posted
+---
diff --git a/posts/finite_KCMP_nums.md b/posts/finite_KCMP_nums.md
new file mode 100644
index 0000000..52ddcf3
--- /dev/null
+++ b/posts/finite_KCMP_nums.md
@@ -0,0 +1,127 @@
+---
+
+title: "Finite KCMP numbers"
+
+description: "Fuck it my brain has a bit too much free time so why not figure out a proof for an isomorphism between programs writen in a turing complete language and natural numbers and use it to do fun stuff"
+
+date: 2022-12-20
+
+draft: true
+
+---
+
+# Finite KCMP numbers
+
+So this blog article exists because I have a bit too much free time and realized that numbers with a finite KCMP have an isomorphism to the natural numbers, among some other interesting stuff, and I wanna write a blog article about it.
+
+> Huh? KCMP? Natural numbers? Isomorphism? the fuck?
+
+## An initial explanation
+
+Okay, maybe I should start with some explaining. KCMP is shorthand that me and a friend (their name is Micha, here's their [blog](https://lochalhost.pl/en/blog) and here's their [github](https://github.com/michalusio)) use when referring to [Kolmogorov complexity](https://en.wikipedia.org/wiki/Kolmogorov_complexity). Kolmogorov complexity is the length of the shortest program, written in a turing complete language, needed to calculate a particular value. From here on I'm gonna write KCMP because that's shorter. If you've never heard of KCMP I don't blame you; until Micha brought the term to my attention I wasn't sure the thing I was thinking of had a name.
+
+> Wait you were thinking of KCMP before you even had a word for it?
+
+Yup, you see some months ago I started this journey with a relatively simple question: "are there numbers which we can't calculate?". The answer is yes, there are numbers that we cannot ever calculate; these numbers are irrational numbers with infinite KCMP. For a while that was that, but before we go on I should probably make sure you know what I mean by isomorphism and natural numbers.
+
+Natural numbers are the numbers you count with, `1,2,3,4,5,6,7,...`: no fractions, no negatives, nothing you couldn't count with infinite fingers. Another way to describe them is to just say they're all the positive integers.
+
+When I say [isomorphism](https://en.wikipedia.org/wiki/Isomorphism) I'm talking about a method that we can use to convert from objects within one set to objects in another set that we can also reverse to get back the original object.
+
+## The beginning
+
+The way this started is that I was thinking about the fact that natural numbers, lists and code/functions can all be used to represent each other. But going into that is a whole rabbit hole involving control flow, lambda calculus, how you can represent things with other things, abstract syntax trees and how modern electronic computers work at a basic level, so I won't go into details here. All you need to know is that while I was thinking about that I made a connection to some random thoughts I'd had between first learning about KCMP and then. Those thoughts were about what the sizes of the sets of numbers with finite and infinite KCMP are.
+
+> Wait aren't both of those infinite?
+
+Yep, but there's more than one size of infinity. The smallest infinity is the size of the set of natural numbers, or [aleph](https://en.wikipedia.org/wiki/Aleph_number) null/nought. One of the infinities that's larger than that is the infinity of the real numbers. Due to that I was wondering whether the set of values with finite KCMP had the size of the naturals or the reals. Wait, what was I talking about? Oh yeah, the isomorphism with the naturals. The connection I made when thinking about how code/functions, lists and natural numbers are all equivalent was that I had just answered the question I'd been carrying around for a decent while, and then I remembered one of the reasons I was wondering about it in the first place and got very confused.
+
+## Why I got very confused
+
+The reason I got very confused then is because I could apply a [diagonalization argument](https://en.wikipedia.org/wiki/Cantor%27s_diagonal_argument) to the set of all numbers with finite KCMP and get a number which shouldn't be in the set of all numbers with finite KCMP but meets all the criteria to be in that set, meaning I had found a [paradox](https://en.wikipedia.org/wiki/Paradox).
+
+> How do you know this number should be in your set?
+
+For the sake of making this conversation easier let's give this number the name ψ, because the symbol ψ is underutilized in math. We know that ψ, if it exists (we'll get to that), has finite KCMP, because the process used to generate it from the set of all finite KCMP numbers only adds a finite amount of KCMP on top of the KCMP of that set. And the set of all numbers with finite KCMP itself has finite KCMP, because we can describe its creation with the following python program
+```py
+counter = 0
+finite_KCMP_set = set()
+while True:
+ counter = counter + 1
+ finite_KCMP_set.add(eval_num(counter))
+```
+> wait but that doesn't halt, and also what's this `eval_num` function? you never explained the whole natural numbers as code thing
+
+...do I really have to dive into that rabbit hole? ... Fuck it, let's dive in.
+
+## An isomorphism between natural numbers and programs for a turing machine
+
+Alright, so first things first, we need to convert a natural number into an array of bytes, which is pretty trivial to do.
+```py
+num = random_natural() # stand-in for wherever the natural comes from
+bytes = []
+while num > 0:
+    bytes.append(num % 256) # least significant byte first
+    num //= 256             # integer division so this actually terminates
+```
+This array of bytes can then be treated as machine code that we run on whatever architecture, and thereby the isomorphism between naturals and programs for a turing machine is complete. ... Okay, I'll be a bit more rigorous, but not much. A turing machine can be constructed from a finite set of instructions operating on infinite memory, so each instruction can correspond to 1 natural number. We can read an arbitrarily large natural number out of our byte array by having the highest bit of each byte indicate whether the value continues into the next byte. Granted, you don't need more than 255 instructions for a turing complete machine, but doing it this way is convenient because we can use the same trick for the arguments to an instruction, which lets us pass arbitrarily large values as jump locations or places to read data from. Anyways, I'm not going into more detail beyond that; I'm sure you can get plenty creative making your own turing complete machine with infinite memory. To invert the relation you just take the byte array of a program and run it through this code
+```py
+num = 0
+# walk from the most significant byte (the end of the array) back down
+for b in reversed(bytes):
+    num *= 256
+    num += b
+```
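+
+As an aside, the continuation-bit trick from above looks something like this in Rust (just the general idea, not code from anywhere in particular):
+
+```rs
+// each byte carries 7 bits of payload; the high bit marks "another byte follows"
+fn read_varnat(bytes: &mut impl Iterator<Item = u8>) -> Option<u64> {
+    let mut out: u64 = 0;
+    let mut shift = 0;
+    loop {
+        let b = bytes.next()?;
+        out |= ((b & 0x7f) as u64) << shift;
+        if b & 0x80 == 0 {
+            return Some(out);
+        }
+        shift += 7;
+    }
+}
+```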
+
+## The paradox
+
+Anyways, back to the issue at hand: we have a number (ψ) that should be in a set by the definition of the set, but also shouldn't be in that set due to how it got constructed. At this point I wasn't sure what to make of this, but when I mentioned it to Micha he had a few theories on what was going on.
+
+1. ψ isn't in the set (except it is)
+2. The set isn't constructible
+3. The set isn't enumerable (interesting if true but obviously false)
+
+> can you explain these statements?
+
+Sure, but I don't remember how much of this I thought up vs Micha, so I can't give proper credit; if that's important to you, you'll need to reach out to me or Micha so we can share some discord messages.
+
+### ψ isn't in the set
+
+This would explain the paradox because it would mean there is no paradox. However it's false because, as described before, ψ is built in finite steps from things with finite KCMP and thereby also has finite KCMP.
+
+### The set isn't constructible
+
+This is the theory I'm currently in favor of, as it seems to be the assumption being made that isn't actually true. It also means that ψ doesn't exist.
+
+### The set isn't enumerable
+
+This is false because the set is obviously enumerable: you enumerate it by going through each natural number and evaluating the corresponding program.
+
+## A set that actually exists
+
+I can definitely credit the definition of this set to Micha; they gave a definition of *"The set of all generatable tape sequences of a TM which have a limit"*. ψ isn't in this set because it doesn't have a proper limit: let n be the natural that corresponds to the program generating ψ; the nth digit of ψ can't be any number, because it needs to not be equal to itself, so ψ doesn't have a limit and isn't included in our set.
+
+## Some conjecture
+
+With that we leave the realm of things I discussed with Micha and enter the realm of increasingly stupid ideas.
+
+With this knowledge that numbers with finite KCMP have a mapping to the integers...
+
+> wait hold on don't those numbers include complex numbers, fractions, quaternions, irrational numbers, vectors and more? How do you know what's what from the limiting byte sequence?
+
+Stop poking holes in my fun math thoughts. Anyways, with the knowledge that integers correspond to finite KCMP numbers, I conjecture that the fact that the infinity of the real numbers is larger than the infinity of the naturals will never come up in practice, barring our current understanding of the universe being completely wrong or one or more universal constants having infinite KCMP (we'll come back to this in a second). This is because every process of calculation we have available is no more powerful than a turing machine, which can only compute values that, as we just showed, map to the natural numbers rather than the real numbers.
+
+## An amusing unfalsifiable hypothesis
+
+I hypothesize that all of the universal constants have infinite KCMP, due to the universe being a continuum of possible universal constants across a multi-dimensional plane with the observable universe being just one point on that plane picked at random; because the point is picked at random, the probability that all its values have infinite KCMP is practically 100%. Do I have evidence, research or anything else backing this theory up? Nope, but it's interesting.
+
+> Why is the probability practically 100% if the values are picked at random, and how is that unfalsifiable?
+
+The reason the probability is practically 100% is that the set of finite KCMP numbers is countably infinite while the real numbers are bigger, which means the set of numbers with infinite KCMP has the size of the real numbers and is thereby infinitely bigger. That makes the ratio comparable to 1:∞, meaning the probability of any universal constant having finite KCMP (assuming all are chosen fully randomly over a continuum) is 0 (technically not impossible for math reasons, but practically impossible).
+
+As for the question of unfalsifiability, we'd need to be able to make infinitely precise measurements to confirm it one way or the other, and things like Planck's constant conspire to prevent this; besides, measuring with an error margin of 0 is pretty difficult if it's possible at all.
+
+## Conclusion
+
+Doing all this abstract high level math is fun, though I fully expect that none of it will ever be useful in any way ever. I'm mildly tempted to argue otherwise in a CS ethics class I have coming up in uni soon, but that probably isn't worth the effort or the crappy grade.
diff --git a/posts/fractions.md b/posts/fractions.md
new file mode 100644
index 0000000..6ea9e3e
--- /dev/null
+++ b/posts/fractions.md
@@ -0,0 +1,65 @@
+---
+
+title: "Fractions"
+
+description: "So I've been thinking about representing fractions/rational numbers in binary effieciently..."
+
+date: 2023-08-07
+
+draft: false
+
+---
+
+# Fractions
+
+So recently I've been thinking about how fractions could be represented in binary efficiently. Okay, I've had it in the back of my mind for months, but [this video](https://www.youtube.com/watch?v=4d6YrTKmjfE) gave me a spark on a solution.
+
+## The beginning
+
+I don't remember why I first started considering this but the first obvious solution I came up with was to do something simple like this.
+
+```rs
+struct Fraction{
+ numerator:u32,
+ denominator:u32
+}
+```
+
+Which does work, however it's extremely wasteful, with situations like 1/2 = 2/4 = 4/8 = 5/10 = ... among others.
+
+This is okay depending on your use case: if you're fine taking a memory and computational hit you can just check whether any resulting fraction can be simplified and simplify it (a sketch of that is below). But I don't like that, because it's inefficient and very inelegant.
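+
+A minimal sketch of that simplify-everywhere approach, assuming the struct above:
+
+```rs
+fn gcd(mut a: u32, mut b: u32) -> u32 {
+    while b != 0 {
+        let t = a % b;
+        a = b;
+        b = t;
+    }
+    a
+}
+
+impl Fraction {
+    // divide out the gcd so 2/4 and 1/2 end up as the same bits
+    fn simplified(self) -> Fraction {
+        let g = gcd(self.numerator, self.denominator).max(1); // max(1) dodges 0/0
+        Fraction {
+            numerator: self.numerator / g,
+            denominator: self.denominator / g,
+        }
+    }
+}
+```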
+
+## Why not just use floating point then
+
+Because fractions rule and scientific notation drools, also floating point is just an approximation bro and thinking about how to solve this for fractions is fun.
+
+## Second solution attempt
+
+Alright so that first try didn't work particularly well with how wasteful it was but I'm sure this time it'll go better.
+
+```rs
+struct Rational{
+ integer: u32,
+ fraction: u32
+}
+```
+
+... okay I should probably explain. The denominator of the fraction is always 2^32, so we no longer have to worry about situations where 2 fractions simplify to the same value: all denominators are the same, so that only happens when they're represented by the same bits, hurrah! Anyways, addition and subtraction are both pretty easy and cheap, just do them piecewise and handle overflows. But multiplication and division are a problem now, because both of them require us to have some kind of external storage and are going to be pretty computationally inefficient. Well crap.
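+
+For what it's worth, the easy half looks something like this sketch (assuming `Rational` from above also derives `Clone, Copy`):
+
+```rs
+fn add(a: Rational, b: Rational) -> Rational {
+    // if the fraction parts wrap around, carry one whole unit into the integer part
+    let (fraction, carry) = a.fraction.overflowing_add(b.fraction);
+    Rational {
+        integer: a.integer.wrapping_add(b.integer).wrapping_add(carry as u32),
+        fraction,
+    }
+}
+```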
+
+## Inspiration from the video
+
+And that's basically where I got stuck until I found the video linked above. In the video they describe an algorithm for generating all rational numbers exactly once. Why it works doesn't matter for this blog (go watch the video if you care), but I will describe how it works. First, start off with the fractions 0/1 and 1/0, ignoring that the second is undefined. Now add their numerators and denominators together to get a new number, 1/1. Put that new number between them and then repeat that process with all adjacent numbers for however many steps you want. What I did with this was stop at a finite number of repetitions (if you wanna try at home I made a repo that generates all the fractions); the binary representation would then be whatever index in the list a particular fraction was at.
+
+Amusingly this gave the illusion of actually working for a moment. Namely when I added the indices of x/y and (y-x)/y for a few examples I got 1. But then I tried adding 1/18 to itself 3 times, and I got 1/16.
+
+## Why does this work and why does it stop working?
+
+That is the question. Well, my theory is that there's a symmetry from 0-1 such that reflecting any value x/y over the value 1/2 turns it into (y-x)/y, due to the way this sequence is generated. The reason it doesn't work for values that don't add up to 1 is that there's no equivalent symmetry, which causes problems due to the distribution of fractions not being uniform over the integers (aka going to and from integers with this system is non-linear). This problem leaves us without even a way to salvage a good mechanism for fractions from 0-1, which would've been useful in combination with method 2 if multiplication and division didn't cause issues.
+
+## Oh wait solution 2 is fine actually
+
+Yeah, I didn't think about it hard enough originally: shift both numbers to the right by half the number of fraction bits before you do the equivalent integer operation and you're fine (sketch below). That said, it may make sense to only allocate a quarter of the number to the fraction (good enough for most cases), or if I were to actually implement this it'd be user specified (if people actually started using it then there'd probably be encoding problems, but I'm not even going to make this so not my problem).
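+
+Here's a sketch of what that trick means for multiplication, still assuming the 32.32 `Rational` from before (packed into a `u64` for the arithmetic):
+
+```rs
+fn to_bits(r: Rational) -> u64 {
+    ((r.integer as u64) << 32) | r.fraction as u64
+}
+
+fn from_bits(b: u64) -> Rational {
+    Rational { integer: (b >> 32) as u32, fraction: b as u32 }
+}
+
+// shift each operand right by half the fraction bits, multiply, and the result
+// lands back at the 32.32 scale, at the cost of each operand's low 16 fraction bits
+fn mul(a: Rational, b: Rational) -> Rational {
+    from_bits((to_bits(a) >> 16).wrapping_mul(to_bits(b) >> 16))
+}
+```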
+
+## Conclusion
+
+The simplest solution is often terrible but the second simplest is generally at least okay. This also gives me a new appreciation for how elegant floating point is (dodging completely the question of how many bits go to the precision). I also have a side quest on making a program to generate fractions for solution 3, which I intend to start writing about now.
diff --git a/posts/fractions_sidequest.md b/posts/fractions_sidequest.md
new file mode 100644
index 0000000..5f47cd3
--- /dev/null
+++ b/posts/fractions_sidequest.md
@@ -0,0 +1,207 @@
+---
+
+title: "Fractions Sidequest"
+
+description: "In my last blog I wrote about my explorations on a new number type that specifies fractions rather than approximate binimal(? decimal has a latin root for 10 but floating point uses binary so what word?) or integers"
+
+date: 2023-08-08
+
+draft: false
+
+---
+
+# The Sidequest
+
+Welcome to a blog post about a sidequest I went on while exploring a computer [fraction](/blog/fractions) based number system. Specifically for my third solution I wanted to generate a list of fractions generated by the algorithm from [the video](https://www.youtube.com/watch?v=4d6YrTKmjfE).
+
+## Algorithm recap
+
+The algorithm is decently simple, but knowing it is a bit of a pre-requisite for the rest of this post. As such, the steps are below.
+
+1. Start with a pair of fractions you want the generated fractions to range over(there are probably restrictions on what you can pick but for the rest of this post assume they're 0/1 and 1/0 which are fine and allow for ranging over the entire number line)
+2. Add the numerators and denominators of the 2 fractions
+3. Put the newly created fraction between the fractions used to generate it
+4. repeat with all fractions next to each other in the list for however long you want for more fractions(you won't get repeats)
+
+## Script 1
+
+I wanted to use a fast lang for this so I chose Rust(also because I personally like Rust). It didn't take long for me to write this(slightly changed for clarity)
+
+```rs
+use std::ops::Add;
+
+fn main()->anyhow::Result<()>{
+
+    let mut fracs = Vec::<Frac>::new();
+ fracs.push(Frac(0,1));
+ fracs.push(Frac(1,0));
+
+ for i in 0..19{
+ eprintln!("{}",i);
+ step(&mut fracs);
+ }
+
+ //remaining code in main wrote the fractions to a file and didn't change, maybe I could've written it to be faster but that's not the focus of this blog
+ Ok(())
+}
+
+fn step(list:&mut Vec<Frac>){
+
+ let mut i = 0;
+ // I wanted a progress bar and in this case it actually is the reason I even knew there was a performance problem
+ let bar = indicatif::ProgressBar::new(list.len() as u64);
+
+ while i < list.len()-1{
+ bar.inc(1);
+ list.insert(i+1,list[i]+list[i+1]);
+ i+=2;
+ }
+
+ bar.finish_and_clear();
+}
+
+// trait impls are for convenience
+#[derive(Clone, Copy)]
+struct Frac(u16,u16);
+
+impl Add for Frac{
+ type Output = Self;
+ fn add(self, rhs: Self) -> Self::Output {
+ Frac(self.0+rhs.0,self.1+rhs.1)
+ }
+}
+```
+
+This code feels bad even from a code quality point of view, but idk why; regardless, it's hilariously bad performance wise.
+
+Considering that we're "only" doing addition this is incredibly slow. Slower than addition in (insert butt-of-the-joke language of this week here). With all that in mind, something is definitely up, and if you read the code above and think about it enough you'll probably see it.
+
+... Yeah the problem is this line here
+
+```rs
+list.insert(i+1,list[i]+list[i+1]);
+```
+
+Citing the documentation:
+
+> Inserts an element at position index within the vector, **shifting all elements after it to the right**.
+
+In this case "all elements after it" is tens of millions of values; to put this in big O notation, doing things this way for every element makes the whole process O(n²).
+
+The solution is of course simple: don't ever put anything into a vec anywhere other than the end (barring witchcraft). Unfortunately, implementing that solution required rewriting this code, but I took it as a nice opportunity to also multi-thread the code.
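+
+In hindsight the minimal fix would've been a `step` that builds a fresh Vec and only ever pushes to the end, something like this sketch (not what I actually did; the multi-threaded rewrite is below):
+
+```rs
+fn step(list: &[Frac]) -> Vec<Frac> {
+    let mut out = Vec::with_capacity(list.len() * 2 - 1);
+    for pair in list.windows(2) {
+        out.push(pair[0]);
+        out.push(pair[0] + pair[1]); // the new fraction goes between its parents
+    }
+    out.push(*list.last().unwrap());
+    out
+}
+```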
+
+## Concurrency how?
+
+Unfortunately this isn't quite trivially parallelizable, so I can't just use rayon. In the face of this, a very naive solution would be something like
+
+```rs
+// ignoring move semantics and the need to only use functions that exist for convenience and readability
+
+fn recurse(f1:Frac, f2:Frac, remaining:usize)->Vec<Frac>{
+    // base case so the recursion actually stops
+    if remaining == 0 {
+        return Vec::new()
+    }
+
+    let middle = f1+f2;
+
+ let left_thread = std::thread::spawn(||recurse(f1,middle,remaining-1));
+ let right_thread = std::thread::spawn(||recurse(middle,f2,remaining-1));
+
+ let left = left_thread.join();
+ let right = right_thread.join();
+
+ //don't need return but not everyone knows rust
+ return concat(left, middle, right)
+}
+```
+
+The reason I call this the naive solution is that it uses OS threads, and OS threads are expensive memory wise. Also, if you spawn more of them than CPU cores you get minimal benefit, and if you keep spawning them anyways the OS tends to have a panic attack. That's bad, so instead of using OS threads let's use green threads, getting less overhead while still using multiple threads from a pool.
+
+## Script 2
+
+```rs
+#[tokio::main]
+async fn main()->anyhow::Result<()>{
+ const RECURSIONS:u64 = 19;
+
+ let fracs = recurse(Frac(0,1),Frac(1,0),RECURSIONS).await;
+
+ Ok(())
+}
+
+// actual code is much more ugly in reality due to reasons(code below won't compile), if you wanna see it there's a link to the repo with all this code at the bottom of the article, the git commit is de72a7a0
+// also removing progress bar code because nobody cares just know I still had a progress bar
+async fn recurse(f1:Frac,f2:Frac, remaining:u64)->Vec<Frac>{
+ // base case for the recusion
+ if remaining == 0 {
+ return Vec::new()
+ }
+
+ // same idea as the naive version
+ let middle = f1+f2;
+ let left_task = tokio::task::spawn(recurse(f1,middle,remaining-1));
+ let right_task = tokio::task::spawn(recurse(middle,f2,remaining-1));
+
+ let left = left_task.await.expect("left future failure");
+ let mut right = right_task.await.expect("right future failure");
+
+ // how concat is being achieved
+ let mut ret = left;
+ ret.push(middle);
+ ret.append(&mut right);
+
+ return ret
+}
+
+// Frac is the same as before
+```
+
+Looks good at first glance (the actual version's code quality is bad but the blog version seems alright). What happens when we run it?
+
+Oh... we run out of memory... or well, we run out of 30 gigabytes of memory, because I set a limit to avoid affecting the other stuff running on the server (because it isn't mine). But why? Doing the math, if all we had to deal with was the fractions we'd be using about `17501876*4/1000**3 ~ 0.07 GB`; if we include the overhead of all the Vecs we make and are pretty aggressive with how much memory we assume they use, maybe 0.21 GB, which is a difference of over 142x. So what's the rest of the memory?
+
+Well... I'm not 100% sure actually, but my current best guess is the green threads/tokio tasks. Whatever it is, doing a bit of quick math (I just divided 30GB by the number of running tasks) it seems to have average memory usage measured in hundreds of bytes and/or a kilobyte or 2. So I guess I gotta take out the green thread usage, huh.
+
+## Script 3(the finale for now)
+
+So yeah I did that. I didn't need a rewrite this time, just a refactor.
+
+```rs
+fn main()->anyhow::Result<()>{
+ const RECURSIONS:u64 = 32;
+
+ let fracs = recurse(Frac(0,1),Frac(1,0),RECURSIONS);
+
+ Ok(())
+}
+
+fn recurse(f1:Frac,f2:Frac, remaining:u64)->Vec<Frac>{
+ // nothing new here
+ if remaining == 0 {
+ return Vec::new()
+ }
+
+ let middle = f1+f2;
+ let left = recurse(f1,middle,remaining-1);
+ let mut right = recurse(middle,f2,remaining-1);
+
+ let mut ret = left;
+    ret.reserve(right.len()+1); // reserve takes additional capacity on top of the current length
+ ret.push(middle);
+ ret.append(&mut right);
+
+ return ret
+}
+
+// Frac now uses 32 bit ints rather than 16 bit ints due to an overflow
+```
+
+This solves the whole running out of memory thing. A funny side effect is that now it's even faster(even though it's 1 thread).
+
+So that was fun going through and making all this work out well, now I can generate gigabytes upon gigabytes of fractions with ease.
+
+## Conclusion
+
+Could I optimize this more? Yes: I could pre-allocate a buffer and use a specialized thread pool (and probably some unsafe code as well, thinking about it). But I won't, because it's fast enough, the remaining speed gains probably aren't worth it, and most of the execution time is spent writing the results to disk. Overall this was a fun sidequest as part of the fraction quest. I did other stuff between the article before the fractions one and the fractions one itself, and maybe I'll dump those articles at some point soon so I can stop feeling bad about them sitting in my website's git repo doing nothing.
+
+[git repo with the generator](https://github.com/Pagwin-Fedora/fraction_generator)
diff --git a/posts/gh_actions.md b/posts/gh_actions.md
new file mode 100644
index 0000000..bbf15b7
--- /dev/null
+++ b/posts/gh_actions.md
@@ -0,0 +1,204 @@
+---
+
+title: "Setting up CD for this site"
+
+description: "How I setup Github actions to automatically update this site"
+
+date: 2022-01-22
+
+draft: false
+
+---
+
+So recently I got a bit of a bee in my bonnet to go set up CD for this website. The main reasons that drove this were 1. deploying the site was mildly tedious, which is a good enough reason on its own, and 2. I wanted to learn how to do it.
+
+
+
+## Wait but how did I find out about and how to do this?
+
+I was aware of Github actions and had a vague sense of how they should work from observing how things went when I made my small contribution to [Gerald](https://github.com/Gerald-Development/Barista-Gerald). But observing that didn't really give a sense of how it worked. What did was my friend [Micha](https://github.com/michalusio) working on implementing [their own blog](https://lochalhost.pl) and setting things up with Github actions for CI/CD. Then I saw [this Fireship video](https://www.youtube.com/watch?v=eB0nUzAI7M8), which gave me a nice amount of context for this. Even with that bit of knowledge on how to set up stuff with Github actions, I didn't really have a motivation to go do it.
+
+
+
+## The spark to actually do it
+
+Then, for a couple of reasons, I wanted to write a blog article about progress on [Pogo](https://github.com/Pagwin-Fedora/Pogo). But I decided that before I wrote any more articles I should go look into setting up CD for my site.
+
+
+
+## Implementing CD with Github actions
+
+So with inspiration in my heart to go and do stuff with Github actions I began. First off I needed to set up the condition for my workflow running, which was pretty simple as I wasn't really doing anything interesting here.
+
+```yaml
+on:
+  push:
+    branches:
+      - master
+```
+
+
+## The jobs
+
+I knew, from reading some pages on Github's actions marketplace and previous context, that I would need at least 3 if not more steps
+
+1. checkout the code
+
+2. use Hugo to build the site
+
+3. deploy to my VPS
+
+
+
+So, going through each of those steps in order: first we have checking the code out, which was a pretty simple `uses: actions/checkout@v2`, additionally telling it to fetch submodules due to the structure of my project. After checking out the code I had to use Hugo, and conveniently there was a module for Hugo, `peaceiris/actions-hugo@v2`, although sadly it only installs Hugo, so another step had to be added to build the site. But that step was a pretty simple `run: hugo --minify`. I will say that if the example on the marketplace page didn't use the `--minify` option I wouldn't have either, because I didn't know it existed, so that was a nice little learning experience. After building the code I needed to deploy it, which, this being a static site, was theoretically as simple as copying files with rsync. But I didn't want an automated action to have access to root or my user, for security and anti-stupidity reasons. To implement that I had to leave the realm of Github actions and go over to my VPS to set some stuff up.
+
+
+
+## Some work on the VPS
+
+That stuff in question was adding a new user and changing the perms of /var/www/pagwin.xyz so that new user could edit files there. This was pretty simple.
+
+```sh
+sudo useradd website # I can hear people laughing at me already for not passing the -m option but relax I'll explain later
+sudo chown -R website:www-data /var/www/pagwin.xyz # btw I didn't explain earlier but my website files are in /var/www/pagwin.xyz not /var/www/html because I'm hosting multiple sites on this VPS and the folder change makes it easier to keep track of which one I'm screwing with
+```
+
+However, due to my unwillingness to give the new user a home directory, for cleanliness and to avoid unnecessarily leaving a user that could receive emails (I have an email server set up on this VPS as well), I didn't create one. But in order for the deployment workflow on Github to deploy to the VPS via rsync it would need ssh access... Okay, the problem may not be obvious if you don't understand ssh/good security practices very well. The problem is that in order to log in over ssh via an ssh key you need to put that key into `$user_home/.ssh/authorized_keys`, which requires the user to have a home directory, which I was unwilling to create. Password authentication is also not an option, because allowing password auth onto a server is insecure compared to only allowing ssh keys. This is especially true when the ssh login is being done by an automated system. Also, my VPS requires the usage of a TOTP if you log in via a password, and setting that up for Github actions sounds like a nightmare. Also also, for the server to know the TOTP requires a file... which goes into the home directory, meaning nothing has changed or improved by trying to use a password.
+
+Conveniently, while `$user_home/.ssh/authorized_keys` is the default location for ssh public keys, it's pretty easy to give sshd another location to look for authorized keys just by adding the line `AuthorizedKeysFile .ssh/authorized_keys /etc/ssh/keys/%u.authorized.pub` to `/etc/ssh/sshd_config`, where the latter bit, `/etc/ssh/keys/%u.authorized.pub`, is added on from the default. That tells sshd to look for public keys at an additional location, with the username of the user trying to log in replacing `%u`. After that whole hassle was done with, generating the ssh key was pretty simple with `ssh-keygen` and putting the public key in the right spot. Adding the private key as a Github secret was annoying however, but I'll discuss that in the [Dealing With My Stupidity](#dealing-with-my-stupidityand-a-private-ssh-key) section.
+
+
+
+## What were we talking about oh yeah Github actions
+
+Anyways, yeah, this is what I initially wrote (spoiler: I change it) for Github actions to go and deploy the app.
+
+```yaml
+uses: up9cloud/action-rsync@master
+env:
+  HOST: pagwin.xyz
+  KEY: ${{secrets.SSH_KEY}}
+  TARGET: /var/www/pagwin.xyz/
+```
+
+With that I saved the file to `.github/website-publish.yml` and felt a mild sense of accomplishment. In hindsight that sense and that first file are hilarious, and while I would love to immediately explain why, first I want to show a step I added after I finished dealing with my stupidity. That step is a cleanup step that deletes the old site before copying over the new one so people can't snoop around in redundant files. I implemented it with this tidbit.
+
+```yaml
+uses: appleboy/ssh-action@master
+with:
+  host: pagwin.xyz
+  username: website
+  key: ${{secrets.SSH_KEY}}
+  script: rm -rf /var/www/pagwin.xyz/*
+```
+
+
+
+## Dealing With My Stupidity(and a private ssh key)
+
+The obvious act of stupidity, if you paid attention to what I wrote, is that I saved the file to `.github/website-publish.yml` instead of `.github/workflows/website-publish.yml`. Fixing that was pretty easy once I figured out what was going on. After that I had to tweak the deploy step a bit to make rsync work properly. I did a couple of things wrong with the deploy step: one, I didn't specify a username, and two, I didn't specify my source directory. The source directory thing was particularly stupid: I wanted the files inside the public folder, but just putting ./public gave the folder itself with the files in it. While removing that from my VPS I deleted the `/var/www/pagwin.xyz` folder itself, which required briefly recreating it. Then I set up the source correctly to get the files properly, but also set the target wrong so everything went into a folder named `*`, which was annoying, but at the end of all that I had a pretty smooth setup. Also, when trying to copy my private key over to Github actions I struggled a little because I wanted to use xclip to put it in my clipboard, but with website being a different user I couldn't do that directly. This would've been fixed relatively easily if I'd realized this is why Github's cli exists, but oh well, I eventually got it figured out.
+
+
+
+## Conclusion
+
+Overall I'm very happy I did this, because it gave me a nice bit of practical understanding of how to set up Github actions for future projects. I hope reading about my technical spaghetti VPS and idiocy wasn't too boring. Oh yeah, for those who care, this is what the yaml file looked like in the end.
+
+```yaml
+name: Website publish
+
+on:
+  push:
+    branches:
+      - master
+
+jobs:
+  build:
+    runs-on: ubuntu-latest
+    steps:
+      - name: Code Checkout
+        uses: actions/checkout@v2
+        with:
+          submodules: true
+          fetch-depth: 0
+      - name: Hugo Setup
+        uses: peaceiris/actions-hugo@v2
+        with:
+          hugo-version: '0.91.2'
+      - name: Build
+        run: hugo --minify
+      - name: Clean
+        uses: appleboy/ssh-action@master
+        with:
+          host: pagwin.xyz
+          username: website
+          key: ${{secrets.SSH_KEY}}
+          script: rm -rf /var/www/pagwin.xyz/*
+      - name: Deploy
+        uses: up9cloud/action-rsync@master
+        env:
+          HOST: pagwin.xyz
+          USER: website
+          KEY: ${{secrets.SSH_KEY}}
+          SOURCE: ./public/*
+          TARGET: /var/www/pagwin.xyz/
+```
diff --git a/posts/how.md b/posts/how.md
new file mode 100644
index 0000000..af55bf7
--- /dev/null
+++ b/posts/how.md
@@ -0,0 +1,59 @@
+---
+title: "How this website was made"
+description: "It's with hugo and the rest of this is probably gonna be short and boring viewer discretion is advised"
+date: 2020-09-30
+---
+## Prelude
+Before we get to how I actually made this site, let's discuss how I failed to make this site (repeatedly). I was inspired to make a simple website/blog by [this blog post](https://k1ss.org/blog/20191004a); I rapidly regretted having that as my main inspiration. I tried setting up scripts for generating pages using the output from pandoc, making the pages look nice and what not, as well as a script for generating an RSS feed, but rapidly realized that all of this was going to be a pain and gave up. Rinse and repeat a couple of times over several months to a year or so.
+
+## Actually making this website
+One day (2 days before this post was written actually) I was browsing reddit when I came across [this comment](https://www.reddit.com/r/linuxquestions/comments/j0wcfj/i_hand_you_a_computer_with_a_minimalistic_install/g6vxxxj/) and realized that I'm an idiot, because static site generators exist and what I had previously been doing was basically writing my own. I may still write my own, but more as a project in its own right than as something contributing to something else. After that, work went relatively smoothly, with me spending the first day learning what the fuck hugo is (no I didn't do my research into static site generators, don't judge me okay) and actually getting into writing all the stuff for the site on the second day. For my theme, as you may already know if you've looked at the footer of this website, I'm using liquorice. I chose it for being simple, very nice for reading text (what I expect will be the main thing done on this website) and because I just liked the overall feel. There were some aspects I felt the need to improve though, such as the homepage being a bit more than just a list of every page on the site, something about lists (I don't remember what), making the subsections of these blogs and other pages jump points in case I write something that would actually benefit from them and not just a short one page piece, and finally making the links look visually distinct from the text beyond simply being bold. There are probably other changes I'm forgetting and I expect I'll tweak this further in the future, but that's all for now. Some of those tweaks will be making the website smaller and more compressed, following the original spirit of that kiss blog, and I can already see some points where I can shave some size off, but that's a story for another time.
+
+## Making the jump points
+Most of those points are pretty easy if you read hugo's documentation and are willing to try random things, but the jump points are a slight challenge and something worth writing about in more detail. First things first, ignoring the oddities caused by all of this being generated from markdown, how do we make a jump point on a webpage? Well, with anchor tags of course!
+```html
+<a name="some_name_or_something_idk">some content doesn't matter</a>
+```
+This is nice: now if somebody goes to example.com/#some_name_or_something_idk their browser will jump them straight down to wherever that anchor tag is. But it doesn't jump to the anchor when we click on it, it simply sets our url, and if we reload it jumps to it. *Editor's note: as I write this I'm unsure if I'm an idiot who didn't need to do the work with the javascript I'm about to talk about, so it may well be possible that it's unnecessary and the above code already does that*. So in order to fix that we'll be adding an event to our anchor element like this.
+```html
+<a name="name" id="name">some text</a>
+<script>
+document.getElementById("name").addEventListener('click', event => {
+    event.target.scrollIntoView();
+});
+</script>
+```
+Technically the event could be added with an onclick parameter on the anchor element in the dom, but once we start dealing with another problem (which I'll get to after explaining this) it'll be way cleaner to just use `addEventListener`. Anyway, the code is relatively self explanatory but I'll explain it anyways. Our element has an id that we attached to it by adding the parameter `id="name"`, and we can get our element in our code by asking the browser for it, using the id as a reference, with the method `document.getElementById`. We could totally just use `document.getElementsByName` and take the first element from that, but I personally chose to add and use the id. With `addEventListener` we can attach a function that'll be called when an event fires, in this case the click event for when the user clicks on the anchor element. The function in question takes the event object given to it and pulls out the dom element that was actually clicked on with the target property. We then scroll to that dom element with scrollIntoView. Now all we need to do is have it so that when we write out our header elements we just surround them with anchor elements and... wait.
+
+## We didn't write those header elements though
+Oh yeah, we didn't write the header elements in the first place; they're written by whatever markdown generator hugo uses. Well, how do we handle this? There may be some way of changing how hugo generates html from the markdown, but that sounds hard, let's just write some javascript.
+```js
+let elems = document.getElementsByTagName("h2");
+for(let elem of elems){
+    elem.outerHTML = `<a id="${elem.innerText}">${elem.outerHTML}</a>`;
+    document.getElementById(elem.innerText).addEventListener('click', event=>{
+ event.target.scrollIntoView();
+ });
+}
+```
+Ok, so you already understand that last bit with the event listener and what not, so allow me to explain the rest. `document.getElementsByTagName` is the same as `document.getElementById` except it gets more than one element and does it by tag name. The for loop iterates through all the elements we just got, and in each iteration we can refer to the element we're on with the variable `elem`. The `outerHTML` property isn't used very often; `innerHTML` and `innerText` are used more because most people only want to control the text inside of a dom element and leave the outer tags untouched, but in this case it's useful because we actually want to add anchor tags around our header tags, which is what we do. Hooray, the problem with the markdown generation not allowing fine enough control is solved. Now about adding that script in to do that work.
+
+## Adding the script in
+You'd think this was simple, and it kinda was, but keep in mind that I'd been using hugo for less than 3 days at this point. Besides that, I also only wanted this script on the single pages (the pages that blogs/articles/whatever are on) and not on the list pages which list out all the pages, because the list pages also use h2 elements and I didn't want those modified by this script. Thankfully this was easy because shortly beforehand I had wanted to do something similar with a stylesheet, but man, adding in that stylesheet had some nuisances. The first thing I found of use for this purpose was [cond](https://gohugo.io/functions/cond/), but I still needed to figure out how to test whether the page was a list or not, so I started looking through hugo's page variables and I found 3 candidates: `.IsNode`, `.IsPage` and `.IsSection`, with the last one just being the negation. I got somewhat frustrated when I found none of these useful for what I was trying to do. Eventually I stumbled upon `.Kind` and bumbled about a bit trying to figure out how to test for a `.Kind` of page until I found [eq](https://gohugo.io/functions/eq/). So great, I can now test whether a page is one I want the stylesheet applied to, so
+```html
+{{ cond (eq .Page.Kind "page") "<link rel=\"stylesheet\" href=\"{{ .Site.BaseURL }}single.css\">" "" }}
+```
+should work, right? Nope, nope, nope, for multiple reasons nope. For one thing, trying to put the base url in with curly brackets didn't work, because apparently hugo doesn't do curly brackets inside curly brackets. Also, when I opened the page in a browser, the link tag and all the tags beneath it (which were placed in the head in the partial template btw) were now in the body??? Also I made it seem like I had solved the cond thing before this came up, but that was happening at this point as well. So first things first: how do we put a variable midway through a tag that we're inserting? Well, apparently the answer to that is [printf](https://gohugo.io/functions/printf/) (I personally would've named it something like format rather than printf, even if it uses something called printf internally, but maybe that's just me), so now we have
+```html
+{{ cond (eq .Page.Kind "page") (printf "<link rel=\"stylesheet\" href=\"%ssingle.css\">" .Site.BaseURL) "" }}
+```
+which is closer, but it still jumps into the body for some reason. That reason, as it turns out, is Hugo ~~being somewhat annoying because it decides not to warn you for failing to be explicit about whether you want a tag as a tag~~ being very cool and safe, escaping all the tags to prevent cross site scripting/injection or whatever other problems, even in code that you're explicitly writing out in a folder for templates. Ugh. Anyways, after running the output of the printf through [safeHTML](https://gohugo.io/functions/safehtml/) we get this final iteration that works how I want it to:
+```html
+
+{{ cond (eq .Page.Kind "page") ( safeHTML (printf "<link rel=\"stylesheet\" href=\"%ssingle.css\">" .Site.BaseURL)) "" }}
+```
+Nothing about this changes for the script that we want only on our blog pages, other than swapping the link tag for a script tag.
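+
+Roughly like so (the script filename here is just a stand-in for whatever you call yours):
+```html
+{{ cond (eq .Page.Kind "page") ( safeHTML (printf "<script src=\"%sanchors.js\"></script>" .Site.BaseURL)) "" }}
+```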
+
+## Conclusion
+This was fun and I'm glad I found out about the existence of [hugo](https://gohugo.io/). I'll probably update this site in the future and this blog will probably get outdated, but unless the site ends up looking almost completely different, I run into a very annoying or interesting problem, or I completely remake the site for some reason or another, I probably won't update this blog or release other blogs about changes I make to this site (and knowing me, even when those things come up I probably won't write about them). One of the things I want to change is the links to different platforms/feeds/whatever, but based on the efforts already made I think I'll save that for another time.
diff --git a/posts/invidious_and_goals.md b/posts/invidious_and_goals.md
new file mode 100644
index 0000000..74a18e2
--- /dev/null
+++ b/posts/invidious_and_goals.md
@@ -0,0 +1,119 @@
+---
+
+title: "Yeeting the distractions and setting goals"
+
+description: "So recently I've begun work on trying to remove distractions so I'm more likely to work on productive stuff and this blog is effectively a lightning round of things I did to accomplish that"
+
+date: 2022-09-01
+
+draft: false
+
+---
+
+So recently I’ve engaged in a renewed push to be productive somewhat consistently, and this time it just may work (unlike the 3-5 other times). With this push, I’ve decided to begin moving off of youtube by cutting down to just my subscriptions. To make that happen I implemented a few small projects.
+
+## Getting the feeds(but not really)
+So to enforce that, my initial plan was to only watch content I saw through an RSS feed, preferably via mpv. To do that I needed a list of channel ids corresponding to the youtube channels I was subscribed to. I could’ve gone through and manually gotten each channel id through youtube’s web interface… But that would take forever, and ain’t nobody got time for that manual labor when you can spend twice as long automating (although doing that automation gave me experience that may save me time now). So I looked into google’s [Youtube Api](https://developers.google.com/youtube/v3) and found a way to get a list of [subscriptions](https://developers.google.com/youtube/v3/docs/subscriptions/list). But to make use of that I’d need to go and learn how to do stuff with OAuth. Thankfully, after faffing about a bit I realized that there’s an [npm package](https://www.npmjs.com/package/googleapis) that does a lot of that work for me. Anyways with that, it was time to ~~steal example code~~ write software. Oh hey, where did all that preexisting code come from?
+
+## Oh the callbacks
+
+Well, that code came from [here](https://developers.google.com/youtube/v3/quickstart/nodejs) and oh my god do they use callbacks. Personally, I think callbacks suck and are the worst way of handling asynchronous tasks, so I did a decent amount of refactoring to convert things to use promises. However, much to my chagrin, I found that I couldn’t use async/await, apparently because the npm package didn’t return normal promises. Or maybe something else was happening; I’m not entirely sure looking over the code now with intellisense, but trust me, when I tried back when I was figuring this out it didn’t work and it was annoying. That said, I also don’t know why I couldn’t/wouldn’t convert from the weird promises to normal promises, given that’s relatively easy with js’s promise api, but I digress.
+
+
+## Getting the subs
+
+I don’t remember if I implemented the code that got my subscriptions concurrently with the callback refactor or if I did it after. In any case, all of the code to get the subscriptions is 2 relatively small functions.
+
+```js
+function getSubscriptions(auth, page) {
+ var service = google.youtube("v3");
+ return service.subscriptions.list({
+ mine:true,
+ auth,
+ maxResults:50,
+ part:"snippet",
+ pageToken: page ? page:""
+ });
+}
+
+function handlePage(authority,response){
+ let items = response.data.items;
+ for(let item of items){
+ console.log(item.snippet.title+" ".repeat(60-item.snippet.title.length)+item.snippet.resourceId.channelId);
+ }
+ if(response.data.nextPageToken){
+ getSubscriptions(authority, response.data.nextPageToken)
+ .then(handlePage.bind(null,authority))
+ }
+}
+```
+Yeah, pretty simple but allow me to explain what bits of these 2 functions are doing and why.
+```js
+var service = google.youtube("v3");
+//...
+let items = response.data.items;
+```
+Both of these are done primarily for convenience so I'm not writing the same thing over and over again. If you noticed that the service one uses the inferior var instead of let, it's because I was lazy and didn't change that bit from the example code. And now that I'm done with the bit, you can find that example code [here](https://developers.google.com/youtube/v3/quickstart/nodejs).
+```js
+ return service.subscriptions.list({
+ mine:true,
+ auth,
+ maxResults:50,
+ part:"snippet",
+ pageToken: page ? page:""
+ });
+```
+The only other bit of code in the getSubscriptions function calls the method in the npm package that requests the subscriptions of the user who provided OAuth authorization, 50 results at a time, specifically asking for the "snippet" category of data. As for the pageToken bit: if `page` is null/undefined we pass an empty string so we get the first page, and if it's not we pass it along so we get the next page.
+```js
+for(let item of items){
+ console.log(item.snippet.title+" ".repeat(60-item.snippet.title.length)+item.snippet.resourceId.channelId);
+}
+```
+This bit of code just outputs each of the fetched channels' names and their id such that all the ids visually align, for the part of my brain that wants everything to look neat. The reason I wanted things in this format was that I wanted to manually filter out the channels I didn't watch, so having the channel name next to the id would make it faster to get through the obvious ones. The reason I was console.logging instead of writing to a file via the fs module was that I was lazy and decided to just have the information go to stdout and get redirected to a file via the > operator in bash.
+```js
+if(response.data.nextPageToken){
+ getSubscriptions(authority, response.data.nextPageToken)
+ .then(handlePage.bind(null,authority))
+}
+```
+This last bit of code checks whether there's a token for the next page of subscriptions, and if there is it fetches that page by passing the nextPageToken to getSubscriptions, and once the new response pops up it sends it to handlePage. More specifically, I use the bind method of js functions to make a partial function, which is to say a function that already has one of its arguments passed in. Until somewhat recently I wasn't aware you could use bind like that, but one time when I was complaining in a discord server about js not having a built-in for easily constructing partial functions, like Python's functools.partial or Haskell's currying built into the language, a friend pointed out that the bind method can be used for that. The more you know I guess.
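+
+Since the bind trick was new to me, here's a tiny standalone sketch of it (the function names are made up for the example):
+```js
+// bind's first argument sets `this` (irrelevant here, so null);
+// every further argument gets pre-applied, giving a partial function
+function add(a, b) { return a + b; }
+
+const addFive = add.bind(null, 5);
+console.log(addFive(3)); // 8
+```
+Which is exactly what `handlePage.bind(null, authority)` is doing: pinning down the authority argument so the promise only has to supply the response.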
+
+## Why I specified but not really for getting the feeds
+
+As it turns out, what I wanted could be better accomplished by self-hosting an [Invidious](https://github.com/iv-org/invidious) instance. However, the weird format I had my subs in wouldn't work for that, and I didn't want to redo filtering out the channels I don't watch, so I decided to make a script that would convert them into an opml file, which is one of the file types Invidious can import. To do that I wrote a rust script.
+```rs
+use std::fs::File;
+use std::io::Read;
+pub fn main() -> std::io::Result<()> {
+ const START:&str = "<opml version=\"1.1\"><body>";
+ let mut file = File::open("channels")?;
+ let mut buf:String = String::new();
+ file.read_to_string(&mut buf)?;
+ let middle = buf.split("\n")
+ .filter(|v|v!=&"")
+ .map(gen_middle)
+ .collect::<Vec<String>>()
+ .join("\n");
+
+ const END:&str = "</body></opml>";
+ print!("{}\n{}\n{}",START,middle,END);
+
+ Ok(())
+}
+
+fn gen_middle(line:&str)->String{
+ let tokens = line.split(" ").filter(|v|v!=&"").map(String::from).collect::<Vec<String>>();
+ let name = tokens[0..(tokens.len()-1)].join(" ");
+ let id = tokens.last().unwrap();
+ format!("<outline text=\"{}\" title=\"{}\" xmlUrl=\"https://www.youtube.com/feeds/videos.xml?channel_id={}\"/>",name,name,id)
+}
+```
+TLDR on that whole bunch of code: I have a constant string as the start of the file, which gets output. Then there's a middle that's generated from the list of channels in that weird format the previous script generated, such that every entry gets put into the template `<outline text="{name}" title="{name}" xmlUrl="https://www.youtube.com/feeds/videos.xml?channel_id={id}"/>`, and then all the channel entries are joined together. After that, I put a constant value at the end to close everything up. Developing that script there was a bit of a hiccup where Invidious wouldn't take the file because each channel name only had its first word, due to an initial mistake which I eventually fixed.
+
+### Wait what about hosting an Invidious instance?
+
+Oh yeah, I should probably summarize that process. I tweaked the config file in a couple of places so it fit my particular use case, added entries to my /etc/sites-enabled/ and my DNS, then ran Certbot. After that, a simple `docker-compose up -d` with the provided docker-compose.yml file and it worked, without anything worth commenting on happening.
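+
+For reference, the reverse proxy entry is nothing special. A rough sketch of what such an entry looks like for nginx (the domain is a placeholder, nginx itself is an assumption rather than a copy of my config, and the provided docker-compose publishes invidious on localhost port 3000 by default; certbot rewrites this for https afterwards):
+```nginx
+server {
+    listen 80;
+    server_name invidious.example.com;
+
+    location / {
+        # forward everything to the invidious container
+        proxy_pass http://127.0.0.1:3000;
+        proxy_set_header Host $host;
+        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+    }
+}
+```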
+
+## Conclusion
+
+So that was a lot of words to describe something that only took maybe 6 hours altogether over a couple of days. That said, I hope to do a bit more to keep myself on task. Specifically, I want to set up a discord bot that will dm me the tasks I have in google tasks (which is why I mentioned that the youtube api automation may end up saving me time) and notion every day, plus my larger goals each week. So that'll be an interesting short little project when I find the time. Then once that's done I can finally return to my little overengineered todo list [Pogo](/blog/pogo_so_far), where I scrapped the very small amount of code I wrote following the spec from a previous blog and will probably implement it in Erlang instead. Though before any of that I need to make a blog post ranting about serenity and give in to implementing a bad fix to a problem their api caused. But after all that I can continue work on that libvirt util in Lisp. Anyways, with all of that rambling out of the way, I wish the reader of this a nice day.
diff --git a/posts/micro_blogs.md b/posts/micro_blogs.md
new file mode 100644
index 0000000..991df03
--- /dev/null
+++ b/posts/micro_blogs.md
@@ -0,0 +1,43 @@
+---
+
+title: "Micro blogs (1)"
+
+description: "a bunch of thoughts, ideas and whatnot that aren't worthy of full blogs but that I still want to write down"
+
+date: "2024-01-30"
+
+draft: false
+
+---
+
+# You know the drill
+
+Same deal as the [speedrun blog](https://pagwin.xyz/blog/speedrun/) putting down a bunch of ideas that I want out of my head but aren't worthy of a full blog.
+
+## The entire set plus one more in the set
+
+Let me start off where I started off[[1]](#1): let's say that someone, who we shall name Steve, anonymously puts a bounty on themselves described as "$1 in addition to whatever money Steve has on their person". How much money should be paid out to whoever collects the bounty on Steve, and where would it come from? Well, in this case the way that reality works and the set of actors involved constrains us to the answer of "whatever money Steve has on their person" and no more. This answer would correspond to addition being equivalent to the set union operator. That does work, but with slightly different context it seems like the answer would be different. For example, pretend that a god came down and said "I am going to transfer ten humans in addition to the entire human population to a habitable planet in a different galaxy". In this case, specifying ten humans in addition implies that we're transferring a number of humans greater than just the current human population, ten more to be exact, because otherwise why specify those ten humans? The problem that I have is how many humans come out on the other side. "Why not just the current population plus ten?" Well, because transfer implies they already exist and aren't being created in that moment, so the number should be the same, and also my brain thinks there's an interpretation or slightly different wording where you could argue there'll be infinitely many humans. I'm pretty sure this is a [type 5 paradox](https://youtu.be/ppX7Qjbe6BM?t=2035).
+
+## Excel with types/static analysis?
+
+So I think I started thinking about this when I rewatched [this Matt Parker video](https://www.youtube.com/watch?v=yb2zkxHDfUE). I'm wondering if there's a niche for some spreadsheet software that's intended to require the user to specify types for cells, or a full sql-esque table or something, in addition to doing some nice lints/static analysis like you would see in software development, to minimize errors. My mind has also feature-creeped this idea out a bit to have this program capable of exporting some file package/sql database plus an executable, so you can have something maintaining the structure of the data while other programs do automated stuff, in the hopes that Ludicity doesn't come in for a [drop kick](https://ludic.mataroa.blog/blog/i-will-fucking-dropkick-you-if-you-use-that-spreadsheet/).
+
+
+## PSA please check to make sure browser zoom works okay on your site [[2]](#2)
+
+And also phones and screen readers, but browser zoom is the one that affected me as I'm writing this (over 2 months after the prior 2 sections). In particular, if you have a blog with some text content that's centered with whitespace as the margins and I zoom in, I don't just want to see the text get bigger; I also want the margins to shrink so the text has more room. The reason being so that zooming isn't proportional to additional scrolling. Thanks.
+
+## Wait mobile vs desktop is just handled via a CSS if statement?
+
+I was exploring the CSS to understand how the margins shrank on my site but not on the offending site, and I noticed on both sites that the css styling jumped a bit as I shrank and grew the page width, only to realize the reason was some [CSS if statement(s)](https://css-tricks.com/a-complete-guide-to-css-media-queries/), aka media queries. I don't know why I previously thought you could deal with this in some other way but TIL.
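+
+For reference, the sort of thing I mean looks like this (the selector and breakpoint are made up for the example):
+```css
+/* wide viewports: text column with big whitespace margins */
+main { margin: 0 20%; }
+
+/* narrow viewports, which includes zoomed-in ones since zooming
+   shrinks the effective width: give the text the room back */
+@media (max-width: 800px) {
+    main { margin: 0 1em; }
+}
+```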
+
+
+## How long should I make these?
+
+I've never thought about this before but how long should I make blogs? How long should I make microblog dumps? I don't know I should probably think about that... later I will think about that later.
+
+## Footnotes
+
+1 - I started off in a fan-fic of the Stormlight Archive, where a character I inserted into the story was putting up a bounty for themselves by promising "a shardblade in addition to any shardblades or shardplate $character has on them" rather than money
+
+2 - If you noticed that the blog example was oddly specific, that's because this post was incited by a blog with whitespace margins that didn't shrink when I zoomed. Also, at time of writing, I checked to make sure my own site adheres to this.
diff --git a/posts/micro_blogs_2.md b/posts/micro_blogs_2.md
new file mode 100644
index 0000000..8606031
--- /dev/null
+++ b/posts/micro_blogs_2.md
@@ -0,0 +1,15 @@
+---
+
+title: "Micro blogs (2)"
+
+description: "a bunch of thoughts, ideas and whatnot that aren't worthy of full blogs but that I still want to write down"
+
+date: "2024-01-30"
+
+draft: true
+
+---
+
+# I think react native or expo is haunted
+
+
diff --git a/posts/mineflayer_why.md b/posts/mineflayer_why.md
new file mode 100644
index 0000000..559f0a1
--- /dev/null
+++ b/posts/mineflayer_why.md
@@ -0,0 +1,39 @@
+
+---
+title: "Mineflayer pains"
+description: "Describing all the pains with mineflayer I've dealt with so far"
+date: 2021-01-10
+---
+## Preface
+Given I'm gonna be complaining about [mineflayer](https://github.com/PrismarineJS/mineflayer), you may be wondering why I don't roll with something else. The problem with that is that there is nothing else to my knowledge, or at least nothing else high level, not even in other languages. There probably is and I just didn't look hard enough, but oh well. Also, I would've built up my own thing from scratch, but reverse engineering/reimplementing a network protocol without official docs (and without even unofficial docs if you're trying to do stuff with older versions) is kinda hard; if you wanna see how far I got, you can look at the [repo with my work](https://github.com/Pagwin-Fedora/McProtocolLearning). And I refused to work with the slightly lower level (relative to mineflayer) [node minecraft protocol](https://github.com/PrismarineJS/node-minecraft-protocol) (made by the same people) because if I'm using someone else's work I may as well go all the way to the highest level.
+## Why?
+Oh yeah, I decided to make a minecraft bot because it seemed fun and it seems like there's all sorts of room to implement cool stuff, though the specifics of what I'm making will probably be covered in another blog post. Anyways, onto the problems with mineflayer.
+## constructor isn't used to create an object
+Specifically, the problem is that when you want to create a new [Bot](https://github.com/PrismarineJS/mineflayer/blob/master/docs/api.md#bot) instance you need to call the function `mineflayer.createBot`. Why have a function completely detached from the object that makes a new instance of that object when you can just make it a constructor? I have no idea. You may be wondering "what's the big deal, it's a slightly different way to instantiate the object?" The problem is that if you try to make a subclass of `Bot` to add methods for your own purposes, you have to do janky stuff to work around the constructor not being used. In my case I did
+```typescript
+class taskBot extends mineflayer.Bot{
+ ...
+ constructor(options:mineflayer.BotOptions){
+ const bot = mineflayer.createBot(options);
+ super()
+ Object.assign(this,bot);
+ ...
+ }
+ ...
+}
+```
+By the way, I'm using typescript in case you can't tell, which will lead to another couple of headaches I'll get into soon. But yeah, using `Object.assign` is absolutely not ideal at all (and as of writing I have no idea if this hack leads to problems I haven't encountered yet). As an aside, there's no reason this problem couldn't be fixed, at least from the glances I've given to mineflayer's source code.
+## different ways of having a point in 3d space passed to different functions
+I hate this with a fiery passion and am very happy that it's super easy to deal with. Okay, so you may be wondering "what's the big fuss?" and I'll tell you: different functions will take either a Vec3 (I have a small gripe with Vec3 as well but that's not worth making a fuss about) or 3 separate arguments specifying the x, y and z coordinates. Having these 2 approaches means there isn't one correct form to store positions in within your program, because you'll have to deal with at least 1 form that isn't the form you have them stored in. Thankfully this is an easy fix: just store each position as an array of 3 numbers, and when you need a Vec3 or need to pass it into a function that takes 3 args, use the spread operator on the arguments to the function or the Vec3 constructor, as sketched below.
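+
+To sketch what I mean (the teleport-ish function is made up for the example; `Vec3` is the class from the [vec3 package](https://github.com/PrismarineJS/node-vec3)):
+```typescript
+import { Vec3 } from "vec3";
+
+// a made up function standing in for any api that wants separate coordinates
+function fakeTeleport(x: number, y: number, z: number): void {
+    console.log(`teleporting to ${x} ${y} ${z}`);
+}
+
+// store every position in one canonical form: a 3 number array
+const pos: [number, number, number] = [42, 64, -7];
+
+// then spread it into whichever form a given function wants
+const asVec = new Vec3(...pos); // for the functions that take a Vec3
+fakeTeleport(...pos);           // for the functions that take x, y, z
+console.log(asVec);
+```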
+## typescript hell
+As I've already mentioned, I'm using typescript for my own purposes. However, with typescript there are a couple of problems that come up that are annoying to deal with: plugins and minecraft-data.
+## Plugins
+My gripe with plugins can be further subdivided into how plugins add attributes to the `Bot` instance and how they have the `Bot` instance emit new events. The first problem is that the `Bot` type has a set of attributes that typescript knows about, but when you load a plugin on the Bot instance to do something like add pathfinding abilities, a new attribute is added to the bot that has all the new abilities within it, and typescript doesn't know that the `Bot` instance has that new attribute. The solution I found for this, which also removed a bit of complexity from the code, was to make a subclass of Bot and add in the plugin attributes as needed, which led to the constructor problem already described above. My second problem, with the events, is that typescript also keeps track of what events an event emitter will emit, so if you try to listen for an event it won't emit it'll give you an error. But again, when you run a `Bot` instance through a plugin its type doesn't change, so it doesn't get any of its new events. Sadly the solution for this required me to commit what's effectively a typescript sin.
+```typescript
+(this as any).once(...)
+```
+I think I heard an angel die. Of course, I personally don't blame these typescript problems on the developers because 1) they wrote this in javascript and mistakes can happen and 2) I don't know how I'd solve them, so yeah.
+## minecraft-data
+This one's short: basically, there isn't an easy way to get the type of the object you get when you provide your minecraft version to the `minecraft-data` module. There's probably a way (in fact I'm pretty confident that I'm being an idiot here) but I can't be bothered finding it.
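+
+If I had to guess at the way, deriving it from the function's return type might do it (untested, and assuming the module's default export is the version-taking function):
+```typescript
+import minecraftData from "minecraft-data";
+
+// let typescript compute the per-version data object's type for us
+type McData = ReturnType<typeof minecraftData>;
+
+const data: McData = minecraftData("1.16.5");
+console.log(data.version);
+```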
+## conclusion
+Mineflayer devs, if you're reading this for whatever reason, please fix the problems relating to the constructor and the inconsistent arguments to functions that take points. Although for the latter I understand if you can't make it all consistent, because it would in all likelihood be a breaking API change. Overall I think the api is alright, and I don't have enough willpower or brain cells to remake it myself, but I would certainly appreciate these pain points being addressed if they can be.
diff --git a/posts/pogo_again.md b/posts/pogo_again.md
new file mode 100644
index 0000000..07ff363
--- /dev/null
+++ b/posts/pogo_again.md
@@ -0,0 +1,111 @@
+---
+
+title: "Pogo again"
+
+description: "I swear I'm going to finish it this time (I'd borked the format for this post in hugo previously so sorry if you noticed)"
+
+date: 2023-01-01
+
+draft: true
+
+---
+
+# Another blog post that mentions Pogo
+
+Welcome to another blog about pogo, aka that todo list that I way over-engineered. Anyways, I started over again, but this time I swear I'm gonna finish it; I promise I won't throw it out again. Oh, what have I done so far? I have the database set up and some methods for interacting with the data inside, which I'm 100% going to throw out in favor of raw sql queries... Wait, no, I promise I'm doing good this time. The only reason I'm probably not gonna use existing work is because I'm going to make the api able to give back data based on what's being requested and also have things be mostly stateless, because that allows scaling and... Okay, I might still be over-engineering this, but at least the way I'm over-engineering it is by making it scale instead of making it a mess that's annoying to work on. Anyways, here's some explanation of what I've done so far.
+
+## Initial setup
+
+So to start with I made a struct corresponding to a task in this todolist app which looked like this
+```rs
+#[derive(Clone)]
+pub struct TaskV1{
+ title:String,
+ body:String,
+ connected:Vec<Uuid>,
+ parents: Vec<Weak<TaskV1>>,
+ children: Vec<Rc<TaskV1>>
+}
+```
+however, after some work, at time of writing it's looking like the task struct will either look like
+```rs
+#[derive(Clone)]
+pub struct TaskV1{
+ id: Uuid,
+ title:String,
+ body:String,
+ progress:f32,
+ login:String
+}
+```
+or
+```rs
+```
+heh, yeah, as I said I'm considering getting rid of the methods involving tasks, which would make a task struct (at least a general task struct) redundant, but if you pressed me I'd tell you that
+```rs
+#[derive(Serialize,Deserialize)]
+struct TaskSerial{
+ title: Option<String>,
+ body: Option<String>,
+ progress: Option<f32>,
+ children: Option<Vec<Uuid>>,
+ parents: Option<Vec<Uuid>>,
+ media: Option<Vec<Uuid>>
+}
+```
+would be the task struct.
+
+## But why?
+
+Why would I remove some attributes like that or stick all of them into Options? Why are there a bunch of Uuids now? Also, why do tasks have parents and children? These are good questions. To answer the first two I'm gonna spend a good chunk of this blog explaining what I've actually done, but lemme answer the last question real quick.
+
+## Task organization as a pseudo-tree
+
+The reason nodes can have parents and children is twofold: first, tasks can have subtasks, and second, this allows me to avoid having a special category type which has tasks as children. This simplifies things from a code perspective and avoids duplicating functionality between categories and tasks; the only bit of redundancy is the fact that a category having progress is nonsensical, but that's something I can figure out when I get to building a client. Anyways, onto why tasks underwent a bunch of change.
+
+## Abstracting task encoding away
+
+Oh, you think I started with DB stuff? No no no, I started by setting some stuff up to make encoding and decoding tasks as seamless as possible. Namely a versioning enum (it's boring, moving on) and a Trait which was set up like this.
+```rs
+/// Trait that any method of encoding and decoding tasks needs to implement
+#[async_trait]
+pub trait TaskEncoder{
+ /// The type that can be gotten from a call to either provide_identifiers or
+ /// encode_task; a value gotten that way should be usable with decode_task
+ /// to retrieve the original task. It must be serializable with serde because it's the value
+ /// passed around when working with tasks, potentially onto disk or over the network
+ //Specifying DeserializeOwned may be a problem in the future if I need to deal with types with
+ //lifetimes but until then this is good
+ type Identifier:serde::Serialize + serde::de::DeserializeOwned;
+ type EncodingError;
+ type DecodingError;
+ type IdentityFetchError;
+ async fn encode_task(&mut self, task:TaskVersioning, login:&str)->Result<Self::Identifier,Self::EncodingError>;
+ async fn decode_task(&mut self,id:Self::Identifier, login:&str)->Result