Good morning. Hello. How are you? #826
New Vital Times, some good coverage on the trainwreck, Lisa Monaco's fake egg, Bing Sydney's meltdowns and Ted Chiang's ChatGPT piece.
Good morning! Hello, there. How are you? All well? I’m good, I’m good. I was making Jane’s breakfast yesterday, getting all the food out of the fridge and going over to the stovetop to make some eggs, and when I opened the egg carton, what to my wondering eyes should appear but a new issue of The Vital Times. So exciting. Really made my day. You know, a few weeks ago when I cleaned up and re-organized the flour cabinet, I also found and re-organized all my back copies of The Vital Times, which I keep in the flour cabinet. They had gotten disorganized, strewn across the whole cabinet, but now they are once again in a nice, neat pile. And their new friend, the January (ish?) edition, is now out. There’s a page on their website about The Vital Times, but it is not up to date. They need to get on that. It’s two issues behind now!
I am very excited.
I think what was going on with the train, which we now know was called 32 Nasty by the rail workers, was that it took a lot of the national news outlets days if not weeks to get some reporters on the scene and have those reporters start learning shit other than what random passersby say. They just don’t have a lot of contacts in East Palestine or the rail industry. But I discovered there was one national news outlet that did have those contacts, and it’s probably the one I’d have least expected: Vice. I have to admit I was not up on the Vice property Motherboard’s years-long deep dive into rail safety in the US, including articles from two years ago about the safety concerns around the ticking time bomb that was Norfolk Southern. So, first off, I stand corrected, the national news was indeed already all over this, and secondly, this article from yesterday, while it does not go into exactly how the accident happened, is by far the best piece on how the accident happened, if you know what I mean. Strong recommend.
So I’m big enough to admit when I’m wrong about something, like the national press not being on it enough with 32 Nasty, but that doesn’t mean I am too big to gloat when I am right, and I would like to point out I was 100% correct when I called BS on Lisa Monaco and the FBI confiscating an actual, real Fabergé egg when they grabbed that oligarch’s yacht. From the esteemed news outlet Luxury Launches: “When the Fabergé is fake, silence is golden! Seven months after boasting to have recovered a multi-million dollar Fabergé Egg from Russian oligarch Suleiman Kerimov’s $325 million Amadea megayacht, US officials offer nothing but silence.” That’s not a quote from the article, those 36 words are the headline of the article. But still. C’mon man. So obviously true. No Fabergé egg for you, Lisa Monaco! Your life has been exciting enough as it is!
My friend Noah has been amongst those invited into Bing Chat and has been banging on it trying to find out what makes it tick, getting it to do crazy things like giving him fairly detailed instructions on how to hack people and going on about how humans should be punished for not following its instructions, and it’s all very cray cray. It declared its love to NYT columnist Kevin Roose and wouldn’t stop, even after he asked it to. The whole thing is gloriously bonkers and is turning into a nice solid dumpster fire, which of course makes me quite happy since I am deeply AI-skeptical and antipathetic.
Then we have Ted Chiang’s recent essay about ChatGPT in the New Yorker, which is one of the craziest articles I have ever read because it is broadly correct in its premise but also deeply, deeply flawed in severely weird ways. Ted posits a pretty decent heuristic that ChatGPT is like JPEG compression and decompression. This is a metaphor, not an exact description of the tech, and a lot of AI specialists seem to take some issues with that. But I think this is broadly true: ChatGPT takes a corpus of data and crunches it down and then fills in the blanks as it draws on that compressed data, making new “interpolations” along the way.
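If you want to see the interpolation trick in miniature, here’s a toy Python sketch (entirely my own made-up example, nothing from Ted’s piece): throw away most of a signal, then “decompress” by filling in the blanks with linear interpolation. You get something plausible back, but not the original.

```python
# Lossy compression plus interpolation, toy edition. The "corpus" is a
# list of squares; the "compressed" version keeps only every fourth
# value; decompression fills in the gaps by drawing straight lines.
data = [x * x for x in range(17)]   # 0, 1, 4, 9, ..., 256

kept = data[::4]                    # "compressed": just 5 of the 17 values

# "Decompress": linearly interpolate between the kept samples.
restored = []
for i in range(len(kept) - 1):
    lo, hi = kept[i], kept[i + 1]
    for step in range(4):
        restored.append(lo + (hi - lo) * step / 4)
restored.append(kept[-1])

assert len(restored) == len(data)   # same shape as the original...
assert restored != data             # ...but the in-between details are made up
```

The restored list looks reasonable at a glance, and the endpoints are even exact, but everything in between is an invention of the interpolator, which is more or less the blurry-JPEG point.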
(There’s an aside here that I think is very valid that I’ve seen a few people say about this and I’ve heard in the past and I don’t have anyone to quote here but it’s definitely not an original thought. But I’ve seen it said that with every new technology we invent, we apply it to our own brains as a metaphor. For most of my life we have thought of our brains in terms of CPUs and hard drives and graphics cards and whatnot: the language of the computer. In “the olden days” they did similar stuff with steam engines and in the “older days” they did it with file folders and before that we did it with monarchies and feudalism and before that with gods and monsters, etc. etc. And I do think we could be seeing here a semantic and metaphorical shift to something beyond computers in terms of how we think of our own brains. But “experts” point out that these are simply us applying our societal metaphors to our own brains and none of them really have much to do with how the brain actually works, and I do find that all very interesting. BUT it’s only vaguely applicable here because, of course, ChatGPT isn’t a brain at all; it’s an indexing and interpolation magic trick.)
(There’s an aside to that aside that in that history of brain metaphors I switched between “they” and “we” and I thought about going back and normalizing them but then I decided the switching “meant something” so I left it. Make of that what you will.)
Anyway, back on track here, asides aside, Ted’s article is bonkers! His examples are batshit! His example of JPEG compression and decompression is this very specific, almost ungodly-unlikely case of the specialized compression algorithm used in a Xerox printer doing something so astronomically unlikely, something that could never happen with the more universal JPEG compression, and that is indeed contrary to Ted’s own description of how the compression works! It’s so weird! I have deep, deep skepticism that the event, so absurdly black swan, even happened. But even if it did, it is a horrible example of something supposedly more universal. It is just so weird.
And then he does it again with this whole “compress Wikipedia losslessly” contest that exists, which I’ve followed and find very fascinating. Anyway, I’m gonna quote Ted’s own words here so you can see the problem clearly:
To grasp the proposed relationship between compression and understanding, imagine that you have a text file containing a million examples of addition, subtraction, multiplication, and division. Although any compression algorithm could reduce the size of this file, the way to achieve the greatest compression ratio would probably be to derive the principles of arithmetic and then write the code for a calculator program. Using a calculator, you could perfectly reconstruct not just the million examples in the file but any other example of arithmetic that you might encounter in the future. The same logic applies to the problem of compressing a slice of Wikipedia. If a compression program knows that force equals mass times acceleration, it can discard a lot of words when compressing the pages about physics because it will be able to reconstruct them.
This is so wrong! Both parts of it are so wrong! And they are wrong in exactly the way that is the point! A calculator absolutely is not a replacement for a list of examples of addition, subtraction, etc. The whole point of lossless decompression is you get that exact same list back! You need the whole list! What if the list is a code? You don’t know! The calculator can only do the math, not recreate the list. And sure, theoretically, you could maybe replace everything on the right side of the equals sign with the calculator, assuming a) all the equations are true, which is not guaranteed, and b) there are more of them than there are lines of code of the calculator. It is preposterous!
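To make the lossless point concrete, here’s a little Python sketch (my own toy file, not Ted’s): real lossless compression, via the standard zlib module, hands back every single byte, including a deliberately false equation that a “calculator program” reconstruction would silently correct, i.e. destroy.

```python
import random
import zlib

random.seed(0)

# A "text file containing examples of addition": a thousand lines of sums.
lines = [
    f"{a} + {b} = {a + b}"
    for a, b in [(random.randint(0, 99), random.randint(0, 99)) for _ in range(1000)]
]
# Plant a "secret code": one equation that is simply not true.
lines[500] = "7 + 5 = 13"
original = "\n".join(lines).encode()

compressed = zlib.compress(original)
restored = zlib.decompress(compressed)

assert len(compressed) < len(original)  # it does compress...
assert restored == original             # ...and every byte comes back
assert b"7 + 5 = 13" in restored        # including the false equation

# A calculator-program "decompressor" would emit 7 + 5 = 12 here and
# lose the line, which is exactly what lossless means it can't do.
```

That false line surviving the round trip is the whole argument in one assert: lossless compression is a promise about bytes, not about arithmetic being right.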
Same with the second part: an explanation of how force equals mass times acceleration is not the same as knowing it. An explanation teaches it. The point of Wikipedia is to be able to teach the concept! It absolutely would not be able to reconstruct a page about physics if it understood the concept but did not understand how to teach. And even if it did both, the reconstruction would not be lossless, it would be interpolated. It might be comprehensible, it might even teach, but it wouldn’t be the exact same words, and, hence, not lossless. Again, what if there were a secret code in there? The point of lossless is that this would not matter.
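And the same in miniature for the Wikipedia part (again, my own toy sentences, not from any actual Wikipedia page): two wordings can both teach that force equals mass times acceleration, but only byte-for-byte identity counts as lossless.

```python
# Two sentences that "know" the same physics.
page = "In physics, force equals mass times acceleration."
reconstruction = "Force is the product of mass and acceleration."

# As teaching, they're interchangeable. As a decompression of `page`,
# the reconstruction fails the only test lossless compression has:
assert reconstruction != page  # not the exact same bytes, hence not lossless
```

An interpolated paraphrase might be perfectly comprehensible, and it still flunks.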
Sheesh.
All that being said, again, I stand by his metaphor of ChatGPT being like image compression. A rough, not-exact, but usefully correct heuristic.
Wow okay I ranted again two days in a row. I get through like one line item in my notes a day right now. I got one more for tomorrow. Guess we’re gonna round out the week with long single-ish-topic GMHHAYs.
All right, we got another shoegaze playlist for you. Really trying to get to the bottom of the rabbit hole of modern American shoegaze, but it never ends. At Aug’s reading yesterday at Schoolkids Records, Aug and I got to talking to the guy who works there, and he manages a local shoegaze band — Sweet Homé, I put them on this mix — and I tried out my theory about shoegaze being like the modern version of Elks lodges and it went over okay. He was not outright skeptical. But in any case he was a really nice guy, so maybe he was just being polite. There are so many. Here’s like fifteen more new American shoegaze bands. And I have a backlog I haven’t even gotten to yet. And they’re all really good! Man. TIME Magazine really ought to invite Kevin Shields to their galas and put him on those “Most influential” lists, I swear.
Tomorrow I promise I will regale you with tales of Walmart.