Because of my diary method of conlanging, I regularly have to create fairly mundane vocabulary for things I haven't yet talked about. That can sometimes lead to unexpected places. I recently wanted a phrase for "bird bath" for Kílta. I decided I was probably going to get that word out of something related to a word for "pond," in particular an artificial sort of pond/lake thing. But I didn't even have a word for that ready to go. So, I created a word for "pond" — paipa — and then I wanted to talk about a beaver pond as one of the examples. Naturally, I didn't already have a word for beaver. I'm even less likely to talk about beavers than I am about bird baths, but here I am today, with a word for beaver — kiulon (which means something like "tree-associated red-brown guy") — and I still haven't gotten to "bird bath." Maybe today.
This sort of fairly routine vocabulary is where most of my creative time goes. Sometimes, though, concepts without simple English words get my attention. I've documented a few of them in Segments issue #4. Recently I have been exasperated by the lack of good words to describe things surrounding generative AI, in particular the "language" models, like ChatGPT. The core aspect of these that is so vexing is that they are bullshit machines. I am using "bullshit" here in Harry G. Frankfurt's sense: speech intended to persuade without regard for truth. That is, these models have no meaningful relationship with questions of truth or falsity. ChatGPT just produces a stream of characters that is statistically likely given the prompt. Similar models are known to not always cope well with negation, for example, which is a fairly important thing to get right if you want to talk about the real world reliably. I will end my rant on these models here, and rely on what I've said so far to set the stage for some recent new vocabulary.
As you might imagine, I've been racking my brains to come up with ways to talk about "true" vs. "false" in a situation where those concepts aren't particularly related to the real world, but are located within the weight matrices of these AI models. It's always nice if new words can be developed on existing models or metaphors, and, luckily, Kílta already has the word húrusakin "theory/worldview-internal," an adjective for words, ideas, etc., which won't necessarily make sense to people unfamiliar with a particular theory or worldview. This is a compound of húr "border, boundary" and saka "idea, thought, notion." This immediately suggested húrutásin and húrusikkarin for contextually bounded ideas of "true" and "false," from the already existing tásin "true" and ikkarin "false" (itself derived from ikko "to lack").
While the starting place for these words is my vexations when trying to talk about ChatGPT and related tools, these words, húrutásin and húrusikkarin, have application in other areas. We can use them to talk about true vs. false in computer programs, for example, or in any other sort of modeling technology or methodology. Something might be true in one system of formal logic, but not in another. Similarly, some more totalizing worldviews might have judgements about true and false that are at variance with the worldviews of others. These words will let me talk about truth judgements in particular systems that may or may not correspond to other systems, or my own ideas about the world (Kílta is somewhat optimized to make it easy for me to editorialize about things).