Wednesday, June 18, 2014

"Conlang" and the OED

So, conlang got an entry in the OED a few days ago. The word has been in use since the early 1990s, and in the post-Avatar, post-Game-of-Thrones world, it is unlikely to fade out of existence any time soon, so this is an obvious move on the part of the OED editorial team.

Compared to some other conlangers', my own reaction to this is fairly muted. I absolutely do not view this OED entry as any sort of vindication of the art. First, if I needed approval from others to pursue my hobbies, I wouldn't play the banjo, much less conlang (though I suppose my neighbors get a say if I decide to do something unusually loud). Second, there are all manner of very unpleasant behaviors also defined in the OED, which no one takes as a sign of editorial approval. The word's in the OED because it is being used now, has been for a few decades, and is likely to continue to be used for decades to come. The entry is a simple recognition of that fact.

I was, however, delighted to notice that one of the four citations was from Suzette Haden Elgin's book The Language Imperative. Few people are neutral on her major conlang, Láadan. I'm a big fan, while at the same time not believing it capable of accomplishing the goals it was designed to attain. I got a copy of the grammar before I had regular internet access, so it was the first conlang I ever saw that wasn't mostly a euro-clone.1 I learned a lot from Láadan, so I have a warm place in my heart for it. It's a shame Alzheimer's has probably robbed Elgin of the opportunity to know she was cited in the OED.


1 Klingon is not nearly as strange as it looks on the surface. Láadan introduced me to a range of syntactic and semantic possibilities I had not previously encountered: evidentiality, different embedding structures, inalienable possession, simpler tone systems, the possibilities of a smaller phonology.

Monday, June 16, 2014

The Ultimate Dictionary Database System

Is text. End of post.

Ok, it's not quite that simple. You probably want some sort of structured text, semantically marked up if possible. But at the end of the day, all you can really rely on is text.

Why Spreadsheets Suck

First, the format is proprietary and often inconsistent across even minor version changes. You will be in a world of hurt if you want to share your dictionary with anyone else.

Second — and this is the biggest problem by far, assuming you're trying to make a naturalistic conlang — a real dictionary for a real language does not look like this:

  • kətaŋ sleep
  • kətap book
  • kətəs hangnail on the left little finger which interferes with one's needlework
  • kəwa tree
  • kəwah noodle
  • kəwe computer
  • kəweŋ hard

A few words between two languages might overlap (nearly) perfectly, and the early history of a word in a conlang might start as a simple gloss, but for a real language a simple word-to-word matching is profoundly misleading, and in a conlang it signals a relex.

A real dictionary entry looks like this: δίδωμι. It has multiple meanings defined, examples of use, collocations, grammar and morphology notes, references, etc., etc.

The spreadsheet format forces you into a very limited structure for each word. That structure can never hope to cope reliably with all the different words of a single language, much less the variety of things conlangers come up with (to say nothing of natlang variety). A spreadsheet is too rigid a format to grow the meaning and uses of a word over the lifetime of your conlang.

Why Databases Suck

First, they share the format problems of spreadsheets. Technically, SQL is a standard. In reality, all but the most trivial databases tend to use the non-standard SQL conveniences offered by whichever database server the software's author happened to pick. So you may get something almost portable, but often not.

Second, and again like the spreadsheet problem, a truly universal dictionary tool, a piece of software that could handle everything from Indonesian to Ancient Greek to Navajo — or Toki Pona to Na'vi to High Valyrian to Ithkuil — is going to require a very complex database structure. The SIL "Toolbox" dictionary tool has more than 100 fields available (Making Dictionaries), and all those possibilities need to be in both the database design and the software that talks to the database.

I have, over the years, spent some time trying to design a database that could really be a good language dictionary. The schema for even a simple design was quite complex, and I would not have wanted to write the software to control it. There's this huge problem in that different languages vary wildly in their definitional needs. For Mandarin, for example, you need to cover all the usual purely semantic matters — polysemy, idiom, collocation, multiple definitions, examples, etc. — but there aren't too many morphological worries. But once you add morphological complexity you've got a whole new layer of issues. The Ancient Greek example I link to above is for a fairly irregular verb, with dialectal worries to boot. And for Navajo and related Athabaskan languages the situation is so dire that people write papers called things like Making Athabaskan Dictionaries Usable and Design Issues in Athabaskan Dictionaries (do look at those to get a feel for the issues).
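
To give a concrete taste of the problem, here is a sketch of the obvious three-table starting point — my own illustration, not any real tool's schema, and every table and column name is made up — using Python's built-in sqlite3:

import sqlite3

# A minimal, hypothetical dictionary schema: one word, many senses,
# each sense with its own examples.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE lexeme (
    id         INTEGER PRIMARY KEY,
    headword   TEXT NOT NULL,
    pos        TEXT,   -- part of speech
    morphology TEXT,   -- irregular stems, tone class, paradigm notes, ...
    etymology  TEXT
);
CREATE TABLE sense (
    id         INTEGER PRIMARY KEY,
    lexeme_id  INTEGER NOT NULL REFERENCES lexeme(id),
    gloss      TEXT,
    usage_note TEXT
);
CREATE TABLE example (
    id          INTEGER PRIMARY KEY,
    sense_id    INTEGER NOT NULL REFERENCES sense(id),
    phrase      TEXT,
    translation TEXT
);
""")

And that still has nothing for collocations, cross-references, dialect labels, or citations — each one means more tables, and more software to manage them.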

Any truly general dictionary database, one capable of handling enough sorts of languages to be genuinely useful, would have vast tracts of empty space to accommodate information not needed in many languages, with these fields of whitespace in different places for different languages. Even if you target your database and software design to something like Ancient Greek, there will be lots of fields left blank most of the time. It's not like all the verbs are irregular, though it may sometimes seem that way to beginners.

If you had a very good team of developers, you could probably overcome these problems, assuming users were willing to configure a complex tool so it exposes only the things their language needs. But it's never going to be a money-making venture. I don't expect to see such a tool in my lifetime.

Enter Stage Right: Text

So, we're back to simple text. The benefits:

  • the file is still readable if Microsoft/Apple/Whoever releases a New and Improved (tm) version of this or that proprietary bit of software, and a file you dig up from 10 years ago will open just fine
  • there are zillions of text editors, usually with built in search functions, which will work on the file
  • if part of the file is destroyed, the rest of the file will generally be recoverable (proprietary formats tend to be brittle when bitrot sets in)

Bare text, of course, is not very attractive. The way around this is to use a text-based markup of some sort. You could use HTML. Or even XML with a little more work. I strongly favor LaTeX, which requires more typing than I might like, but it gives me maximum flexibility to change my mind and spits out very attractive results. The point of this is that even though HTML and LaTeX are presentation formats, the underlying basis is still just plain text. If something goes horribly wrong, you'll have a modestly ugly text file to read, but all your hard work will still be recoverable.
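
To make this concrete, here is a minimal sketch of what a semantically marked-up entry might look like in LaTeX. The macro names and the second sense are invented for illustration, with kəwa borrowed from the spreadsheet example above:

% Hypothetical semantic macros -- only plain text underneath.
% (Compile with XeLaTeX and a font that has the ə.)
\newcommand{\headword}[2]{\noindent\textbf{#1} \textit{#2}\par}
\newcommand{\sense}[2]{\quad #1. #2\par}

\headword{kəwa}{n.}
\sense{1}{tree}
\sense{2}{(by extension) post, pole}

If the typesetting toolchain ever vanishes, that file is still perfectly legible text, and nothing of your hard work is lost.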

If you are disorganized, a computer will not help you. If you can impose a little order on yourself, though, a computer can make your life a lot easier. And a little thought can make even a plain old .txt file into the best dictionary tool you could ever want.

Sunday, April 6, 2014

Afrihili Days of the Week

In anticipation of last week's release of my Fiat Lingua paper Afrihili: an African Interlanguage, I took to Twitter to do a few Word of the Day posts. Because this is the sort of silliness that amuses me, each Word of the Day was the word for that day. Here they are in a tidy list:

  • Kurialu Sunday
  • Lamisalu Monday
  • Talalu Tuesday
  • Wakashalu Wednesday
  • Yawalu Thursday
  • Sohalu Friday
  • Jumalu Saturday

I wasn't able to find the source languages for these words, each of which ends in alu 'day'.

For good measure, here are the months:

  • Kazi January
  • Rume February
  • Nyawɛ March
  • Forisu April
  • Hanibali May
  • Vealɛ June
  • Yulyo July
  • Shaba August
  • Tolo September
  • Dunasu October
  • Bubuo November
  • Mbanjɛ December

Again, the source languages aren't always clear, though July clearly comes from some European language. I must admit I didn't devote too much time to tracking these down. Some might be immediately obvious to some of my readers.

There aren't enough examples of time phrases to be sure of everything. The notion of "by (a month)" combines two adpositions: ɛn Shaba fo 'by August'.

Friday, February 21, 2014

Níí'aahta Tép Toulta - "Lord Smoke and the Merchant"

I have worked up a full interlinear for one of the shorter stories with Lord Smoke, a sort of trickster figure. I don't go into every subtlety of expression, but most should be clear.

Níí'aahta Tép Toulta (PDF), and a recording (MP3) of me reciting the tale.

Friday, November 22, 2013

What about dying languages?

There are various ways a person can respond to the discovery that I create languages for fun. The most common is noncommittal and polite puzzlement. A few people will be enthusiastic about the idea, especially if they're fans of the recent big films and TV shows involving invented languages in some way. Every once in a while, especially online, someone will object on the grounds that people involved with invented languages should, instead, be Doing Something about dying languages. This objection is so badly thought out that I'm genuinely surprised at its popularity.

First and foremost, anyone complaining about people messing around with invented languages has failed, in a fairly comprehensive way, to understand the concept of a hobby. Time I spend working with an invented language is not taken from documenting dying languages or some other improving activity; it is taken from time I would otherwise spend with my banjo, reading a novel, or watching TV.

Second, while it is true that I, along with most language creators, know more about linguistics than the average Man on the Street, documenting undocumented languages is a specialized skill requiring training I certainly don't have. In fact, most people with Ph.D.s in linguistics won't even have such training. Do people going on about dying languages really imagine anyone can go out and do this sort of work? If someone has a nice garden near their house, we don't harass them about how they should be growing crops to feed the hungry, nor do we demand every weekend golfer go pro. What is it about invented languages that brings out this pious impulse to scold people for not doing something productive with their time when so many other hobbies get no comment at all?

If we step back to more modest goals than documenting a dying language, we're in much the same boat. There is little point to me going out and learning, say, Kavalan (24 speakers left as of 2000) unless I go to Taiwan and spend most of my time among the people who speak it. Sitting at home in Wisconsin learning Kavalan does nothing to preserve it in any meaningful way. You just can't really learn a language from a book. You have to spend time with native speakers.

Using other people's cultures — or fantasies about their cultures — as a rhetorical foil has a long history. When Europeans were less approving of sex, they complained that Muslims were libertines, while others used this as an example of a more sensible cultural trait. This is all part of the usual Noble Savage industry. The death of so many languages is a real issue, representing the permanent loss of a wealth of cultural and environmental knowledge. It deserves to be treated with more respect than to be used merely as a rhetorical club to browbeat people who have a hobby you don't like.

Friday, October 25, 2013

Arbitrary Sort Orders in Python (including digraphs!)

Unicode: everyone wants it, until they get it.
Barry Warsaw

I know I'm due to do another post about LaTeX, but that'll have to wait for next week.

I've recently discovered two nice tools for my iPad, Editorial and Pythonista, which let me do some programming as well as sophisticated editing and text processing. So, I've been working on some code related to conlanging.

I know some people hate them, but I'm a big fan of word generators, for three reasons. First, they help me avoid overusing certain sounds, something I'm normally prone to. Second, they help me verify that the rules I've given for my syllable shapes actually describe what I want. Finally, while I might have phonaesthetic concerns about some vocabulary, I don't want to agonize over the word for "toe" or "napkin" most of the time, so I like having a random pool of words to grab from. I still might change the word, or decide a random selection is not right for it, so it's not like I'm giving up aesthetic control of my language.

In any case, while it is a bit odd to write new software on a tablet, last night in about an hour I created a good tool for generating random new word shapes based on rules. But one serious problem came up — the output was sorted terribly! On a computer localized for English speakers, "á" is sorted after "z", which is not what I want at all. So I spent some time trying to come up with a way to sort by arbitrary rules.

In addition to the sort order of "á", I wanted to be able to sort digraphs correctly. In some languages' dictionaries and phone books, "ng" is treated as a letter in its own right, coming after everything in "n".

It turns out there is a terrifying Perl library to accomplish this, Sort::ArbBiLex. As far as I can see, no such library exists for Python, so I had to write my own.

The code could probably be more efficient, but it works for my purposes, and turned out to be fairly simple. I rely on two bits of trickery. First, Python lets you sort ordered collections like lists and tuples, which makes it easy to follow the "decorate-sort-undecorate" pattern for sorting complex items. Second, I use a bit of a regular expression hack: if you split on a regular expression wrapped in a capturing group, the matched separators are kept in the result, interleaved with the text between them.

>>> import re
>>> m = re.compile(r'(ch|t|p|k|a|i|o)')
>>> m.split("tapachi")
['', 't', '', 'a', '', 'p', '', 'a', '', 'ch', '', 'i', '']
>>> m.split("tapachi")[1::2]
['t', 'a', 'p', 'a', 'ch', 'i']
>>> 

Basically, I split on every single letter of the language, which gives me a lot of empty strings, but they're easily filtered out (that's the [1::2] slice above). Notice how it recognizes "ch" as a separate letter of the language.

So, the central algorithm of this little bit of code is: convert the unicode string to a sequence of "letters" (however your language defines them), convert those letters into numerical codes, sort the lists of codes, turn them back into words, and spit back the complete result.

import re

class ArbSorter:
    def __init__(self, order):
        elts = re.split(r'\s+', order, flags=re.UNICODE)
        # Create a regex to split on each character or multicharacter
        # sort key.  (As in "ch" after all "c"s, for example.)
        # Gosh, this is not especially efficient, but it works.
        split_order = sorted(elts, key=len, reverse=True)
        self.splitter = re.compile(u"(%s)" % "|".join(split_order), re.UNICODE)
        # Next, collect weights for the ordering.
        self.ords = {}
        self.vals = []
        for i in range(len(elts)):
            self.ords[elts[i]] = i
            self.vals.append(elts[i])

    # Turns a word into a list of ints representing the new
    # lexicographic ordering.  Python, helpfully, allows one to
    # sort ordered collections of all types, including lists.
    def word_as_values(self, word):
        w = self.splitter.split(word)[1::2]
        return [self.ords[char] for char in w]

    def values_as_word(self, values):
        return "".join([self.vals[v] for v in values])

    def __call__(self, l):
        l2 = [self.word_as_values(item) for item in l]
        l2.sort()
        return [self.values_as_word(item) for item in l2]

if __name__ == '__main__':
    mysorter = ArbSorter(u"a á c ch e h i k l m n ng o p r s t u")
    m = u"chica ciha no áru ngo na nga sangal ahi ná mochi moco"
    s = mysorter(m.split())
    print " ".join(s).encode('utf-8')

(A more attractive presentation.)

Just run the code and it prints out "ahi áru ciha chica moco mochi na ná no nga ngo sangal", exactly what you want. Much better than the "ahi chica ciha mochi moco na nga ngo no ná sangal áru" you'll get on a computer localized for an English speaker.

It is vital that you tell Python you're working with unicode text here, so make sure to include this in a comment near the top of your code: -*- coding: utf-8 -*-.
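
Concretely, the top of the file should look something like this — Python only honors the declaration if it appears on the first or second line:

#!/usr/bin/env python
# -*- coding: utf-8 -*-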

Wednesday, August 14, 2013

Conlanging with LaTeX, Part Three

In this post I want to talk about the thing that makes LaTeX so immensely powerful: it is programmable.

It is the great tragedy of modern computing that the industry has, for the most part, systematically trained people to be terrified of their computers. Things are changing all the time, usually in baffling ways, and those little changes all too often completely break other things we rely on. One consequence of this, though other issues compound the problem, is that most people have very powerful universal computing machines at their disposal but never write even a small program to solve a problem they might have.

This is not the place to teach computer programming, but I can introduce you to some very basic programming within LaTeX, to give you the power to radically alter the appearance of your conlanging documentation with just a few simple changes. It is this programmability of LaTeX that makes it such a powerful tool. Fortunately, most easy things are easy, so we'll start with that.

Text Appearance

Before we get to the programming, we'll start with the simple commands LaTeX uses to change basic font appearance. For example, from time to time we might want text to appear in italics or bold. In modern LaTeX, you just wrap the text you want to change in simple commands, \textit for italics and \textbf for bold. For example, \textbf{lorem ipsum dolor sit amet} will typeset that bit of gibberish in bold.

In addition to bold and italics, there are a few other basic font changes you can use. Many linguisticky formulae use small capitals, for which you can use \textsc. Note, though, that many fonts do not have a true small caps option. If you want to use proper ones, you'll need to pick your font carefully. You can use \textsf to get a sans serif family, and \texttt for a "typewriter" family, with fixed character widths. In my own documentation, I find I mostly use italics and bold, with an occasional use of small caps, if I happen to be using a font that supports it. Unfortunately, Gentium, my favorite font, does not. Here are some fonts I know have small caps, apart from LaTeX's default Computer Modern (which I personally don't care for):

[font samples]

This introduction to LaTeX has a nice long list of further text-tweaking options: Introduction to LaTeX, part 2.

Your Style

In my conlang documentation, I like to use bold font for the conlang and italics for the English translations. So you might think that I have \textbf and \textit all over my documentation. I don't. Instead, I write macros which declare my intent ("this is the conlang," "this is the translation"). That way, if I were to one day change my mind, I only have to update a single macro instead of going through the entire text changing all the \textbfs to something else.

Fortunately, in LaTeX it is trivial to write my own versions of things like \textbf, and I do so freely. My personal convention is to put (English) translations into a \E macro and the example language in \LL. This is how they are defined —

\newcommand{\LL}[1]{\textbf{#1}}
\newcommand{\E}[1]{\textit{#1}}

So, what does all this mean? First, \newcommand does what you'd expect — it creates a new command. The next part, in curly braces, lets you name this new command of yours. Note that LaTeX is case sensitive, so \E and \e would be different commands. Also note that if you accidentally try to use a name that is already defined somewhere in LaTeX, it will barf out and complain about the redefinition. This is why my "in the language" macro is \LL — there's already an \L in LaTeX (it gives a barred L for languages like Polish).

The part in the square brackets says how many arguments the macro takes — that is, how many sets of curly braces will follow the command. Finally comes the body of the macro, which is what you want the macro to do. Within the body you can use #1 to refer to the first argument, #2 to the second, etc. So, my \LL macro has a single argument, which is wrapped up in the \textbf command.

On the surface, this looks sort of dumb. I have just written my own command to do something LaTeX can already do. But I've replaced a font styling command with a semantic command, for my personal cognitive benefit. \LL everywhere means "this is in the conlang," not just "this is in bold face." This gives me two advantages. First, I can go through the document looking just for examples of the conlang. Second, if I decide later that I hate bold for the conlang, I can simply change the macro and let LaTeX do the rest.
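
For instance, if I later decide small caps suit the conlang better (in a font that has them!), restyling the whole document is a one-line change:

% \renewcommand replaces an existing definition in place.
\renewcommand{\LL}[1]{\textsc{#1}}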

You can also just put plain text within a new macro. For example, my dictionary style has this:

\newcommand{\Seealso}[1]{See also \LL{#1}.}

Let's look at a command with more than one argument. This is a simplified version of my "Lexicon EXAMPLE" macro.

\newcommand{\lexample}[2]{\LL{#1} \E{#2}}

An example of the use of this: \lexample{tempus fugit}{time flies}. It will just print the Latin phrase in bold, a space, then the English translation in italics. Note very carefully — the normal text parsing rules of LaTeX apply within a macro definition, so you need to take care with extra spaces and line ends. You can get weird effects, and I'll talk about ways to tame them in a later post.

For one last example, sometimes I make small notes to myself within the body of a document I'm working on. Because I want such a note to stand out without taking up too much room, I format it in a smaller font and a different color.

\newcommand{\note}[1]{\textcolor{magenta}{\small\textit{#1}}}

If you're not using XeTeX, you'll probably need to \usepackage{color} to get this to work.

Etc.

A few weeks ago on the conlang-l mailing list someone mentioned that there's a nice LaTeX package for typesetting vowel triangles the way we're used to seeing them in an IPA chart. Ignoring other package and LaTeX setup details, you just need this:

...
\usepackage{vowel}
...
\begin{vowel}
  \putcvowel{\LL{i}}{1}
  \putcvowel{\LL{u}}{8}
  \putcvowel{\LL{a}}{4}
  \putcvowel{\LLi{e}}{2}
  \putcvowel{\LLi{o}}{7}
\end{vowel}

Which produces this:

[vowel triangle diagram]

If you're using TeXLive, you'll already have the package installed. The package documentation is very clear.

Next Time

The next post will be all about tables, because if there's anything conlangers love, it's paradigm charts.