Sunday, April 6, 2014

Afrihili Days of the Week

In anticipation of last week's release of my Fiat Lingua paper Afrihili: an African Interlanguage, I took to Twitter to do a few Word of the Day posts. Because this is the sort of silliness that amuses me, each Word of the Day was the word for that day. Here they are in a tidy list:

  • Kurialu Sunday
  • Lamisalu Monday
  • Talalu Tuesday
  • Wakashalu Wednesday
  • Yawalu Thursday
  • Sohalu Friday
  • Jumalu Saturday

I wasn't able to find the source languages for these words, each of which ends in -alu 'day'.

For good measure, here are the months:

  • Kazi January
  • Rume February
  • Nyawɛ March
  • Forisu April
  • Hanibali May
  • Vealɛ June
  • Yulyo July
  • Shaba August
  • Tolo September
  • Dunasu October
  • Bubuo November
  • Mbanjɛ December

Again, the source languages aren't always clear, though July clearly comes from a European language. I must admit I didn't devote too much time to tracking these down, though. Some might be immediately obvious to some of my readers.

There aren't enough examples of time phrases to be sure of everything. The notion of "by (a month)" combines two adpositions: ɛn Shaba fo 'by August'.

Friday, February 21, 2014

Níí'aahta Tép Toulta - "Lord Smoke and the Merchant"

I have worked up a full interlinear for one of the shorter stories with Lord Smoke, a sort of trickster figure. I don't go into every subtlety of expression, but most should be clear.

Níí'aahta Tép Toulta (PDF), and a recording (MP3) of me reciting the tale.

Friday, November 22, 2013

What about dying languages?

There are various ways a person can respond to the discovery that I create languages for fun. The most common is noncommittal and polite puzzlement. A few people will be enthusiastic about the idea, especially if they're fans of the recent big films and TV shows involving invented languages in some way. Every once in a while, especially online, someone will object on the grounds that people involved with invented languages should, instead, be Doing Something about dying languages. This objection is so badly thought out that I'm genuinely surprised at its popularity.

First and foremost, anyone complaining about people messing around with invented languages has failed, in a fairly comprehensive way, to understand the concept of a hobby. Time I spend working with an invented language is not taken from documenting dying languages or some other improving activity; it is taken from time I would otherwise spend with my banjo, reading a novel, or watching TV.

Second, while it is true that I, along with most language creators, know more about linguistics than the average Man on the Street, documenting undocumented languages is a specialized skill requiring training I certainly don't have. In fact, most people with Ph.D.s in linguistics won't even have such training. Do people going on about dying languages really imagine just anyone can go out and do this sort of work? If someone has a nice garden near their house, we don't harass them about how they should be growing crops to feed the hungry, nor do we demand every weekend golfer go pro. What is it about invented languages that brings out this pious impulse to scold people for not doing something productive with their time, when so many other hobbies get no comment at all?

If we step back to more modest goals than documenting a dying language, we're in much the same boat. There is little point to me going out and learning, say, Kavalan (24 speakers left as of 2000) unless I go to Taiwan and spend most of my time among the people who speak it. Sitting at home in Wisconsin learning Kavalan does nothing to preserve it in any meaningful way. You just can't really learn a language from a book. You have to spend time with native speakers.

Using other people's cultures — or fantasies about their culture — as a rhetorical foil has a long history. When Europeans were less approving of sex, some complained that Muslims were libertines, while others used this as an example of a more sensible cultural trait. This is all part of the usual Noble Savage industry. The death of so many languages is a real issue, representing the permanent loss of a wealth of cultural and environmental knowledge. It deserves to be treated with more respect than to be used merely as a rhetorical club to browbeat people who have a hobby you don't like.

Friday, October 25, 2013

Arbitrary Sort Orders in Python (including digraphs!)

Unicode: everyone wants it, until they get it.
Barry Warsaw

I know I'm due to do another post about LaTeX, but that'll have to wait for next week.

I've recently discovered two nice tools for my iPad, Editorial and Pythonista, which let me do some programming as well as sophisticated editing and text processing. So, I've been working on some code related to conlanging.

I know some people hate them, but I'm a big fan of word generators, for three reasons. First, they help me avoid overusing certain sounds, something I'm normally prone to. Second, they help me verify that the rules I've given for my syllable shapes actually describe what I want. Finally, while I might have phonaesthetic concerns about some vocabulary, I don't want to agonize over the word for "toe" or "napkin" most of the time, so I like having a random pool of words to grab from. I still might change a word, or decide a random selection is not right for it, so it's not like I'm giving up aesthetic control of my language.

In any case, while it is a bit odd to write new software on a tablet, last night in about an hour I created a good tool for generating random new word shapes based on rules. But one serious problem came up — the sorted list was sorted terribly! For a person using a computer intended for English speakers, "á" is sorted after "z", which is not what I want at all. So I spent some time trying to come up with a way to sort arbitrarily.

In addition to the sort order of "á", I wanted to be able to correctly sort digraphs. In some languages' dictionaries and phone books, "ng" sorts after every entry beginning with a plain "n".

It turns out there is a terrifying Perl library to accomplish this, Sort::ArbBiLex. As far as I can see, no such library exists for Python, so I had to write my own.

The code could probably be more efficient, but it works for my purposes, and turned out to be fairly simple. I rely on two bits of trickery. First, Python lets you sort ordered collections like lists and tuples, which makes it easy to follow the "decorate-sort-undecorate" pattern for sorting complex items. Second, I use a bit of a regular expression hack: if you split a string on a pattern wrapped in a capturing group, the captured separators are kept in the result, interleaved with the text between the matches.

>>> import re
>>> m = re.compile(r'(ch|t|p|k|a|i|o)')
>>> m.split("tapachi")
['', 't', '', 'a', '', 'p', '', 'a', '', 'ch', '', 'i', '']
>>> m.split("tapachi")[1::2]
['t', 'a', 'p', 'a', 'ch', 'i']

Basically, I split on every single character in the language, which gives me a lot of empty strings, but they're easily filtered out. Notice how it recognizes "ch" as a separate letter of the language.
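The decorate-sort-undecorate pattern is easier to see in isolation. Here is a tiny sketch of my own (not part of the sorter itself), sorting words by length:

```python
# Decorate: pair each word with a sortable key (here, its length).
words = ["pear", "fig", "apple"]
decorated = [(len(w), w) for w in words]
# Sort: Python compares the (key, word) tuples element by element.
decorated.sort()
# Undecorate: strip the keys back off.
result = [w for (_, w) in decorated]
print(result)  # ['fig', 'pear', 'apple']
```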

So, the central algorithm of this little bit of code is: convert each unicode string to a sequence of "letters" (however defined in your language), convert those letters into numerical codes, sort the resulting lists of codes, turn the sorted codes back into words, and return the complete result.

import re

class ArbSorter:
    def __init__(self, order):
        elts = re.split(r'\s+', order, flags=re.UNICODE)
        # Create a regex to split on each character or multicharacter
        # sort key.  (As in "ch" after all "c"s, for example.)
        # Longest keys go first, so digraphs win over their first letter.
        # Gosh, this is not especially efficient, but it works.
        split_order = sorted(elts, key=len, reverse=True)
        self.splitter = re.compile(u"(%s)" % "|".join(split_order), re.UNICODE)
        # Next, collect weights for the ordering, and keep the letters
        # so that words can be rebuilt from their values later.
        self.ords = {}
        self.vals = []
        for i in range(len(elts)):
            self.ords[elts[i]] = i
            self.vals.append(elts[i])

    # Turns a word into a list of ints representing the new
    # lexicographic ordering.  Python, helpfully, allows one to
    # sort ordered collections of all types, including lists.
    def word_as_values(self, word):
        w = self.splitter.split(word)[1::2]
        return [self.ords[char] for char in w]

    def values_as_word(self, values):
        return "".join([self.vals[v] for v in values])

    def __call__(self, l):
        # Sort the lists of values, then convert back into words.
        l2 = sorted(self.word_as_values(item) for item in l)
        return [self.values_as_word(item) for item in l2]

if __name__ == '__main__':
    mysorter = ArbSorter(u"a á c ch e h i k l m n ng o p r s t u")
    m = u"chica ciha no áru ngo na nga sangal ahi ná mochi moco"
    s = mysorter(m.split())
    print " ".join(s).encode('utf-8')

(A more attractive presentation.)

Just run the code and it prints out "ahi áru ciha chica moco mochi na ná no nga ngo sangal", exactly what you want. Much better than the "ahi chica ciha mochi moco na nga ngo no ná sangal áru" you'll get on a computer localized for an English speaker.

It is vital that you tell Python you're working with unicode text here, so make sure to include this comment near the top of your file: # -*- coding: utf-8 -*-.
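As an aside, in current Python 3 strings are Unicode by default, so neither the coding comment nor the u'' prefixes are needed, and the whole idea fits in a key function for sorted(). This is my own sketch of that approach, not code from the original post:

```python
import re

def make_sort_key(order):
    """Build a sorted() key for a custom alphabet; digraphs allowed."""
    letters = order.split()
    ranks = {letter: i for i, letter in enumerate(letters)}
    # Longest letters first, so "ch" is matched before a bare "c".
    pattern = re.compile("(%s)" % "|".join(sorted(letters, key=len, reverse=True)))
    # A word becomes a list of ranks, which Python sorts lexicographically.
    return lambda word: [ranks[c] for c in pattern.split(word)[1::2]]

key = make_sort_key("a á c ch e h i k l m n ng o p r s t u")
words = "chica ciha no áru ngo na nga sangal ahi ná mochi moco".split()
result = " ".join(sorted(words, key=key))
print(result)  # ahi áru ciha chica moco mochi na ná no nga ngo sangal
```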

Wednesday, August 14, 2013

Conlanging with LaTeX, Part Three

In this post I want to talk about the thing that makes LaTeX so immensely powerful: it is programmable.

It is the great tragedy of modern computing that the industry has, for the most part, systematically trained people to be terrified of their computers. Things are changing all the time, usually in baffling ways, and those little changes all too often completely break other things we rely on. One consequence of this, though other issues compound the problem, is that most people have very powerful universal computing machines at their disposal but never write even a small program to solve a problem they might have.

This is not the place to teach computer programming, but I can introduce you to some very basic programming within LaTeX, to give you the power to radically alter the appearance of your conlanging documentation with just a few simple changes. Fortunately, most easy things are easy, so we'll start with those.

Text Appearance

Before we get to the programming, we'll start with the simple commands LaTeX uses to change basic font appearance. For example, from time to time we might want text to appear in italics or bold. In modern LaTeX, you just wrap the text you want to change in simple commands, \textit for italics and \textbf for bold. For example, \textbf{lorem ipsum dolor sit amet} will typeset that bit of gibberish in bold.

In addition to the bold and italics, there are a few other basic font changes you can use. Many linguisticky formulae use small capitals, for which you can use \textsc. Note, though, that many fonts do not have a true small caps option. If you want to use proper ones, you'll need to pick your font carefully. You can use \textsf to get a sans serif family, and \texttt for a "typewriter" family, with fixed character widths. In my own documentation, I find I mostly use italics and bold, with an occasional use of small caps, if I happen to be using a font that supports it. Unfortunately, Gentium, my favorite font, does not. Here are some fonts I know have small caps, apart from LaTeX's default Computer Modern (which I personally don't care for):

Introduction to LaTeX, part 2 has a nice long list of various text-tweaking options in LaTeX.

Your Style

In my conlang documentation, I like to use bold font for the conlang and italics for the English translations. So you might think that I have \textbf and \textit all over my documentation. I don't. Instead, I write macros which declare my intent ("this is the conlang," "this is the translation"). That way, if I were to one day change my mind, I only have to update a single macro instead of going through the entire text changing all the \textbfs to something else.

Fortunately, in LaTeX it is trivial to write my own versions of things like \textbf, and I do so freely. My personal convention is to put (English) translations into a \E macro and the example language in \LL. This is how they are defined —

\newcommand{\LL}[1]{\textbf{#1}}
\newcommand{\E}[1]{\textit{#1}}

So, what does all this mean? First, \newcommand does what you'd expect — it creates a new command. The next part, in curly braces, lets you name this new command of yours. Note that LaTeX is case sensitive, so \E and \e would be different commands. Also, note that if you accidentally try to use a name that is already defined somewhere in LaTeX, it will barf out and complain about the redefinition. This is why my "in the language" macro is \LL — there's already a \L in LaTeX (it gives a barred-L for languages like Polish).

The part in the square brackets says how many arguments the macro has. That is, how many different sets of curly braces there will be with the command. Finally is the body of the macro, which is what you want the macro to do. Within the body you can use #1 to refer to the first argument, #2 to the second, etc. So, my \LL macro has a single argument, which is wrapped up in the \textbf command.

On the surface, this looks sort of dumb. I have just written my own command to do something which LaTeX can already do. But, I've replaced a font styling command with a semantic command, for my personal cognitive benefit. \LL everywhere means "this is in the conlang" not just "this is in bold face." This gives me two advantages. First, I can go through the document looking just for examples of the conlang. Second, if I decided later I hate bold for the conlang, I can simply change the macro and let LaTeX do the rest.
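For instance (my own illustration, not from the original post), switching every conlang example from bold to slanted type would be a one-line change:

```latex
% One edit restyles every \LL in the document:
\renewcommand{\LL}[1]{\textsl{#1}}
```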

You can also just put plain text within a new macro. For example, my dictionary style has this:

\newcommand{\Seealso}[1]{See also \LL{#1}.}

Let's look at a command with more than one argument. This is a simplified version of my "Lexicon EXAMPLE" macro.

\newcommand{\lexample}[2]{\LL{#1} \E{#2}}

An example of the use of this is, \lexample{tempus fugit}{time flies}. It will just print the Latin phrase in bold, a space, then the English translation in italics. Note very carefully — the normal text parsing rules of LaTeX apply within a macro definition, so you need to take care about extra spaces or line ends. You can get weird effects, and I'll talk about ways to tame that in a later post.
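One standard LaTeX idiom for taming those stray spaces, offered here as a preview rather than as the post's own later treatment: end lines inside a definition with %, which comments out the line break so it cannot become a space in the output.

```latex
% The trailing % swallows the newline at the end of each line.
\newcommand{\lexample}[2]{%
  \LL{#1} \E{#2}%
}
```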

For one last example, sometimes I make small notes to myself within the body of a document I'm working on. Because I want it to stand out, but not take up too much room, I format that note in a smaller font, but I use a different color.

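The macro definition itself is missing from this copy of the post. A plausible reconstruction, with the macro name (\todo) and the color chosen by me as stand-ins:

```latex
% Hypothetical sketch of a small, colored note-to-self macro.
\newcommand{\todo}[1]{{\small\textcolor{blue}{#1}}}
```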

If you're not using XeTeX, you'll probably need to \usepackage{color} to get this to work.


A few weeks ago on the conlang-l mailing list someone mentioned that there's a nice LaTeX package to typeset vowel triangles in the way we're used to from a nice IPA chart. Ignoring other package and LaTeX setup details, you just need this:


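The snippet is missing from this copy of the post. The package described is most likely vowel, from the tipa bundle; here is a rough sketch of its use, with the commands and cardinal-vowel positions given from memory and so best treated as assumptions:

```latex
\usepackage{vowel}

% Later, in the document body:
\begin{vowel}
  \putcvowel[l]{i}{1}  % cardinal vowel 1
  \putcvowel[r]{u}{8}  % cardinal vowel 8
  \putcvowel[l]{a}{4}  % cardinal vowel 4
\end{vowel}
```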
Which produces this:

If you're using TeXLive, you'll already have the package installed. The package documentation is very clear.

Next Time

The next post will be all about tables, because if there's anything conlangers love, it's paradigm charts.

Monday, July 1, 2013

Conlanging with LaTeX, Interlude One

One of the perennial problems in writing any document dealing with multiple languages is choosing a font that can handle everything. Add a little linguistics, and things get very messy. Since I wrote the first part in this series, I have discovered a new font designed for this sort of linguistic work, Brill. It's been in development for a while, but I hadn't checked on it in more than a year, while waiting for the bold weight. Now it's ready.

It has several character sets (Latin, Greek, Cyrillic, IPA), special glyphs for some humanist work, and, best of all, has true bold, italics and small caps which harmonize nicely with the rest of the text.

It's free for non-commercial use — which describes most of us conlangers — so give it a try. I've been working on a personal language document with this font, and it really is very nice. I'm not 100% fond of the italics, but I'll put up with that for true small caps and a well-integrated polytonic Greek.

Sunday, May 12, 2013

Conlanging with LaTeX, Part Two

In the previous post I suggested a tutorial you might use to get a basic command of LaTeX. I'm going to assume everyone reading this has played around a little with it.

Before you can produce any document in LaTeX, you need to tell it a little about what you intend. The very simplest trussing for this will look a lot like this:

\documentclass{article}

\begin{document}
Your text goes here.
\end{document}

The space between the \documentclass and \begin{document} lines is called the preamble, and this is where you can put all sorts of other declarations to change how LaTeX works, either by changing its default behavior or by adding new functionality. For this post, I'm going to mention a few things that are useful for conlangers to have in their preambles. Specifically, I'm going to focus on what LaTeX calls packages. Fortunately, if you do a web search on most LaTeX packages you can get good documentation on how to use them effectively.

The first thing you should know is that the font size can be changed in the \documentclass line. I usually like a 12pt font, but you can also ask for 10 or 11 points. As always, you need to use other packages to get more font size options.

\documentclass[12pt]{article}

By default, LaTeX has rather large margins. I have no need for so much whitespace, so I use the fullpage package to pull out the margins to something less wasteful of paper:

\usepackage{fullpage}

And that's all you need to say. Simply by using the package, the changes you want take effect.

The next big thing is a package to manage fonts. In the old days, dealing with fonts in LaTeX was truly a nightmare — strange font names, freaky encodings, fonts themselves in a special LaTeX format, fights between different packages and font expectations, etc. These days, the XeTeX version of LaTeX has much simpler font management capabilities, though you still have to do a little work.

For XeTeX to find a TrueType or OpenType font, it needs to be installed in the usual places your OS would put the font, since it relies on local mechanisms to find them.

There is a utility package that helps manage all this, fontspec:

\usepackage{fontspec}
\defaultfontfeatures{Mapping=tex-text}
\setromanfont{Gentium Basic}
\newfontfamily\gplus{Gentium Plus}


So, what I'm doing here is loading up the package, then immediately running some commands provided by that package to set some font defaults. The \defaultfontfeatures line tells XeTeX I want to use the normal, old-fashioned LaTeX digraphs and trigraphs for certain kinds of characters. For example, it will convert three consecutive minus signs into an em-dash (—), in the usual LaTeX way. If you omit this line, many examples of LaTeX you might find on the web may break in subtle ways for you.

The next line, \setromanfont, picks the default font for the document. I like the Gentium family, since it has lots of accent support, as well as Ancient Greek, which I often find myself using.

The next line lets me create a font command. It turns out the Gentium Plus font has much better support for IPA characters, so when I want to type IPA, I can use the \gplus command. Note that you have to enclose commands created this way in curly braces to limit their effect. An example from my Kahtsaai grammar:

 \item Double \LL{ł}, \LL{łł}, is 
    pronounced [{\gplus ɮ}:]. 

So, the \newfontfamily command needs a command name, which you choose, and then a font name. Here, I picked the name \gplus (the leading backslash is required for all LaTeX command names).

The fontspec package is vast and powerful, allowing many interesting effects. You can look at the documentation to learn about more of its capabilities. I will just add that it is common for LaTeX documentation to have a large section at the end with the actual package code, with explanations. Most of the time, that is safely skipped.

I like to use different sorts of underlining in examples, for which the package ulem is very useful. Just use \usepackage{ulem}, and then you get some new LaTeX commands:

\uline{Just a normal underline.}
\uuline{A double underline.}
\uwave{A wavy underline.}

Some people will want to use the tipa package, which provides a funky encoding for IPA. I don't use it these days, since I don't always like the look of the output.

These are the most basic packages I use. There are a few more, but they are complex enough, or add such large new functionality, that I will save them for future posts.

Do experienced LaTeX-er conlangers have other basic packages to recommend, other than things like multicol, makeidx, or hyperref, which I hope to talk about more in the future?

In the next post, I will talk a bit about defining your own simple macros to ease some formatting tasks, and tables tables tables...