Friday, November 19, 2010

Great expectations

It's been a bit since I wrote anything truly original on this here site; I fear things have been rather hectic in Patrickland.

I've had a few straight weeks of mind-boggling busyness, but every one of them has been good, almost to a day. Seriously: things have been chugging along furiously but swimmingly, and I've been enjoying all of my classes immensely.

As you can tell from the two most recent posts (this one and this one), Newton v. Leibniz came off without a hitch. Both sections of my Calc I class prepared well and performed well. Though the debate wasn't as heated as I've seen it in the past, both sides were ready and the back-and-forth was steady and confident. I was particularly impressed with the woman playing John Collins in the second section, and the woman playing Henry Oldenburg in the first. The lead attorneys all did splendidly, with well-prepared arguments and clear questions. Objections were kept to a minimum, and everything was civil (for the most part...there was one somewhat heated exchange in the second section). I enjoyed it a lot. Judging from a preliminary reading ("skimming" would be more apt) of the students' reflection papers dealing with the project, they got a lot out of it, too.

Since then (that was a week ago this past Tuesday), my primary occupation has been getting ready for the Conference on Constrained Poetry. Everything's in order now, with all of our speakers in town (aside from my colleague who's driving over from the Sylva-Cullowhee area tomorrow morning), almost 80 conference-goers pre-registered, recently run articles on the event in the campus literary mag and the local free weekly, and a well-known journal (Critiphoria) ready to print attendees' creations. Not bad for a first-time event. I'm quite sanguine about our chances of repeating next year.

Mimicking our guest Michael Leong's construction of occasional constrained poems in honor of our conference (check out his chapbook, The Hoax of Contagion...also, my thanks to Michael for linking to a handy on-line n + 7 generator), I applied my finite-state automatatistic (what a wonderful word!) constraint to the list of last names of students in my Linear class, obtaining the following automaton:

[Image: the finite-state automaton built from the class roster]

The word list I obtained from this constraint was short enough to offer a challenge and long enough to give me a rich source of words to describe the joy I've had in working with the students in this course, one of my favorites since coming to this school five and a half years ago.
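
For anyone curious about the mechanics, here's a minimal sketch in Python of how such a construction might go. I'm glossing over the details of my actual constraint; the sketch assumes the simplest version, in which every pair of adjacent letters in the source names becomes a permitted transition, and a word passes exactly when all of its adjacent letter pairs are permitted. The names in the example are made up, not my actual roster.

# A minimal sketch of the letter-adjacency construction, assuming the
# simplest version of the constraint: a word is admitted exactly when
# every pair of adjacent letters in it appears as an adjacent pair in
# at least one of the source names.

def build_automaton(names):
    """Collect the permitted letter-to-letter transitions."""
    edges = set()
    for name in names:
        letters = name.lower()
        for a, b in zip(letters, letters[1:]):
            edges.add((a, b))
    return edges

def accepts(edges, word):
    """A word satisfies the constraint if all of its adjacent
    letter pairs are edges of the automaton."""
    word = word.lower()
    return all(pair in edges for pair in zip(word, word[1:]))

# Made-up names for illustration (not the actual class roster):
edges = build_automaton(["Newton", "Leibniz", "Oldenburg"])
print([w for w in ["one", "led", "wed", "ton"] if accepts(edges, w)])
# -> ['one', 'ton']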

I'll leave you with the resulting poem, but I'll be sure to check in again soon to let you know how things go tomorrow (because I know you give a damn).

We are one

Here we were married.
We were wedded,
welded
one to one
on a vessel on a charted curve
etched in every memory
in marked meter.

We were battered hard
at intervals,
in imagined rain,
in grey tones.

We are free, we veterans,
sores patched, keels buried,
meet, reserved, nerves set.

We are free, we veterans.
We are one.

1 comment:

eddeaddad said...

Hi Doc!

I'm not a mathematician by trade, but I think if you add probabilities to the graph edges, you have a character bigram language model. Similarly, the "Loose ends" FSA from June 29 '09 looks like a class-based character bigram language model, where each vertex is a class of characters. So you could say each FSA is a bigram model whose transition probabilities are all equal.
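
To make that concrete, here's a toy sketch (my own throwaway code, not any particular package) that turns a word list into a character bigram model, with probabilities on the edges, and samples from it. The training words are just a few pulled from the poem above.

import random
from collections import defaultdict

def train_bigram(words):
    """Count character-to-character transitions, then normalize
    the counts on each edge into probabilities."""
    counts = defaultdict(lambda: defaultdict(int))
    for w in words:
        w = "^" + w.lower() + "$"          # ^ marks start, $ marks end
        for a, b in zip(w, w[1:]):
            counts[a][b] += 1
    return {a: {b: n / sum(nexts.values()) for b, n in nexts.items()}
            for a, nexts in counts.items()}

def sample(model, max_len=12):
    """Walk the weighted edges from the start marker until the
    end marker (or a length cap) is hit."""
    out, state = [], "^"
    while len(out) < max_len:
        chars, probs = zip(*model[state].items())
        state = random.choices(chars, weights=probs)[0]
        if state == "$":
            break
        out.append(state)
    return "".join(out)

model = train_bigram(["married", "wedded", "welded", "vessel"])
print(sample(model))   # a short pseudo-word -- incoherent, as advertised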

Character-based n-gram generators (google: janusnode or travesty generator byte or dissociated-press to try one!) tend to produce amusing but frequently incoherent text, compared to word-based n-gram generators (google: eGnoetry or janusnode) which are themselves frequently incoherent semantically.

I've compared class-based and word-based models built on Shakespeare's sonnets (google: epogees) and subjectively find class-based models to be less semantically coherent than word-based models. I have to admit I've never tried generating from class-based character n-grams, but it sounds like they'd suffer the problems of both character-based n-grams and class-based n-grams.

Anyways, the fun part is searching through the set of outputs consistent with the FSA/n-gram language model and a given poetic form. Since most poetic forms are finite, the number of outputs is also finite. However, developing an evaluation measure to automatically identify the best output is very challenging.
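
And since the output set is finite: for a toy "form" like a fixed word length, you can brute-force the whole set outright. A sketch (any real poetic form would want a smarter search than this exhaustive one):

from itertools import product

# For a fixed length n, the set of strings an edge set accepts is
# finite, so it can be enumerated outright. 'edges' is a set of
# permitted (letter, letter) transitions, as in a letter-adjacency FSA.

def accepted_strings(edges, n):
    letters = sorted({a for a, _ in edges} | {b for _, b in edges})
    for candidate in product(letters, repeat=n):
        if all(pair in edges for pair in zip(candidate, candidate[1:])):
            yield "".join(candidate)

edges = {("o", "n"), ("n", "e"), ("t", "o")}   # tiny made-up edge set
print(list(accepted_strings(edges, 3)))        # -> ['one', 'ton']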

But I babble. It looks like you're after language forms that are mathematically interesting yet not necessarily meaningful? I look forward to seeing where you take it!

-e.