NECPhon 2014, New York University

8th Northeast Computational Phonology Meeting

The Northeast Computational Phonology Circle (NECPhon) is an informal yearly meeting of scholars interested in any aspect of computational phonology. Our goal is to provide a relaxed atmosphere for researchers to present work in progress on a variety of topics, including learnability, modeling, and computational resources. This is the eighth annual NECPhon, and the first time the workshop takes place at New York University. For information on previous meetings, see here.

Date

Saturday, November 15, 2014

Location

NYU Department of Linguistics, 10 Washington Place, room 104 (ground floor)

Registration

There is no registration fee, and all are welcome. If you plan to attend, please email the organizer, Frans Adriaans, so that we can plan for food and refreshments and keep you informed of any updates.

Schedule

11:00-12:00 Lunch (with bagels and coffee)
Session 1 (chair: Frans Adriaans)
12:00-12:30 Thomas Graf (Stony Brook University)
Dependencies in Syntax and Phonology: A Computational Comparison
12:30-1:00 Jane Chandlee (University of Delaware)
Using Output Strict Locality to Model and Learn Long-distance Processes
1:00-1:30 Coral Hughto, Robert Staubs, & Joe Pater (University of Massachusetts Amherst)
Typological consequences of agent interaction
1:30-2:00 Coffee break
Session 2 (chair: Jeffrey Heinz)
2:00-2:30 Rachael Richardson, Naomi Feldman, & William Idsardi (University of Maryland)
What defines a category? Evidence that listeners’ perception is governed by generalizations
2:30-3:00 Tal Linzen & Gillian Gallagher (NYU)
The time course of generalization in phonotactic learning
3:00-3:30 Hyun Jin Hwangbo (University of Delaware)
Preliminary results of artificial language learning of vowel harmony patterns with neutral vowels
3:30-4:00 Coffee break
Session 3 (chair: Naomi Feldman)
4:00-4:30 Adam Jardine (University of Delaware)
Representing and learning phonological tiers
4:30-5:00 Caitlin Richter, Naomi Feldman (University of Maryland), & Aren Jansen (Johns Hopkins University)
Representing speech in a model of phonetic perception
5:00-5:30 Juliet Stanton (Massachusetts Institute of Technology)
Rare forms and rare errors: deriving a learning bias in error-driven learning
5:30 Organizational meeting


Selected abstracts

Thomas Graf, Stony Brook University
Dependencies in Syntax and Phonology: A Computational Comparison

Today’s dominant theories of phonology and syntax (Optimality Theory and Minimalism, respectively) differ greatly in their theoretical primitives, e.g. the mode of operation (representational vs. derivational) and the locus of typological variation (constraint reranking vs. lexical features). One might take this to suggest that phonology and syntax have very little in common, a view also supported by results in formal language theory that place phonology within the realm of regular string languages and syntax in the much more powerful class of mildly context-sensitive string languages. I argue that this picture is misleading and that a close connection between the two emerges once one looks at the complexity of the automata needed to compute their respective dependencies. It turns out that syntax and phonology have comparable upper and lower bounds in this respect. The increased power of syntax stems from its use of a more expressive data structure: trees of unbounded depth.
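
As a minimal illustration of the regular-language claim above (using an invented constraint, not an example from the talk), a local phonological dependency can be checked by a finite-state acceptor that remembers only a bounded amount of preceding context. The Python sketch below bans the hypothetical substring "nb".

# Toy illustration (invented constraint): a local phonological dependency
# checked by a finite-state acceptor with bounded memory of the preceding
# context. The machine rejects any word containing the substring "nb".

def accepts(word: str) -> bool:
    """Return True if `word` contains no 'n' immediately followed by 'b'."""
    state = "other"                      # two states: did we just read an 'n'?
    for segment in word:
        if state == "after_n" and segment == "b":
            return False                 # banned local configuration found
        state = "after_n" if segment == "n" else "other"
    return True

print(accepts("nampa"))   # True
print(accepts("anba"))    # False

Because the acceptor tracks only the immediately preceding segment, the set of words it accepts is a regular (in fact strictly local) string language, which is the sense in which such phonological dependencies are computationally simpler than the tree-shaped dependencies of syntax discussed in the abstract.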

Juliet Stanton, MIT
Rare forms and rare errors: deriving a learning bias in error-driven learning

In this talk I argue that a subset of systems exhibiting the midpoint pathology (Kager 2012) is absent from the typology due to considerations of learnability. I identify two factors that make these systems difficult to learn: the rarity of forms that demonstrate certain crucial rankings, and the consequent rarity of errors that would lead the learner to the right hypothesis. In the absence of overt evidence, the learner never believes it is learning a midpoint system – even though this is a possible hypothesis, given the observed data. I show that this bias follows naturally from facts about error-driven learning as well as current hypotheses about the initial state.
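
The role of error-driven updating can be sketched schematically: the learner adjusts its grammar only when its current hypothesis makes an error on an observed form, so forms that rarely trigger errors contribute almost no learning signal. The Python sketch below uses a simplified Harmonic-Grammar-style perceptron update with invented constraints and data; it illustrates error-driven learning in general and is not the model from the talk.

# Schematic error-driven learner over weighted constraints (simplified
# Harmonic-Grammar-style update; constraint names and data are invented).

def harmony(violations, weights):
    """Higher harmony (less negative) = better candidate."""
    return -sum(weights[c] * v for c, v in violations.items())

def learn(data, weights, rate=0.1, passes=20):
    """data: list of (winner, loser) pairs, each a dict mapping
    constraint names to violation counts."""
    for _ in range(passes):
        for winner, loser in data:
            # Update ONLY on error, i.e. when the current grammar rates the
            # loser at least as high as the observed winner. Rare
            # error-triggering forms therefore mean rare updates.
            if harmony(loser, weights) >= harmony(winner, weights):
                for c in weights:
                    diff = loser.get(c, 0) - winner.get(c, 0)
                    weights[c] = max(0.0, weights[c] + rate * diff)
    return weights

# Toy usage with two invented constraints:
weights = {"*Complex": 0.0, "Max": 0.0}
data = [({"Max": 1}, {"*Complex": 1})]   # observed winner violates Max
print(learn(data, weights))              # "*Complex" ends up weighted above "Max"

Because updates fire only on errors, a ranking argument that surfaces in very few forms generates very few updates, which is the intuition behind the rarity-based learning bias described above.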