Universal Grammar is incomplete

Perhaps the most famous account is Chomsky’s Universal Grammar hypothesis, which holds that humans are born with innate knowledge of many features of language (e.g., that languages distinguish subjects and objects). This would not only explain cross-linguistic universals but perhaps also how language learning gets off the ground in the first place. Over the years, Universal Grammar has become increasingly controversial for a number of reasons, one of which is its arbitrariness: it merely replaces the question of why we have the languages we have, and not others, with the question of why we have the Universal Grammar we have, and not another one.

As an alternative, a number of researchers have explored the possibility that some universals in language fall out of necessary design constraints. The basic idea is that certain logically possible languages never arise because they would simply be bad languages: for instance, ones that make it needlessly hard to recover who did what to whom.

Still, none of that explains why SOV would be the default; as usual, a new question has hitched a ride along with the answer to an old one. We also still need an explanation of why some SVO languages have case marking and some SOV languages do not (the authors sketch a few possibilities).
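To get a feel for the kind of trade-off at stake, here is a toy noisy-channel calculation (just an illustration of the general logic, not the model from the paper). Two equally likely meanings ("the dog chased the cat" vs. "the cat chased the dog") are expressed in SOV or SVO order, with or without case suffixes; each word is then lost in transmission with some probability, and we measure how much uncertainty about who did what to whom remains for the hearer. The vocabulary, the 0.2 deletion probability, and the grammar itself are all made up for the sake of the example.

```python
import itertools
from collections import defaultdict
from math import log2

# Two equally likely toy meanings: who chased whom, as (agent, patient).
MEANINGS = [("dog", "cat"), ("cat", "dog")]

def sentence(meaning, order, case_marking):
    """Surface word string for a meaning under a toy grammar (illustrative only)."""
    agent, patient = meaning
    if case_marking:
        agent, patient = agent + "-NOM", patient + "-ACC"
    verb = "chased"
    if order == "SOV":
        return (agent, patient, verb)
    if order == "SVO":
        return (agent, verb, patient)
    raise ValueError(order)

def joint_distribution(order, case_marking, p_del=0.2):
    """P(received string, meaning) when each word is independently lost with prob p_del."""
    joint = defaultdict(float)
    for meaning in MEANINGS:
        words = sentence(meaning, order, case_marking)
        for kept in itertools.product([True, False], repeat=len(words)):
            prob = 1.0
            for k in kept:
                prob *= (1 - p_del) if k else p_del
            surface = tuple(w for w, k in zip(words, kept) if k)
            joint[(surface, meaning)] += prob / len(MEANINGS)
    return joint

def conditional_entropy(joint):
    """H(meaning | received): how much 'who did what to whom' stays uncertain, in bits."""
    p_surface = defaultdict(float)
    for (surface, _), p in joint.items():
        p_surface[surface] += p
    return -sum(p * log2(p / p_surface[surface])
                for (surface, _), p in joint.items() if p > 0)

for order in ("SOV", "SVO"):
    for case_marking in (False, True):
        h = conditional_entropy(joint_distribution(order, case_marking))
        print(f"{order}, case marking={case_marking}: "
              f"H(meaning | received) = {h:.3f} bits")
```

In this toy setting, SOV without case marking leaves the most residual uncertainty (if one noun is lost, the survivor's position before the verb no longer tells you its role), and either adding case suffixes or switching to SVO removes most of it. That is the flavor of the word-order/case-marking correlation at issue, nothing more.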

Overall, though, this paper provides one of the clearest examples yet of how an important tendency in human language — a bias you would not expect to exist through mere random chance — can be explained by reference to universal principles of computation and information theory. This does not necessarily exclude Universal Grammar — perhaps Universal Grammar smartly implements good computational principles — but it does shed light on why human language — and by extension, human nature — is the way it is and not some other way.
