It is surprisingly difficult to establish widely accepted results on the relation between language and thought. Part of the reason is that the basic terms of the question cannot be defined neutrally: we cannot scientifically find the ‘true’ notion of language, thought, word, or concept (let alone object). What can be done instead is to formulate hypotheses that allow for empirical verification and that can be integrated with the results of neighbouring fields. This is the goal of this presentation: to take a specific perspective on language and mind, and to formulate a falsifiable hypothesis about a central aspect of the general question, namely what can be a noun in a natural language; or, more precisely, what can be ‘thought as a noun’. The perspective adopted is a mentalistic one, which assumes the reality of mental representations and takes linguistic knowledge to be not reducible to non-linguistic cognition. The hypothesis pursued is that nouns, like other lexical categories (‘parts of speech’), are not atomic symbols but constructions: knowledge of words is also syntactic, in an extended sense, and the way words are assembled as abstract representations constrains what they can express. Because the content of a noun is related to its morphological structure, this approach makes predictions about how noun content can correlate with noun form. For example, it predicts that mass plural nouns like waters cannot support abstract readings, as in the formula of waters, and that the structure of nouns like furniture correlates with a ‘discrete mass’ reading (mass, but denoting undescribed discrete individuals).
Importantly, the structure posited for nouns is a model for nouns as linguistic representations, not for concepts. The relation between word meaning and concepts is more indirect.
The main innovation I propose is that what makes a noun a noun, at the most basic level, is the function of naming an abstract sort, that is, a category of entities conceptualized as an abstract object, like dog, water, but also red wine (not strong wine) or Jim. Sorts are abstract entities we use as individuating principles that drive inductive learning, as we amass information about them and dynamically revise the properties expected of them. These are the abstract concepts lexicalized as nouns. Qua concepts, they display many properties that are not derivable from the grammatical objects that express them (typicality, variable and blurred boundaries, continuous interaction with background knowledge in context). But language constrains what type of entity we can think as a word (not what entity is thinkable). For example, a thing with two contradictory properties, or the concept of two random simultaneous events, is thinkable, but we predict that no natural language can contain a noun with such a meaning. So, things like ‘the number of planets’ or ‘round square’ can be thought but cannot be thought as nouns, and so can never be sorts in a mental ontology.
Since lexical semantics is not directly a model of conceptual knowledge, this approach is compatible with the insight that cognition is ‘grounded’. This is because there is a continuum from modality-specific to cross-modal abstract representations. Linguistic knowledge is part of the latter, but the representations it determines are like abstract forms, given substance and implemented by general cognition, which may be based on ‘simulations’ of experience.
Paolo Acquaviva, University College Dublin.