
Using Neural Networks to Generate Inferential Roles for Natural Language

Overview of attention for an article published in Frontiers in Psychology, January 2018

Mentioned by

3 X users

Citations

2 Dimensions

Readers on

19 Mendeley
Title
Using Neural Networks to Generate Inferential Roles for Natural Language
Published in
Frontiers in Psychology, January 2018
DOI 10.3389/fpsyg.2017.02335
Authors

Peter Blouw, Chris Eliasmith

Abstract

Neural networks have long been used to study linguistic phenomena spanning the domains of phonology, morphology, syntax, and semantics. Of these domains, semantics is somewhat unique in that there is little clarity concerning what a model needs to be able to do in order to provide an account of how the meanings of complex linguistic expressions, such as sentences, are understood. We argue that one thing such models need to be able to do is generate predictions about which further sentences are likely to follow from a given sentence; these define the sentence's "inferential role." We then show that it is possible to train a tree-structured neural network model to generate very simple examples of such inferential roles using the recently released Stanford Natural Language Inference (SNLI) dataset. On an empirical front, we evaluate the performance of this model by reporting entailment prediction accuracies on a set of test sentences not present in the training data. We also report the results of a simple study that compares human plausibility ratings for both human-generated and model-generated entailments for a random selection of sentences in this test set. On a more theoretical front, we argue in favor of a revision to some common assumptions about semantics: understanding a linguistic expression is not only a matter of mapping it onto a representation that somehow constitutes its meaning; rather, understanding a linguistic expression is mainly a matter of being able to draw certain inferences. Inference should accordingly be at the core of any model of semantic cognition.

X Demographics

The data shown below were collected from the profiles of 3 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 19 Mendeley readers of this research output.

Geographical breakdown

Country Count As %
Unknown 19 100%

Demographic breakdown

Readers by professional status Count As %
Student > Ph.D. Student 5 26%
Researcher 4 21%
Professor > Associate Professor 3 16%
Student > Master 3 16%
Other 2 11%
Other 1 5%
Unknown 1 5%
Readers by discipline Count As %
Engineering 4 21%
Neuroscience 3 16%
Agricultural and Biological Sciences 2 11%
Computer Science 2 11%
Psychology 2 11%
Other 3 16%
Unknown 3 16%
Attention Score in Context

This research output has an Altmetric Attention Score of 1. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 17 January 2018.
All research outputs
#15,486,175
of 23,012,811 outputs
Outputs from Frontiers in Psychology
#18,960
of 30,257 outputs
Outputs of similar age
#270,259
of 441,876 outputs
Outputs of similar age from Frontiers in Psychology
#402
of 546 outputs
Altmetric has tracked 23,012,811 research outputs across all sources so far. This one is in the 22nd percentile – i.e., 22% of other outputs scored the same or lower than it.
So far Altmetric has tracked 30,257 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 12.5. This one is in the 31st percentile – i.e., 31% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 441,876 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 29th percentile – i.e., 29% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 546 others from the same source and published within six weeks on either side of this one. This one is in the 17th percentile – i.e., 17% of its contemporaries scored the same or lower than it.
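The percentile figures above all follow the same "same or lower" convention: an output's percentile is the share of the comparison set whose score does not exceed its own. A minimal sketch of that calculation (illustrative only; the function name and sample scores are hypothetical, not Altmetric's actual code or data):

```python
def percentile_rank(score, all_scores):
    """Percentage of outputs in the comparison set whose Attention Score
    is the same as or lower than the given score (illustrative)."""
    same_or_lower = sum(1 for s in all_scores if s <= score)
    return 100.0 * same_or_lower / len(all_scores)

# Hypothetical Attention Scores for a small comparison set.
scores = [0, 0, 1, 1, 2, 5, 12, 30, 1, 0]
print(percentile_rank(1, scores))  # 60.0 — six of ten scores are <= 1
```

Because ties count toward the rank, many outputs sharing the common score of 1 can sit at the same percentile, which is why low-scoring outputs like this one still land well above the 0th percentile.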