
Representation Learning of Logic Words by an RNN: From Word Sequences to Robot Actions

Overview of attention for article published in Frontiers in Neurorobotics, December 2017

About this Attention Score

  • In the top 25% of all research outputs scored by Altmetric
  • Among the highest-scoring outputs from this source (#31 of 879)
  • High Attention Score compared to outputs of the same age (88th percentile)
  • High Attention Score compared to outputs of the same age and source (92nd percentile)

Mentioned by

news
1 news outlet
blogs
1 blog
twitter
1 X user

Citations

dimensions_citation
9 Dimensions

Readers on

mendeley
26 Mendeley
Title
Representation Learning of Logic Words by an RNN: From Word Sequences to Robot Actions
Published in
Frontiers in Neurorobotics, December 2017
DOI 10.3389/fnbot.2017.00070
Authors

Tatsuro Yamada, Shingo Murata, Hiroaki Arie, Tetsuya Ogata

Abstract

An important characteristic of human language is compositionality: we can efficiently express a wide variety of real-world situations, events, and behaviors by compositionally constructing the meaning of a complex expression from a finite number of elements. Previous studies have analyzed how machine-learning models, particularly neural networks, can learn from experience to represent compositional relationships between language and robot actions, with the aim of understanding the symbol-grounding structure and achieving intelligent communicative agents. Such studies have mainly dealt with words (nouns, adjectives, and verbs) that directly refer to real-world matters. In addition to these words, the current study simultaneously deals with logic words such as "not," "and," and "or." These words do not refer directly to the real world; rather, they are logical operators that contribute to the construction of meaning in sentences, and they are likely to be used often in human-robot communication. The current study builds a recurrent neural network model with long short-term memory units and trains it to translate sentences including logic words into robot actions. We investigate what kind of compositional representations, mediating between sentences and robot actions, emerge as the network's internal states through the learning process. Analysis after learning shows that referential words are merged with visual information and the robot's own current state, while the logic words are represented by the model in accordance with their functions as logical operators. Words such as "true," "false," and "not" work as non-linear transformations that encode orthogonal phrases into the same area of the memory cell state space. The word "and," which required the robot to lift both its hands, worked as if it were a universal quantifier. The word "or," which required apparently random action generation, was represented as an unstable region of the network's dynamical system.
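The compositional behavior described in the abstract can be illustrated with a small symbolic interpreter. To be clear, this is not the authors' model, which learns these mappings end-to-end from data; the toy sentence grammar, the `interpret` function, and the action names are all hypothetical, chosen only to show how "not," "and," and "or" compose to select robot actions.

```python
# Hypothetical symbolic interpreter sketching the compositional
# semantics the RNN is trained to capture. NOT the paper's model:
# grammar, function names, and action labels are assumptions.
import random

ACTIONS = {"left": "raise_left_hand", "right": "raise_right_hand"}

def interpret(sentence, rng=random):
    """Map a toy command such as 'left', 'not left', 'left and right',
    or 'left or right' to a set of robot actions."""
    tokens = sentence.split()
    if "and" in tokens:
        # 'and' behaves like a universal quantifier: both hands go up.
        return {ACTIONS["left"], ACTIONS["right"]}
    if "or" in tokens:
        # 'or' yields an apparently random choice between alternatives.
        return {ACTIONS[rng.choice(["left", "right"])]}
    target = next(t for t in tokens if t in ACTIONS)
    if "not" in tokens:
        # 'not' flips the referent: the other hand is raised instead.
        target = "right" if target == "left" else "left"
    return {ACTIONS[target]}

print(interpret("left"))            # {'raise_left_hand'}
print(interpret("not left"))        # {'raise_right_hand'}
print(interpret("left and right"))  # both actions
```

Where this lookup table hard-codes each operator, the paper's finding is that an LSTM discovers analogous structure in its own state space: "not" as a non-linear transformation, "and" as a quantifier-like merge, and "or" as an unstable region of the dynamics.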

X Demographics

The data shown below were collected from the profile of the 1 X user who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for the 26 Mendeley readers of this research output.

Geographical breakdown

Country Count As %
Unknown 26 100%

Demographic breakdown

Readers by professional status Count As %
Student > Ph. D. Student 5 19%
Student > Bachelor 5 19%
Student > Doctoral Student 3 12%
Researcher 2 8%
Student > Master 2 8%
Other 2 8%
Unknown 7 27%
Readers by discipline Count As %
Computer Science 9 35%
Engineering 5 19%
Arts and Humanities 1 4%
Business, Management and Accounting 1 4%
Unspecified 1 4%
Other 2 8%
Unknown 7 27%
Attention Score in Context

This research output has an Altmetric Attention Score of 15. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 12 May 2021.
All research outputs
#2,121,597
of 23,012,811 outputs
Outputs from Frontiers in Neurorobotics
#31
of 879 outputs
Outputs of similar age
#51,040
of 440,933 outputs
Outputs of similar age from Frontiers in Neurorobotics
#1
of 13 outputs
Altmetric has tracked 23,012,811 research outputs across all sources so far. Compared to these, this one has done particularly well and is in the 90th percentile: it's in the top 10% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 879 research outputs from this source. They receive a mean Attention Score of 4.1. This one has done particularly well, scoring higher than 96% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 440,933 tracked outputs that were published within six weeks on either side of this one in any source. This one has done well, scoring higher than 88% of its contemporaries.
We're also able to compare this research output to 13 others from the same source and published within six weeks on either side of this one. This one has done particularly well, scoring higher than 92% of its contemporaries.