
Acquisition of Viewpoint Transformation and Action Mappings via Sequence to Sequence Imitative Learning by Deep Neural Networks

Overview of attention for article published in Frontiers in Neurorobotics, July 2018

About this Attention Score

  • In the top 25% of all research outputs scored by Altmetric
  • Good Attention Score compared to outputs of the same age (74th percentile)
  • High Attention Score compared to outputs of the same age and source (83rd percentile)

Mentioned by

  • 3 X users
  • 2 patents

Citations

  • 1 (Dimensions)

Readers

  • 16 (Mendeley)
Title
Acquisition of Viewpoint Transformation and Action Mappings via Sequence to Sequence Imitative Learning by Deep Neural Networks
Published in
Frontiers in Neurorobotics, July 2018
DOI 10.3389/fnbot.2018.00046
Authors

Ryoichi Nakajo, Shingo Murata, Hiroaki Arie, Tetsuya Ogata

Abstract

We propose an imitative learning model that allows a robot to acquire the positional relations between a demonstrator and itself, and to transform observed actions into its own actions. Providing robots with imitative capabilities allows us to teach them novel actions without resorting to trial-and-error approaches. Existing methods for imitative robot learning require mathematical formulations or conversion modules to translate positional relations between demonstrators and robots. The proposed model uses two neural networks: a convolutional autoencoder (CAE) and a multiple-timescale recurrent neural network (MTRNN). The CAE is trained to extract visual features from raw images captured by a camera, and the MTRNN is trained to integrate sensory-motor information and predict next states. We implemented this model on a robot and conducted sequence-to-sequence learning that allows the robot to transform demonstrator actions into robot actions. Through training of the proposed model, representations of actions, manipulated objects, and positional relations formed in the hierarchical structure of the MTRNN. After training, we confirmed the model's capability to generate unlearned imitative patterns.
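The hierarchical structure mentioned in the abstract comes from the MTRNN's context units updating at different speeds: fast units track moment-to-moment sensory-motor detail while slow units hold longer-range structure. A minimal sketch of that leaky-integrator dynamics, assuming illustrative unit counts, time constants, and random weights (not the paper's actual architecture or parameters):

```python
import numpy as np

# Sketch of multiple-timescale RNN (MTRNN) context dynamics: each unit is a
# leaky integrator whose time constant tau sets how quickly it responds.
# All sizes, weights, and time constants here are illustrative assumptions.

rng = np.random.default_rng(0)

n_in, n_fast, n_slow = 4, 8, 4                   # input / fast / slow unit counts
tau = np.concatenate([np.full(n_fast, 2.0),      # fast context: small tau
                      np.full(n_slow, 50.0)])    # slow context: large tau
n_ctx = n_fast + n_slow

W_rec = rng.normal(0.0, 0.05, (n_ctx, n_ctx))    # recurrent weights (random)
W_in = np.ones((n_ctx, n_in))                    # input weights (constant drive)

def mtrnn_step(u, x):
    """Leaky-integrator update: u_t = (1 - 1/tau) u_{t-1} + (1/tau)(W_rec tanh(u) + W_in x)."""
    return (1.0 - 1.0 / tau) * u + (1.0 / tau) * (W_rec @ np.tanh(u) + W_in @ x)

u = np.zeros(n_ctx)
for _ in range(20):                              # drive with a constant input
    u = mtrnn_step(u, np.ones(n_in))

fast_act = np.abs(u[:n_fast]).mean()             # fast units near their fixed point
slow_act = np.abs(u[n_fast:]).mean()             # slow units still integrating
print(fast_act > slow_act)                       # True: fast units respond first
```

After 20 steps the fast units have nearly settled while the slow units are still integrating the same input, which is the timescale separation the paper exploits to represent actions at one level and positional relations at another.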

X Demographics

The data shown below were collected from the profiles of the 3 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for the 16 Mendeley readers of this research output.

Geographical breakdown

Country    Count   As %
Unknown    16      100%

Demographic breakdown

Readers by professional status   Count   As %
Student > Bachelor               3       19%
Student > Ph.D. Student          3       19%
Student > Master                 2       13%
Other                            1       6%
Lecturer > Senior Lecturer       1       6%
Other                            3       19%
Unknown                          3       19%

Readers by discipline            Count   As %
Engineering                      3       19%
Computer Science                 3       19%
Arts and Humanities              1       6%
Mathematics                      1       6%
Decision Sciences                1       6%
Other                            1       6%
Unknown                          6       38%
Attention Score in Context

This research output has an Altmetric Attention Score of 7. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 20 July 2022.
All research outputs: #4,191,804 of 22,882,389 outputs
Outputs from Frontiers in Neurorobotics: #94 of 864 outputs
Outputs of similar age: #81,673 of 329,083 outputs
Outputs of similar age from Frontiers in Neurorobotics: #4 of 24 outputs
Altmetric has tracked 22,882,389 research outputs across all sources so far. Compared to these, this one has done well and is in the 80th percentile: it's in the top 25% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 864 research outputs from this source. They receive a mean Attention Score of 4.2. This one has done well, scoring higher than 89% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age, we can compare this Altmetric Attention Score to the 329,083 tracked outputs that were published within six weeks on either side of this one in any source. This one has received more attention than average, scoring higher than 74% of its contemporaries.
We're also able to compare this research output to 24 others from the same source published within six weeks on either side of this one. This one has done well, scoring higher than 83% of its contemporaries.