
Sparsey™: event recognition via deep hierarchical sparse distributed codes

Overview of attention for article published in Frontiers in Computational Neuroscience, December 2014

About this Attention Score

  • In the top 25% of all research outputs scored by Altmetric
  • High Attention Score compared to outputs of the same age (83rd percentile)
  • High Attention Score compared to outputs of the same age and source (80th percentile)

Mentioned by

5 X users
1 Wikipedia page
1 Google+ user

Citations

14 Dimensions

Readers on

55 Mendeley
Title
Sparsey™: event recognition via deep hierarchical sparse distributed codes
Published in
Frontiers in Computational Neuroscience, December 2014
DOI 10.3389/fncom.2014.00160
Authors

Gerard J. Rinkus

Abstract

The visual cortex's hierarchical, multi-level organization is captured in many biologically inspired computational vision models, the general idea being that progressively larger-scale (spatially/temporally) and more complex visual features are represented in progressively higher areas. However, most earlier models use localist representations (codes) in each representational field (which we equate with the cortical macrocolumn, "mac") at each level. In localism, each represented feature/concept/event (hereinafter "item") is coded by a single unit. The model we describe, Sparsey, is also hierarchical but, crucially, it uses sparse distributed coding (SDC) in every mac at all levels. In SDC, each represented item is coded by a small subset of the mac's units. The SDCs of different items can overlap, and the size of the overlap between items can be used to represent their similarity. The difference between localism and SDC is crucial because SDC allows the two essential operations of associative memory, storing a new item and retrieving the best-matching stored item, to be done in fixed time for the life of the model. Since the model's core algorithm, which does both storage and retrieval (inference), makes a single pass over all macs on each time step, the overall model's storage/retrieval operation is also fixed-time, a criterion we consider essential for scalability to huge ("Big Data") problems. A 2010 paper described a nonhierarchical version of this model in the context of purely spatial pattern processing. Here, we elaborate a fully hierarchical model (arbitrary numbers of levels and macs per level), describing novel model principles such as progressive critical periods, dynamic modulation of principal cells' activation functions based on a mac-level familiarity measure, representation of multiple simultaneously active hypotheses, and a novel method of time-warp-invariant recognition, and we report results showing learning/recognition of spatiotemporal patterns.
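The overlap-as-similarity idea from the abstract can be sketched as a toy example (the code size and item codes below are invented for illustration, and the linear scan in `best_match` is illustrative only — Sparsey's actual algorithm retrieves in fixed time, without comparing the probe against every stored item):

```python
CODE_SIZE = 5  # active units per sparse distributed code (SDC)

def similarity(code_a, code_b):
    """Normalized overlap size represents similarity between items."""
    return len(code_a & code_b) / CODE_SIZE

# Hypothetical stored items: each coded by a small subset of a mac's units.
memory = {
    "cat": frozenset({3, 17, 42, 58, 91}),
    "dog": frozenset({3, 17, 42, 60, 77}),  # shares 3/5 units with "cat"
    "car": frozenset({5, 22, 48, 63, 80}),  # disjoint from "cat"
}

def best_match(probe):
    """Return the stored item whose SDC overlaps the probe most."""
    return max(memory, key=lambda name: similarity(memory[name], probe))

print(similarity(memory["cat"], memory["dog"]))   # → 0.6
print(best_match(frozenset({3, 17, 42, 58, 12}))) # → cat (4/5 overlap)
```

Note the contrast with localism: a localist code has zero overlap between any two items, so similarity cannot be read off the codes themselves.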

X Demographics

The data shown below were collected from the profiles of the 5 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for the 55 Mendeley readers of this research output.

Geographical breakdown

Country Count As %
Germany 1 2%
Australia 1 2%
United Kingdom 1 2%
Slovakia 1 2%
United States 1 2%
Unknown 50 91%

Demographic breakdown

Readers by professional status Count As %
Researcher 12 22%
Student > Ph.D. Student 10 18%
Student > Master 7 13%
Student > Bachelor 5 9%
Professor > Associate Professor 5 9%
Other 11 20%
Unknown 5 9%
Readers by discipline Count As %
Computer Science 20 36%
Neuroscience 7 13%
Engineering 4 7%
Biochemistry, Genetics and Molecular Biology 3 5%
Agricultural and Biological Sciences 2 4%
Other 10 18%
Unknown 9 16%
Attention Score in Context

This research output has an Altmetric Attention Score of 8. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 30 July 2022.
All research outputs
#4,324,565
of 23,653,937 outputs
Outputs from Frontiers in Computational Neuroscience
#192
of 1,379 outputs
Outputs of similar age
#58,946
of 357,988 outputs
Outputs of similar age from Frontiers in Computational Neuroscience
#6
of 25 outputs
Altmetric has tracked 23,653,937 research outputs across all sources so far. Compared to these, this one has done well and is in the 81st percentile: it's in the top 25% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 1,379 research outputs from this source. They typically receive a little more attention than average, with a mean Attention Score of 6.4. This one has done well, scoring higher than 85% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 357,988 tracked outputs that were published within six weeks on either side of this one in any source. This one has done well, scoring higher than 83% of its contemporaries.
We're also able to compare this research output to 25 others from the same source and published within six weeks on either side of this one. This one has done well, scoring higher than 80% of its contemporaries.