Computational Neural Network Model of Cholinergic Activity in the Hippocampus

Abstract

An increasing body of research indicates that high cholinergic states in the hippocampus improve encoding ability while not affecting retrieval. While previous computational models of cholinergic activity in the hippocampus exist (Hasselmo, 2006), they lack the breadth and capability of more recent hippocampal models that incorporate theta rhythms and error-driven learning, increasing model capacity and yielding more robust results (Ketz et al., 2013). Here, I instantiate a neural network model of cholinergic effects in the hippocampus, capable of executing pattern separation tasks under various levels of simulated cholinergic activity. Model testing revealed that high simulated cholinergic states improved performance on pattern separation tasks while low cholinergic states impaired it - results in line with previous literature. This work provides a foundation from which to address further questions about cholinergic effects on the hippocampus through a modernized simulation.

Introduction

The hippocampus is well known to serve a variety of functions in the human brain, notably the formation and retrieval of memories as well as spatial navigation (Rudy, 2018). The investigation of these critical functions has led to the creation of a number of computational neural network models aimed at elucidating the underlying processes that allow the hippocampus (HPC) to work the way that it does. These networks have achieved impressive performance on tasks that cortical models have struggled with (Ketz et al., 2013). But the hippocampus does not exist in a vacuum: it constantly receives neuromodulation from other brain regions. One such region is the basal forebrain (BF), whose release of acetylcholine (ACh) in the hippocampus is thought to affect learning and memory (Ballinger et al., 2016). As a result, there remains considerable room to improve upon previous HPC neural network models by incorporating the dynamic effects of cholinergic modulation, in order to answer important questions such as why we can remember past events in vivid detail yet sometimes fail to remember things that we know. Furthermore, an accurate representation of cholinergic activity in the HPC could shed light on how age-related decline in cholinergic modulation relates to the parallel decline observed in memory performance (Richter et al., 2014).

The current research aims to instantiate a computational neural network model capable of executing pattern separation tasks under various levels of simulated cholinergic activity. Pattern completion (PC) and pattern separation (PS), along with their differences and similarities, are central to this research. Pattern completion refers to the re-activation of a previously established representation from a partial cue, whereas pattern separation refers to breaking apart similar inputs into their own separate representations (Sahay et al., 2011). At a network level, PS entails decreasing the similarity between the representations of a given set of inputs, while PC requires increasing it. These seemingly opposing processes both need to occur within the hippocampus, meaning that there must be a way to activate or suppress one or the other - a task for which acetylcholine is thought to be a good candidate (Hasselmo, 2006). Below I review the known effects of acetylcholine on the hippocampus, followed by the present state of computational modelling of the HPC.

Acetylcholine and the Hippocampus

Acetylcholine plays a vital role in the hippocampus’ ability to perform pattern separation tasks well. Rogers and Kesner (2003) found that scopolamine - a known ACh blocker - injected directly into the CA3 region of rats performing a spatial memory task impaired encoding (during which pattern separation is needed) but had no effect on retrieval (during which pattern completion is needed). Conversely, injecting the rats’ CA3 with physostigmine - a drug that enhances the effects of ACh - had the opposite effect: no detriment to encoding, but impaired retrieval. Atri et al. (2004) showed a similar result in humans: participants injected with scopolamine showed impaired encoding but retrieval comparable to controls.

Furthermore, optogenetic studies have revealed that higher levels of acetylcholine in the hippocampus enhance HPC theta rhythms, while lower levels of acetylcholine make sharp wave ripples more pronounced (Vandecasteele et al., 2014). Theta rhythms are 4-10 Hz oscillations travelling through the HPC, responsible for a number of functions including acting as an internal clock, supporting place-cell encoding of spatial input, and heightening plasticity (Lubenov & Siapas, 2009). Theta rhythms have also been implicated in enhanced memory ability (Vertes, 2005). Sharp wave ripples are 150-200 Hz events commonly associated with supporting episodic memory consolidation (Buzsáki, 2015).

This heightened synaptic plasticity during theta rhythms goes hand-in-hand with research demonstrating that acetylcholine can facilitate long-term potentiation (LTP) at synapses (Mitsushima et al., 2013); LTP has long been accepted as a vital part of learning (Martinez & Derrick, 1996) and, by extension, encoding. Accordingly, blocking cholinergic receptors impedes learning (Atri et al., 2004) while increasing activation of cholinergic receptors can increase learning capability (Levin et al., 2006). One mechanism through which these effects can be functionally expressed is the suppression of hippocampal recurrent connections while feedforward connections are left unaffected (Hasselmo et al., 1995). This mechanism has also been successfully modelled (Hasselmo, 2006), with the model being leveraged to predict that high levels of acetylcholine incite higher levels of pattern separation while lower levels of acetylcholine incite higher levels of pattern completion.
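
Functionally, this suppression can be thought of as a cholinergic gain on the recurrent contribution to a unit's input. The sketch below is a minimal illustration of that idea for a single abstract unit with a 0-1 ACh level; it is not Hasselmo's model or the Leabra equations, and the suppression factor is a made-up parameter.

```go
package main

import "fmt"

// netInput combines feedforward and recurrent drive into a single unit input.
// ach is a simulated cholinergic level from 0 to 1; the recurrent term is
// scaled by (1 - suppression*ach) while feedforward drive is left untouched.
func netInput(feedforward, recurrent, ach, suppression float64) float64 {
	return feedforward + recurrent*(1.0-suppression*ach)
}

func main() {
	ff, rec := 0.6, 0.8
	for _, ach := range []float64{0.0, 0.5, 1.0} {
		fmt.Printf("ACh=%.1f  net input=%.2f\n", ach, netInput(ff, rec, ach, 0.7))
	}
}
```

With higher simulated ACh, the same feedforward drive arrives intact while the recurrent drive shrinks, which is the qualitative behaviour described above.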

It is important to note, however, that among other factors, the amount and location of acetylcholine delivery, along with its temporal proximity to particular neural events, can significantly alter the hippocampal and behavioural response (Decker & Duncan, 2020), so reducing cholinergic function to the inhibition of recurrent pathways will not fully capture the effect that ACh has on the hippocampus.

Computational Neural Network Modelling and the Hippocampus

Contemporary neural networks originated in the 1970s and were popularized in the 1980s under the name “Parallel Distributed Processing” (McClelland et al., 1986). These early implementations of connectionist theory aimed to describe mental processes using nets of connected nodes (or neurons) that pass input through hidden layer(s) of nodes to produce an output. The productive power of these artificial neural networks (ANNs) began to be leveraged in computer science for solving problems that conventional rule-based programs struggled with, such as natural language processing, as well as in neuroscience, where ANNs’ explanatory capability was used for modelling. Over time, computational neural networks have become increasingly valuable in neuroscience research as technological progress and theoretical breakthroughs have allowed for the creation of more complex and efficient models (Rubinov, 2015). These networks are able to shed light on the underlying biological processes responsible for cognitive functions such as memory and learning (Schapiro et al., 2017), and having access to robust and accurate computational models of brain regions can help explain behaviour and further the understanding of how the human brain works.

One of the first hippocampus-specific models that did not require any external tuning before training to achieve successful results was created by Michael Hasselmo (Hasselmo & Schnell, 1994), demonstrating that brain models did not require expressly set parameters, unlike those previously created (Amit, 1992). This and subsequent models would leverage the increasingly well-known phenomenon of Hebbian learning as a foundation for how synaptic strength is regulated (Kelso et al., 1986; Hasselmo et al., 2020).

Following this, the Complementary Learning Systems (CLS) theory posited that the ability to remember unique events and the ability to generalize across similar events are two processes fundamentally at odds with each other (McClelland et al., 1995). The CLS theory placed the hippocampus as the brain region responsible for quickly picking apart recent experience into individual, separate memories. The cortex is then slowly taught these different memories, which remain stored in the cortex thereafter. The model proposed by McClelland et al. (1995) consisted of a hippocampus (with layers for EC input, EC output, the dentate gyrus, CA3, and CA1) and a layer for an associated cortex. But while the CLS theory accounted for learning occurring over time-spans of hours and days, it did not accommodate the idea that people can learn to group similar experiences very quickly (Schapiro & Turk-Browne, 2015).

The more recent Ketz et al. (2013) model sought to address the limits of Hebbian learning by introducing error-driven learning through modelling of the theta phase, allowing the hippocampus to switch rapidly between encoding and retrieval. This, along with additional factors such as bidirectional connectivity between the CA1 and EC, allowed the model to surpass standard Hebbian learning in both storage capacity and performance (Ketz et al., 2013). This was achieved by dynamically changing the strength of the hippocampal mono-synaptic (MSP) and tri-synaptic (TSP) pathways in time with the theta rhythm; in the Ketz model, the MSP was composed of bidirectional EC-CA1 connections while the TSP consisted of the unidirectional EC to DG to CA3 to CA1 path. As a result, this model demonstrated rapid learning ability, and later work by Schapiro et al. (2017) addressed the shortcoming of the CLS theory mentioned above by establishing that the hippocampus itself could simultaneously handle the opposing processes of remembering individual episodes and generalizing in a rapid learning scenario.
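
To make the theta-phase idea concrete, the sketch below shows one way the strengths of the two pathways could trade off over a cycle, with a simple anti-phase gain on the MSP and TSP. This is an illustration only, not the actual Ketz et al. (2013) implementation; the specific gain function is an assumption.

```go
package main

import (
	"fmt"
	"math"
)

// pathwayStrengths returns hypothetical relative gains for the MSP and TSP at a
// given theta phase (in radians); the two pathways trade off in anti-phase, so
// the network alternates between retrieval-like and encoding-like modes.
func pathwayStrengths(phase float64) (msp, tsp float64) {
	msp = 0.5 + 0.5*math.Cos(phase)
	tsp = 1.0 - msp
	return
}

func main() {
	// Sample four points across one theta cycle.
	for i := 0; i < 4; i++ {
		phase := float64(i) * math.Pi / 2
		msp, tsp := pathwayStrengths(phase)
		fmt.Printf("phase %.2f rad: MSP=%.2f TSP=%.2f\n", phase, msp, tsp)
	}
}
```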

There also exist models of specifically cholinergic function in the hippocampus, leveraging what is known of the biological function of acetylcholine in the HPC to further explain the HPC’s ability to both separate and generalize. One such model was built by Hasselmo (2006), but it used a smaller-scale representation of the HPC and did not have a theta-rhythm component to rapidly interleave encoding and retrieval.

The purpose of this research is to instantiate a neural network model with error-driven learning and theta-rhythm simulation, along with a way of modulating cholinergic activity in the hippocampus, in order to better understand how acetylcholine release from the basal forebrain affects human performance on pattern separation tasks. The neural network model was based on the existing Ketz model of the HPC along with the physiological finding that ACh inhibits recurrent connections but does not affect feedforward ones. Once the model was built and successfully trained on a constructed pattern separation dataset, its underlying structure and implementation were modified to mirror high, low, and baseline cholinergic activity in the hippocampus. The following question was addressed: what can the model tell us about how the hippocampus handles pattern separation tasks at various cholinergic states?

Materials and Methods

Neural Network Model

In light of the current state of neural network modelling, Emergent v1.0.6 and Leabra v1.0.4 were selected for their implementation of modern HPC models - specifically the theta-phase Ketz HPC model (Ketz et al., 2013) - along with their speed and ease of modification (Aisa et al., 2008). The latest version of Emergent runs on Go, a statically typed, compiled programming language generally used for its efficiency and speed. Emergent is complemented by Leabra, a package providing the underlying computational basis for biological neural networks. These two Go packages work together to provide a platform and graphical interface allowing the user to build, modify, train, and run neural networks.

This project used the pre-built Ketz HPC model already implemented in Emergent as a baseline, modified in order to simulate high and low cholinergic states. Acetylcholine's ability to inhibit recurrent neuronal connections while leaving feedforward connections unaffected was used to show how ACh modulation influences pattern separation tasks in the HPC. Three discrete models were used for the three cholinergic states: a baseline, a high cholinergic state, and a low cholinergic state. The specific parameters used and altered for these models can be found in Table 1. All parameters not mentioned in Table 1 were kept identical to the baseline model.

Table 1. Parameters used to define the low, baseline, and high cholinergic state models. Here, Layer and Prjn specify whether the parameter belongs to a layer or a projection. WtScale.Rel refers to the relative scaling of the overall strength of input projections. Learn.Lrate refers to the rate at which the projection learns, which is then fed into an effective weight calculation. Inhib.Layer.Gi specifies the net inhibitory input to the layer. Inhib.ActAvg.Init specifies the average initial activation for the layer. More information, along with the underlying equations that these parameters feed into, can be found at https://github.com/emer/leabra.
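
As a rough sketch of how the three states were organized, the snippet below expresses per-state overrides as parameter-path-to-value maps using the Table 1 naming. The layer/projection names and all numeric values here are placeholders for illustration only; this is not the emer/leabra params API, and the real values are those in Table 1.

```go
package main

import "fmt"

// stateParams maps each cholinergic state to parameter-path -> value overrides
// applied on top of the baseline model; paths follow the Table 1 naming, and
// the values are placeholders rather than the values actually used.
var stateParams = map[string]map[string]float64{
	"LowACh": {
		"CA3.Inhib.Layer.Gi":   1.6, // placeholder: weaker CA3 inhibition
		"CA3ToCA3.WtScale.Rel": 2.5, // placeholder: stronger recurrent projection
	},
	"Baseline": {}, // unchanged from the pre-built Ketz model
	"HighACh": {
		"CA3.Inhib.Layer.Gi":   2.4, // placeholder: stronger CA3 inhibition
		"CA3ToCA3.WtScale.Rel": 1.0, // placeholder: suppressed recurrent projection
	},
}

func main() {
	for state, overrides := range stateParams {
		fmt.Println(state, overrides)
	}
}
```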

The model architecture can be seen in Figure 1. Each square in each of the layers constitutes a unit, which maintains a particular level of activity. This level of activity is always between 0 and 1, and the more yellow a unit is, the higher its activity at a given time.

Figure 1. A demonstration of the model during training. The model consists of six layers: two superficial and four deep. ‘Input’ and ‘ECout’ are the superficial layers, where the former maps onto ‘ECin’ to present the model with a particular input, and the latter is where the model provides the output to the given stimulus. The black arrows represent projections from one layer to another, with ‘CA3’ having special recurrent projections.

Pattern Separation Dataset

This model was trained on a dataset designed to test pattern separation ability. Figure 2 provides an example of one data unit, consisting of a training set and a testing set. A number of units were created for this experiment, each with a different level of similarity between the A1 set and the A2 set. Overall, units were created with similarity values of 0%, 20%, 40%, 60%, 80%, 90%, and 100%. The unit with 100% similarity was used as a sanity check for the model, as the pattern that the model was expected to learn was identical between A1 and A2 while the expected outputs were completely different; the model was expected to fail to learn this unit. The model was also expected to quickly learn the 0% similarity unit, as little pattern separation is needed to discretely characterize those inputs.
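
The sketch below shows one way such a unit could be generated, assuming binary patterns and flipping a fraction of elements to hit a target similarity. It is illustrative rather than the actual dataset-construction code, which also set aside the three output-indicating nodes described in Figure 2.

```go
package main

import (
	"fmt"
	"math/rand"
)

// makePair returns two binary patterns of length n whose element-wise overlap
// is roughly the requested similarity (0.0 to 1.0): A2 starts as a copy of A1
// and a fraction (1 - similarity) of its elements are flipped.
func makePair(n int, similarity float64, rng *rand.Rand) (a1, a2 []int) {
	a1 = make([]int, n)
	a2 = make([]int, n)
	for i := range a1 {
		a1[i] = rng.Intn(2)
		a2[i] = a1[i]
	}
	nFlip := int(float64(n) * (1.0 - similarity))
	for _, i := range rng.Perm(n)[:nFlip] {
		a2[i] = 1 - a2[i] // flip this element so the patterns diverge
	}
	return a1, a2
}

func main() {
	rng := rand.New(rand.NewSource(1))
	a1, a2 := makePair(20, 0.9, rng) // a 90%-similarity pair, as in Figure 2
	fmt.Println("A1:", a1)
	fmt.Println("A2:", a2)
}
```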

Figure 2. A data unit with 90% similarity between A1 and A2. Each unit has a training and a testing set. Each training and testing set consists of an input pattern, fed into the ‘Input’ layer of the model, and an expected output pattern, compared against the output of the ‘ECout’ layer. Set A1 is used as the baseline for set A2. The elements marked in orange in set A2 show the differences between the two sets. The pattern that the model is expected to learn consists of everything other than the top-right three nodes, which show either strong right activation (set A1) or strong left activation (set A2). During the testing phase, the model is presented with just the pattern it needed to learn and is tested on its ability to recall either strong right or strong left activation.

Training and Testing

The pattern separation task was performed as follows. The model was initialized in its baseline, high cholinergic, or low cholinergic state, as described above, and then stepped through a Run. Each Run consisted of up to 20 Epochs, and each Epoch consisted of a training event and a testing event for both the A1 and A2 units. The training event pushed the A1 training input set to the ‘Input’ layer and allowed the model to run for 100 cycles - arbitrary time units during which the model loops over the data a number of times. The same was then done with the A2 unit, interleaving the training of both. The testing event then pushed the A1 testing input set to the ‘Input’ layer and expected the correct output from ‘ECout’ after 100 cycles; the same was done for A2. Once the Epoch was complete, the A1 and A2 tests determined whether the model had succeeded in learning both units. If successful, the simulation ended; if the model had failed to learn A1, A2, or both, another Epoch began. This cycle of training and testing continued until the model either successfully learned the dataset or completed 20 Epochs without success, which constituted a failure.

After each Run, the number of Epochs taken to learn the dataset was recorded, with 20 recorded if the model failed. In total, six Runs were completed for every data unit in every cholinergic state. The number of Epochs until success was then averaged across all Runs for each cholinergic state.
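
The Run and Epoch bookkeeping above can be summarized in a few lines of Go. In the sketch below the network itself is abstracted behind a trainAndTest callback (a hypothetical stand-in; the real simulation drives the Emergent/Leabra model), but the stopping rule, the 20-Epoch cap, and the averaging over six Runs follow the procedure described.

```go
package main

import "fmt"

const (
	maxEpochs = 20
	numRuns   = 6
)

// runOnce steps a single Run: trainAndTest stands in for one Epoch of
// interleaved A1/A2 training plus testing, reporting whether both units were
// recalled correctly. The return value is the Epoch of success, or maxEpochs
// if the model never learned the dataset (a failure is recorded as 20).
func runOnce(trainAndTest func() bool) int {
	for epoch := 1; epoch <= maxEpochs; epoch++ {
		if trainAndTest() {
			return epoch
		}
	}
	return maxEpochs
}

func main() {
	// Stand-in for the real network: pretend the dataset is learned on Epoch 7.
	epoch := 0
	fake := func() bool { epoch++; return epoch >= 7 }

	total := 0
	for r := 0; r < numRuns; r++ {
		epoch = 0
		total += runOnce(fake)
	}
	fmt.Printf("mean Epochs to success: %.2f\n", float64(total)/numRuns)
}
```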

Results

The observed results followed the hypothesized pattern closely. Figure 3 shows that as the similarity of inputs increases, the model struggles more and more to complete the pattern separation task, eventually failing at the 100% similarity sanity check. We also saw that the low cholinergic state hindered the model’s performance on the task relative to the baseline state. Conversely, the high cholinergic state notably increased the model’s task performance at 80% and 90% similarity - the conditions that required the highest levels of pattern separation.

We further observed that when input similarity was at or below 60%, changes in cholinergic state did little to affect the model’s performance, with all three models performing very similarly.

Figure 3. Effect of cholinergic state on learning ability in the pattern separation task. The y-axis shows how many Epochs were required for the model to successfully complete the task; the fewer Epochs required, the better the task performance.

Discussion

This work provides a promising first step toward updating previous well-known cholinergic models of the hippocampus, namely Hasselmo (2006), which focused on altering levels of inhibition specifically in the CA3 recurrent connections and the CA3 layer. Building on that work, my model instantiation makes use of error-driven learning and theta-rhythm simulation to create a more modern and robust working cholinergic model.

Throughout the experimentation process, two different methods for inhibiting layers and recurrent connections were tested. Initially, the model was modified with a scaling algorithm, which globally elevated (or decreased) levels of inhibition in the deep layers proportionally to their baseline in order to create high and low cholinergic states. Though this version also demonstrated that high cholinergic states improved pattern separation task performance, I found that concentrating increased (or decreased) inhibition specifically in the CA3 layer and CA3 recurrent connections led to the best pattern separation performance for the high cholinergic state. This outcome lends further credence to the idea that cholinergic inhibition of recurrent pathways in the CA3 region drives its effect on pattern separation, an idea supported by research blocking cholinergic receptors and observing impeded learning (Atri et al., 2004). However, more recent rodent research has found acetylcholine inhibiting other hippocampal regions such as the dentate gyrus (Pabst et al., 2016), indicating that there remains room for testing more diverse inhibitory modulation within computational models.

Setting aside the locality of the cholinergic effect, this project would benefit from an analysis of activation overlap across layers during training between different data units, in order to confirm that the underlying processes that yielded the results above specifically involve pattern separation. This analysis is in progress and aims to demonstrate that the difference in activation in the hidden layers - particularly the dentate gyrus and CA3 - is greater than the difference between the inputs within one training unit. I expect to see the greatest dependence on ACh in the CA3 region, manifested as the greatest differences in overlap between the input layer and the CA3 layer across the simulated cholinergic states. Once this analysis is complete, this research can move on to the further questions discussed below.
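
A simple overlap measure is enough to run this check. The sketch below uses the proportion of units active in both patterns relative to those active in either, which is one reasonable choice among several and my own framing rather than an existing analysis script; pattern separation shows up as lower overlap in DG/CA3 than in the input layer.

```go
package main

import "fmt"

// overlap returns the proportion of units active (above threshold) in both
// patterns, relative to the number active in either pattern.
func overlap(a, b []float64, threshold float64) float64 {
	both, either := 0, 0
	for i := range a {
		aOn, bOn := a[i] > threshold, b[i] > threshold
		if aOn || bOn {
			either++
		}
		if aOn && bOn {
			both++
		}
	}
	if either == 0 {
		return 0
	}
	return float64(both) / float64(either)
}

func main() {
	// Toy activation patterns: the inputs overlap heavily, the CA3 responses
	// much less, which is the signature of pattern separation in a hidden layer.
	inputA1 := []float64{1, 1, 1, 0, 0, 1}
	inputA2 := []float64{1, 1, 1, 0, 0, 0}
	ca3A1 := []float64{1, 0, 0, 1, 0, 0}
	ca3A2 := []float64{0, 0, 1, 0, 1, 0}
	fmt.Printf("input overlap: %.2f  CA3 overlap: %.2f\n",
		overlap(inputA1, inputA2, 0.5), overlap(ca3A1, ca3A2, 0.5))
}
```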

Conclusions and Future Directions

First and foremost, the model will be validated against human behavioural pattern separation results, with the goal of providing granular insight into cholinergic activity within the human hippocampus based on a particular participant's PS task results. If the model can be tuned to perform similarly to a healthy control group of participants, it could be possible to identify people who may have cholinergic deficits from behaviour alone. Similarly, using participant fMRI data to inform the model of expected pattern separation task performance based on observed basal forebrain integrity would extend the use cases of this model even further.

Another extension of the current research would be to simulate cholinergic activity in a continuous manner, as opposed to relying on discrete states. This could involve modelling a basal forebrain region with a manipulable integrity parameter used to simulate the extent to which an individual can fluctuate cholinergic levels in their hippocampus. Using the fMRI data mentioned above to further refine the model could provide even more precise information on potential cholinergic deficits in participants who perform abnormally on pattern separation tasks.
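
As a speculative sketch of what a continuous version might look like, the snippet below interpolates a single CA3 inhibition parameter between hypothetical low- and high-ACh endpoints as a function of a continuous ACh level. None of these values come from the current model; a basal forebrain integrity parameter could, for example, cap how far the ACh level is able to rise.

```go
package main

import "fmt"

// ca3Gi linearly interpolates a CA3 inhibition level between placeholder
// low-ACh and high-ACh endpoint values according to a continuous ACh level in
// [0, 1]. The endpoints are illustrative, not fitted parameters.
func ca3Gi(ach float64) float64 {
	lowACh, highACh := 1.6, 2.4 // placeholder endpoint values
	return lowACh + ach*(highACh-lowACh)
}

func main() {
	for _, ach := range []float64{0.0, 0.25, 0.5, 0.75, 1.0} {
		fmt.Printf("ACh level %.2f -> CA3 Gi %.2f\n", ach, ca3Gi(ach))
	}
}
```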

Though the question of why we can remember past events in vivid detail yet sometimes fail to remember things that we know goes unanswered, the model instantiated here suggests neuromodulatory failure as one potential answer. The three models used here represent discrete cholinergic states in the hippocampus, but in reality neuromodulation is continuous, not discrete. During testing I observed that if inhibition in the CA3 region was too high or too low, the model would simply fail to learn the task. While another potential answer is that something about the input stimulus was off and resulted in a failure to remember, or that stochastic noise was to blame, it is also possible that slight errors in neuromodulation cause just enough of a disturbance to prevent a memory from resurfacing. Despite the many questions this research leaves open to future work, the current state of the project is encouraging, with data corroborating previous models and biological evidence, creating a strong foundation from which to address deeper and more intricate questions later on.


Author's Note: This research was conducted over the course of the summer term at the Duncan Memory Lab at the University of Toronto under Dr. Katherine Duncan as part of an undergraduate independent study and was not formally published. I hope you enjoyed the work. If you have any questions about anything you read or feel that something needs clarification, please feel free to reach me at alex@alexgordienko.com or @alexgordienko_.


References

Aisa, B., Mingus, B., & O’Reilly, R. (2008). The Emergent neural modeling system. Neural Networks, 21(8), 1146–1152. https://doi.org/10.1016/j.neunet.2008.06.016

Amit, D. J. (1992). Modeling brain function: The world of attractor neural networks. Cambridge university press.

Atri, A., Sherman, S., Norman, K. A., Kirchhoff, B. A., Nicolas, M. M., Greicius, M. D., … Stern, C. E. (2004). Blockade of central cholinergic receptors impairs new learning and increases proactive interference in a word paired-associate memory task. Behavioral Neuroscience, 118(1), 223–236. https://doi.org/10.1037/0735-7044.118.1.223

Ballinger, E. C., Ananth, M., Talmage, D. A., & Role, L. W. (2016). Basal Forebrain Cholinergic Circuits and Signaling in Cognition and Cognitive Decline. Neuron, 91(6), 1199–1218. https://doi.org/10.1016/j.neuron.2016.09.006

Barnes, J. M., & Underwood, B. J. (1959). Fate of first-list associations in transfer theory. Journal of Experimental Psychology, 58, 97–105. https://doi.org/10.1037/h0047507

Buzsáki, G. (2015). Hippocampal sharp wave-ripple: A cognitive biomarker for episodic memory and planning. Hippocampus, 25(10), 1073–1188. https://doi.org/10.1002/hipo.22488

Decker, A. L., & Duncan, K. (2020). Acetylcholine and the complex interdependence of memory and attention. Current Opinion in Behavioral Sciences, 32, 21–28. https://doi.org/10.1016/j.cobeha.2020.01.013

Hasselmo, M. E., Schnell, E., & Barkai, E. (1995). Dynamics of learning and recall at excitatory recurrent synapses and cholinergic modulation in rat hippocampal region CA3. The Journal of Neuroscience : The Official Journal of the Society for Neuroscience, 15(7 Pt 2), 5249–5262. https://doi.org/10.1523/JNEUROSCI.15-07-05249.1995

Hasselmo, M. E. (2006). The role of acetylcholine in learning and memory. Current Opinion in Neurobiology, 16(6), 710–715. https://doi.org/10.1016/j.conb.2006.09.002

Hasselmo, M. E., & Eichenbaum, H. (2005). Hippocampal mechanisms for the context-dependent retrieval of episodes. Neural Networks, 18(9), 1172–1190. https://doi.org/10.1016/j.neunet.2005.08.007

Hasselmo, M. E., & Schnell, E. (1994). Laminar selectivity of the cholinergic suppression of synaptic transmission in rat hippocampal region CA1: computational modeling and brain slice physiology. Journal of Neuroscience, 14(6), 3898–3914.

Hasselmo, M. E., Alexander, A. S., Dannenberg, H., & Newman, E. L. (2020). Overview of computational models of hippocampus and related structures: Introduction to the special issue. Hippocampus, 30(4), 295–301. https://doi.org/10.1002/hipo.23201

Hasselmo, M. E., & McClelland, J. L. (1999). Neural models of memory. Current Opinion in Neurobiology, 9(2), 184–188. https://doi.org/10.1016/S0959-4388(99)80025-7

Hummos, A., Franklin, C. C., & Nair, S. S. (2014). Intrinsic mechanisms stabilize encoding and retrieval circuits differentially in a hippocampal network model. Hippocampus, 24(12), 1430–1448. https://doi.org/10.1002/hipo.22324

Kelso, S. R., Ganong, A. H., & Brown, T. H. (1986). Hebbian synapses in hippocampus. Proceedings of the National Academy of Sciences, 83(14), 5326–5330.

Ketz, N., Morkonda, S. G., & O’Reilly, R. C. (2013). Theta Coordinated Error-Driven Learning in the Hippocampus. PLoS Computational Biology, 9(6). https://doi.org/10.1371/journal.pcbi.1003067

Levin, E. D., McClernon, F. J., & Rezvani, A. H. (2006). Nicotinic effects on cognitive function: behavioral characterization, pharmacological specification, and anatomic localization. Psychopharmacology, 184(3–4), 523–539. https://doi.org/10.1007/s00213-005-0164-7

Li, X., Yu, B., Sun, Q., Zhang, Y., Ren, M., Zhang, X., … Qiu, Z. (2018). Generation of a whole-brain atlas for the cholinergic system and mesoscopic projectome analysis of basal forebrain cholinergic neurons. Proceedings of the National Academy of Sciences, 115(2), 415–420. https://doi.org/10.1073/pnas.1703601115

Lubenov, E. V, & Siapas, A. G. (2009). Hippocampal theta oscillations are travelling waves. Nature, 459(7246), 534–539. https://doi.org/10.1038/nature08010

Martinez, J. L., & Derrick, B. E. (1996). Long-term potentiation and learning. Annual Review of Psychology, 47(1), 173–203. https://doi.org/10.1146/annurev.psych.47.1.173

McClelland, J. L., McNaughton, B. L., & O’Reilly, R. C. (1995). Why there are complementary learning systems in the hippocampus and neocortex: insights from the successes and failures of connectionist models of learning and memory. Psychological Review, 102(3), 419–457. https://doi.org/10.1037/0033-295X.102.3.419

McClelland, J. L., Rumelhart, D. E., & the PDP Research Group. (1986). Parallel distributed processing: Explorations in the microstructure of cognition (Vol. 2, pp. 216–271). MIT Press.

McCloskey, M., & Cohen, N. J. (1989). Catastrophic interference in connectionist networks: The sequential learning problem. In G. H. Bower (Ed.), Psychology of Learning and Motivation (Vol. 24, pp. 109–165). Academic Press. https://doi.org/10.1016/S0079-7421(08)60536-8

Mitsushima, D., Sano, A., & Takahashi, T. (2013). A cholinergic trigger drives learning-induced plasticity at hippocampal synapses. Nature Communications, 4. https://doi.org/10.1038/ncomms3760

O’Reilly, R. C. (2020). Emergent. Retrieved June 11, 2020, from https://github.com/emer/emergent

O’Reilly, R. C., Bhattacharyya, R., Howard, M. D., & Ketz, N. (2014). Complementary learning systems. Cognitive Science, 38(6), 1229–1248. https://doi.org/10.1111/j.1551-6709.2011.01214.x

O’Reilly, R. C., & McClelland, J. L. (1994). Hippocampal conjunctive encoding, storage, and recall: Avoiding a trade‐off. Hippocampus, 4(6), 661–682. https://doi.org/10.1002/hipo.450040605

Pabst, M., Braganza, O., Dannenberg, H., Hu, W., Pothmann, L., Rosen, J., … Beck, H. (2016). Astrocyte Intermediaries of Septal Cholinergic Modulation in the Hippocampus. Neuron, 90(4), 853–865. https://doi.org/10.1016/j.neuron.2016.04.003

Richter, N., Allendorf, I., Onur, O. A., Kracht, L., Dietlein, M., Tittgemeyer, M., … Kukolja, J. (2014). The integrity of the cholinergic system determines memory performance in healthy elderly. NeuroImage, 100, 481–488. https://doi.org/10.1016/j.neuroimage.2014.06.031

Rogers, J. L., & Kesner, R. P. (2004). Cholinergic Modulation of the Hippocampus during Encoding and Retrieval of Tone/Shock-Induced Fear Conditioning. Learning and Memory, 11(1), 102–107. https://doi.org/10.1101/lm.64604

Rolls, E. T., & Kesner, R. P. (2016). Pattern separation and pattern completion in the hippocampal system. Introduction to the Special Issue. Neurobiology of Learning and Memory, 129, 1–3. https://doi.org/10.1016/j.nlm.2016.02.001

Rubinov, M. (2015). Neural networks in the future of neuroscience research. Nature Reviews Neuroscience, 16(12), 767. https://doi.org/10.1038/nrn4042

Rudy, J. W. (2018). The Neurobiology of Learning and Memory (2nd ed.). Sinauer Associates, Inc., Publishers.

Sahay, A., Wilson, D. A., & Hen, R. (2011). Pattern separation: a common function for new neurons in hippocampus and olfactory bulb. Neuron, 70(4), 582–588. https://doi.org/10.1016/j.neuron.2011.05.012

Schapiro, A., & Turk-Browne, N. (2015). Statistical learning. Brain Mapping, 3, 501–506.

Schapiro, A. C., Turk-Browne, N. B., Botvinick, M. M., & Norman, K. A. (2017). Complementary learning systems within the hippocampus: A neural network modelling approach to reconciling episodic memory with statistical learning. Philosophical Transactions of the Royal Society B: Biological Sciences, 372(1711). https://doi.org/10.1098/rstb.2016.0049

Vandecasteele, M., Varga, V., Berényi, A., Papp, E., Barthó, P., Venance, L., … Buzsáki, G. (2014). Optogenetic activation of septal cholinergic neurons suppresses sharp wave ripples and enhances theta oscillations in the hippocampus. Proceedings of the National Academy of Sciences of the United States of America, 111(37), 13535–13540. https://doi.org/10.1073/pnas.1411233111

Vertes, R. P. (2005). Hippocampal theta rhythm: A tag for short‐term memory. Hippocampus, 15(7), 923–935.
