Dissociating Behavior and Spatial Working Memory Demands Using an H Maze

Abstract

The development of mazes for animal experiments has allowed for the investigation of cognitive maps and place cells, spatial working memory, naturalistic navigation, perseverance, exploration, and choice and motivated behavior. However, many mazes, such as the T maze, currently developed to test learning and memory, do not distinguish temporally and spatially between the encoding and recall periods, which makes it difficult to study these stages separately when analyzing animal behavior and electrophysiology. Other mazes, such as the radial maze, rely on single visits to portions of the maze, making maze coverage sparse for place cell and electrophysiology experiments. In this protocol, we present instructions for building and training an animal on a spatial appetitive choice task on a low-cost double-sided T (or H) maze. This maze has several advantages over the traditional T maze and radial mazes. This maze is unique in that it temporally and directionally dissociates the memory encoding and retrieval periods, while requiring the same behaviors of the animal during both periods. This design allows for independent investigation of brain mechanisms, such as cross-region theta coordination, during memory encoding and retrieval, while at least partially dissociating these stages from behavior. This maze has been previously used in our laboratory to investigate cell firing, single-region local field potential (LFP) patterns, and cross region LFP coherence in the hippocampus, lateral septum, prefrontal cortex, and ventral tegmental area, as well as to investigate the effects of hippocampal theta perturbations on task performance.

Keywords: Spatial navigation, T-maze, Working memory, Hippocampus, Animal behavior

Background

Mazes have been used in psychology, animal behavior, and neuroscience experiments for well over a century (Dudchenko, 2004; Fortin, 2008). In controlled laboratory settings, mazes have allowed for the investigation of cognitive maps and place cells, spatial working memory, naturalistic navigation, perseverance, exploration, and choice and motivated behavior (Tolman et al., 1946 and 1948; Roberts et al., 1962; Lash, 1964; Kimble and Kimble, 1965; O'Keefe and Nadel, 1978; Brito and Thomas, 1981; Wiener et al., 1989; Redish and Touretzky, 1998; Hollup et al., 2001; Huxter et al., 2001; Deacon and Rawlins, 2006; Ainge et al., 2007; Gupta et al., 2012; Tanila et al., 2018).


After early maze experiments established that animals have a proclivity to spontaneously alternate between two spatial choices (Carr, 1917), Tolman became one of the first researchers to use a T-maze to formally investigate alternation (Tolman, 1925). Although the ability to alternate depends on working memory of the previously made choice, the traditional T maze (or Y maze, which is sometimes used for more natural turning angles) is limited in that the encoding period, where the animal must form a memory of the current choice, and the recall period, in which the animal must remember the previous choice, overlap both temporally and spatially. Additionally, because spontaneous alternation is an innate behavior, animals require no learning period to display alternation, making the task not ideal for studying learning and behavioral acquisition. It also remains unclear what motivates animals to alternate.


Half a century later, Olton developed a radial arm maze to investigate choice behavior and spatial working memory beyond simple T maze alternation (Olton and Samuelson, 1976; Olton et al., 1979). In the radial maze, animals are expected to visit each arm of the maze once, with no repetitions or revisits. Although this maze requires learning, a trained animal normally visits an arm one time per session (Foreman and Ermakova, 1998), making maze coverage sparse for place cell and electrophysiology experiments. Working memory in the radial maze also incorporates all previous choices during a session, which means memory errors can compound and a single choice is dependent on all previous choices.


Our lab has developed a double-sided T (or H) maze which has several advantages over the traditional T maze and radial mazes. In this task, an animal is forced to an arm on one T of the maze, and then must choose the same side of the maze on the opposing T in order to be rewarded. While this maze task requires a learning period and thus can be used for studying spatial learning, a trained rat can run dozens of trials a day, each of which requires coverage of at least half of the track. The maze is therefore ideal for use in place cell and other experiments which require robust track coverage. Furthermore, each choice in the maze requires knowledge of only the immediately preceding navigation, so errors are non-compounding. Finally, this maze is unique in that it temporally and directionally dissociates the memory encoding and retrieval periods, while requiring the same behaviors of the animal during both periods (i.e., both periods require a run down a center stem and then a single turn, which is mirrored across encoding and retrieval). This design allows for independent investigation of brain mechanisms, such as cross-region theta coordination, during memory encoding and retrieval, while at least partially dissociating these stages from behavior.


In this protocol, we will discuss the steps involved in building a low-cost version of this maze for use in rats, as well as the steps necessary for training an animal to become proficient at the task. This maze has been previously used in our laboratory to investigate cell firing, single-region LFP patterns, and cross-region LFP coherence in the hippocampus, lateral septum, prefrontal cortex, and ventral tegmental area (Jones and Wilson, 2005a and 2005b; Gomperts et al., 2015; Wirtshafter and Wilson, 2019 and 2020), as well as to investigate the effects of hippocampal theta perturbations on task performance (Siegle and Wilson, 2014).


Materials and Reagents

  1. Animals

    Animals used were Long Evans rats. All animals tested were males weighing about 350-500 g, but females and animals of other weights could also be used. A scaled version of this maze has also been successfully used with mice (Siegle and Wilson, 2014).

  2. Sucrose

  3. Chocolate milk powder, such as Nesquik Chocolate Powder Drink Mix

  4. Reward liquid (see Recipes)

Equipment

For maze

  1. Maze building materials such as ProTRAK 25 2-1/2 in. × 10 ft. 25-Gauge EQ Galvanized Steel Track (Home Depot, model: 250PDT125-15). Wood or corrugated plastic also works.

    Note: If using a metal maze and doing electrophysiology, a grounding wire for the maze is also required.

  2. Black contact paper for waterproofing the maze, making it less conductive, and increasing contrast with the animal for video tracking. We used d-c-fix 26.57 in. × 78.72 in. black glossy shelf liner (Tmi-Pvc, model: FA3468348) secured with TMI Anti-Static Strip and Sheet Plastic (Tmi-Pvc, model: FCA 12060-400)

  3. Two small reward wells

    We used National Hardware ¾ in. 142 Cup Pulls in Antique Brass (Reference #: N335-661, from Amazon.com, USA)

  4. Wood or corrugated plastic to create a block to force the animal to one side at the forced portion of the maze (for an automated version, servo-controlled doors can be used [Gomperts et al., 2015]).

  5. Risers to elevate the maze; we used blue traffic cones.


For reward delivery

A remote reward delivery system is required; we used two BrandTech™ BRAND™ seripettor™ Bottletop Dispensers (Fisher Scientific, catalog number: 03-840-010) attached to two common reagent bottles. We ran tubing from the bottles to the reward sites in the maze, as described in the Procedure section. Previous procedures have also used 45 mg food pellets (Gomperts et al., 2015).


For animal tracking

  1. Overhead camera

    We used model BFLY-PGE-09S2C-CS (0.9 MP, 30 FPS, Sony ICX692, color), mounted with a Tripod Adapter for BFS (30 mm), BFLY, CMLN, CM3, FFMV, FL2, FL3, FMVU; both from https://www.flir.com/.

  2. Appropriate lens for camera depending on maze dimensions and mounting height

    We recommend something like the Computar H1214FICS-3 CS Mount Lens for the suggested dimensions, mounted 6 ft above the maze.

  3. Tracking software such as Bonsai (https://open-ephys.org/bonsai) or Oat (Newman et al., 2017)

Procedure

  1. Overview of task

    The maze is composed of a forced side, a choice side, and a middle stem. The forced side and choice side each have two arms and contain a forced point and a choice point, respectively (Figure 1). On the forced side of the maze, one arm (which arm varies by trial) is blocked off. A trial is initiated when the animal visits the forced point. In order to be rewarded, the animal must then visit the choice arm on the same side of the maze (Figure 2, Video 1). For example, if the animal is forced to the right forced arm, they must also visit the right choice arm to receive a reward. The correct choice must be made on the first try in order to be rewarded. The side the animal is forced to is randomly chosen, with the caveat that the animal can be forced to the same side of the maze for no more than three trials in a row (to prevent the animal from concluding that only one side of the maze is ever correct). A reward is delivered remotely by an experimenter, who is also responsible for moving the blockade that forces the animal to a forced arm. A new trial begins when the animal returns to the forced arm.

      Of note, if one seeks to compare neurophysiology associated with choice-associated reward to forced reward, the animal can also be rewarded at the forced side of the maze (Gomperts et al., 2015).



    Figure 1. Design of the double-sided T maze. Top left: Schematic showing the maze and different components, as referred to throughout the text. Bottom left: Schematic showing the dimensions of the different maze components. Right: Labeled photograph of the maze.



    Figure 2. Overview of maze task demands. In the forced choice phase, animals were forced, using a blockade (represented by a shaded black box), to either side of the forced side of the maze. Subsequently, animals had to choose, at the choice side of the maze, the same side of the maze to which they were previously forced. If the animal made the correct choice on their first selection, they were rewarded. Left column: Examples of two correct trials where the animal chose the choice arm on the same side as the forced arm. Right column: Examples of two incorrect trials where the animal chose the choice arm opposite to the side to which they were forced.


    Video 1. Video of a trained rat running two trials on the maze. Video is sped up 2× normal speed. Animal position is indicated with a red circle. The forced side of the maze is on the bottom and the choice side on the top. On the first trial, the animal is forced to the left arm of the forced side. The animal then runs down the center arm and correctly chooses the left arm of the choice side and receives a reward. The animal then returns to the forced side, where he is forced to the right arm. The animal runs down the center stem and incorrectly chooses the left arm and does not receive a reward. The animal then checks the right arm for a reward, which is not administered as the initial choice was incorrect.

  2. Before beginning

    Animal housing

    Animals were single housed with enrichment on a 12/12 light/dark cycle. Animals were trained during the light cycle so as to capture post-training sleep, but could easily be run during the dark cycle. Animals were food deprived to 85-90% of their starting weight, with free access to water.


    Maze construction

    1. Construct the maze to the parameters in Figure 1. Ends of forced and choice arms can be made slightly larger to allow for turnaround room.

    2. If planning on doing electrophysiology and the maze is constructed of metal, use a conductive wire to ground the maze to a ground source in the room.

    3. We advise covering the maze in black contact paper, which makes for easier cleaning and provides high contrast for animal tracking with overhead cameras.

    4. Construct two blocks the width of the maze (~2.5 in/6.3 cm), about 6 inches deep and 10 inches tall. These will be used to block off the forced arm during the learning and test phases, as well as the incorrect choice arm during the learning phase. These blocks should be deep and tall enough that the rat cannot crawl around or over them.


    Reward delivery

    We recommend using a liquid food reward so the animal cannot travel with the reward; we used 20% chocolate milk powder and 10% sucrose in water. We inexpensively created a remote reward delivery system as follows:

    1. Drill a hole in the wall of the maze at the two reward delivery sites.

    2. Glue or cement a small dish, such as a screw cover cap, beneath the drilled hole.

    3. Feed a pipette tip through the hole so it is over the dish, and connect the pipette tip to tubing. The tubing can then be run to the SeripettorTM Bottletop Dispensers for remote delivery. It is important to fill the length of the tubing with the reward liquid before putting the animal in the maze.


    Animal tracking

    Install overhead cameras above the maze, making sure the entire maze is in the field of view. Adjust the tracking software to your desired parameters, which will vary depending on whether you are tracking the animal using an LED. We adjusted camera parameters using a trial animal to avoid unintended maze exposure for a test animal. Some tracking programs, such as Bonsai, allow you to set the parameters after recording.


    Operating the maze

    Because the maze is not automated, it requires manual operation. Set up an area of the room where the maze operator can be hidden behind two semi-sheer curtains. The two curtains should part at the forced point of the maze. This setup allows the maze operator to remain hidden from view while staying within reach of the object blocking the non-forced arm of the maze, so it can be adjusted when the animal is at the maze choice arms. This setup also allows the operator to observe the choices made by the animal for manual reward delivery. Please refer to Video 1 for a demonstration of operating the maze.

      A reward is dispensed when the animal’s entire body, including the tail tip, is in the correct choice arm. While all four feet in the correct arm could be used as a criterion, animals often change or correct their choices with four feet (but not the entire length of their tail) in the arm. Using the tail tip as the criterion helps eliminate incorrect administration of reward.


  3. Animal training

    For an overview of the animal training timeline, see Figure 3.

    Handling phase (minimum of 10 days)

    Handle the animals for a minimum of 15 min a day for 10 days before placing them on the maze. Allow the rat to become familiar with you by holding them in your lap, stroking them lightly, and playing with them, over gradually increasing lengths of time. As the version of the maze we use is not automated, the animal must be familiar with you so as to avoid being distracted when you operate the maze. Begin food depriving the animal about a week prior to the start of maze training.



    Figure 3. Timeline of maze training. Animals are handled for ten days prior to first exposure to the maze. Three days into handling, food deprivation (to 85% of starting weight) begins and is maintained for the duration of all maze running. Three days before the anticipated start of training, the reward mixture should be introduced to the animal in their home cage to familiarize them with the chocolate milk and sucrose mixture. The animal is then trained for two days with unrewarded exploration of the maze, followed by two days of rewarded exploration, and three days of trials where the animal is double forced (i.e., forced at both the forced and choice sides of the maze to make the correct turns). Following these three training stages, the animal is ready to begin learning the maze task.


    Familiarizing the animal with the reward (3 days)

    Three days before beginning maze exploration, during the animal’s dark/feeding period, leave a small dish in the animal’s home cage with some prepared liquid reward. This is to familiarize the animal with the reward food and to avoid any neophobia on the maze.


    Unrewarded exploration phase (2 days)

    This phase is designed to get the animal familiar with the track and the surrounding landmarks. This phase should last for two consecutive days and no arms on the maze should be blocked off.

    1. Place the animal on the maze (location was not kept consistent).

    2. Let the animal explore, uninterrupted, for 30 min or until the animal is still for 5 min, whichever comes first.

    3. Remove the animal from the maze.

    4. On the second day, if the entirety of the track was not covered on the first exploration day, be sure to place the animal in the unexplored area when placing them on the track.


    Rewarded exploration phase (2 days)

    This phase is designed to familiarize the animal with the reward locations on the maze. This phase should last for two consecutive days and begin the day after the end of the unrewarded exploration phase.

    1. Before putting the animal on the track, fill the reward wells with about 0.8 ml of reward each. Set up the remote reward dispensers to dispense 0.8 ml of reward. No maze arms should be blocked off.

    2. Place the animal on the track, at either one of the forced arms or the middle stem.

    3. Allow the animal to explore and find the filled reward wells.

    4. When the animal has finished a reward, remotely dispense 0.5 ml reward into the now empty well after the animal has exited that arm of the maze.

    5. Continue doing this for 30 min or until the animal is still for 5 min, whichever comes first. There may be some initial hesitancy to take the reward on the maze; this is normal.


    Double forced learning phase (3 days)

    This phase is designed to familiarize the animal with the idea that they will be rewarded on the choice side after visiting the same side of the maze to which they were forced on the forced side. This phase is easiest to complete with two people, one to move the blockade on each side of the maze.

    1. Before putting the animal on the track, set up the remote reward dispensers to dispense 0.4 ml of reward.

    2. Prepare and make note of the direction the blockade will be on for 30 trials (you will likely only get through 5-12 trials on these days). As stated, the animal will be forced to variable sides of the maze, with no more than three consecutive trials forced to the same side. To prepare, either run prepared code that will output trials in this pattern (we used https://github.com/hsw28/behavior/blob/master/maze/leftright.py; a minimal illustrative version is sketched at the end of this list), or you can assign heads and tails of a coin to left and right and flip a coin. If the coin comes up the same way three times (for example, instructing three trials forced right), do not flip for the next trial and assign it to the opposite direction (i.e., left), then resume flipping to designate the following trial.

    3. Block off the designated side of the maze for the first trial on both the reward/choice arms and the forced arms. You will therefore be forcing the animal to the correct side of the maze on the choice side.

    4. Place the animal on the unblocked forced arm.

    5. Allow the animal to run to the rewarded arm and receive the reward.

    6. While the animal is taking the reward, set up for the next trial. If the animal is forced the same way, this will require no setup. If the animal is to be forced in the opposite direction, both blockades will need to be moved.

    7. The next trial is considered initiated when the animal has returned to the forced side of the maze (the unblocked arm for the next trial); you will not be moving the animal yourself. Tail tip in the arm is considered trial initiation.

    8. Allow the animal to run the next trial with the appropriate sides blocked off according to your plan from step two. Although, at this stage, there is no choice at the choice arm, it is still a good idea to get in the habit of not dispensing a reward until the animal has committed to a choice/reward arm with their tail tip in the arm.

    9. Repeat steps 6-8 for 30 trials, 30 min, or until the animal is still for 10 min, whichever comes first. It is common to only get 5-12 trials in on these training days.

    10. Repeat the above steps for a total of three consecutive days.
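
    The following is a minimal sketch (in Python) of one way to generate a forced-side sequence that obeys the constraint in step 2, i.e., no more than three consecutive trials forced to the same side. It only illustrates the stated rule and is not a reproduction of the authors' leftright.py script; the function name and defaults are our own.

      import random

      def forced_side_sequence(n_trials=30, max_repeats=3, seed=None):
          """Return a list of 'L'/'R' forced sides with no side repeated more
          than max_repeats times in a row."""
          rng = random.Random(seed)
          sides = []
          for _ in range(n_trials):
              if len(sides) >= max_repeats and len(set(sides[-max_repeats:])) == 1:
                  # The last max_repeats trials were all the same side: force a
                  # switch, mirroring the coin-flip rule described in step 2.
                  next_side = "L" if sides[-1] == "R" else "R"
              else:
                  next_side = rng.choice(["L", "R"])
              sides.append(next_side)
          return sides

      if __name__ == "__main__":
          # Example: print a 30-trial forced-side plan for one session.
          print(" ".join(forced_side_sequence(30, seed=1)))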


    Test phase (variable length)

    This is the final phase of learning and testing. The animal should acquire task proficiency (>75% correct) during this phase, although the time it takes to gain proficiency varies greatly (Figure 5B). We believe a three-day moving average above 75% correct would be an acceptable criterion.

    1. Before putting the animal on the track, set up the remote reward dispensers to dispense 0.4 ml of reward.

    2. Prepare and make note of the direction the blockade will be on for 30 trials (see step 2 in the double forced learning phase).

    3. Block off the designated side of the maze for the first trial on the forced side of the maze.

    4. Place the animal on the unblocked forced arm.

    5. Allow the animal to run the maze and select a choice arm.

    6. If the correct choice arm is selected on the first choice, remotely dispense the reward into the reward dish. You can dispense the reward when the animal’s tail tip is in the correct choice arm.

    7. If the animal chooses the incorrect arm, they are not rewarded even if they subsequently visit the correct arm. The animal must initiate a new trial for the chance to be rewarded.

    8. As soon as the animal makes their choice, whether correct or incorrect, set up for the next trial by moving the block on the forced side to the side needed for the next trial, as determined in step 2.

    9. A new trial is considered initiated when the animal returns to the forced side of the maze, with tail tip in the unblocked arm.

    10. Repeat steps 5-9 for 30 trials, 30 min, or until the animal is still for 10 min.

    11. An animal is considered to have learned the task when they complete the task at 75% correct or better for two consecutive days (a sketch of this criterion check appears after this list).

        For an example of an animal running two trials on the maze see Video 1.
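
    Below is a minimal sketch (in Python, with hypothetical example values) of the two proficiency measures referred to in this phase: the two-consecutive-day 75% criterion from step 11, and the three-day moving average suggested in the phase introduction (and used to smooth the learning curves in Figure 5).

      def met_two_day_criterion(daily_pct_correct, threshold=75.0):
          """Return the 0-based index of the second of two consecutive days at or
          above threshold, or None if the criterion is never met."""
          for day in range(1, len(daily_pct_correct)):
              if daily_pct_correct[day - 1] >= threshold and daily_pct_correct[day] >= threshold:
                  return day
          return None

      def three_day_moving_average(daily_pct_correct):
          """Trailing three-day moving average of percent correct."""
          return [sum(daily_pct_correct[i - 2:i + 1]) / 3.0
                  for i in range(2, len(daily_pct_correct))]

      if __name__ == "__main__":
          example = [40, 55, 50, 62, 70, 74, 78, 81]   # hypothetical percent correct per day
          print("Two-day criterion met on day index:", met_two_day_criterion(example))
          print("Three-day moving averages:", three_day_moving_average(example))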

Data analysis

A tail tip in the forced arm was the criterion used for the initiation of new trials. A tail tip in the choice arm was used to determine when an animal made a choice on the choice side of the maze. For a representative example, see Figure 5.
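
As a rough illustration of how these tail-tip criteria could be applied to tracking output, the Python sketch below assumes one tracked (x, y) position per video frame (e.g., the tail tip) in camera pixel coordinates, detects trial initiations and choices, and scores each trial as correct or incorrect. The rectangular arm regions are hypothetical and must be calibrated to your own camera view, and real Bonsai or Oat output formats will differ; treat this only as a starting point.

  # Hypothetical arm regions of interest: (x_min, x_max, y_min, y_max) in pixels.
  ARM_REGIONS = {
      "forced_left":  (0, 100, 0, 80),
      "forced_right": (200, 300, 0, 80),
      "choice_left":  (0, 100, 220, 300),
      "choice_right": (200, 300, 220, 300),
  }

  def arm_at(x, y):
      """Return the name of the arm containing the tracked point, or None if the
      point is on the center stem or elsewhere."""
      for name, (x0, x1, y0, y1) in ARM_REGIONS.items():
          if x0 <= x <= x1 and y0 <= y <= y1:
              return name
      return None

  def score_trials(positions, forced_sides):
      """Pair each trial's forced side ('L'/'R') with the first choice arm entered
      after trial initiation; return a list of booleans (True = correct)."""
      results = []
      trial = 0
      awaiting_choice = False
      for x, y in positions:
          arm = arm_at(x, y)
          if arm in ("forced_left", "forced_right") and not awaiting_choice:
              awaiting_choice = True        # tail tip in a forced arm initiates a trial
          elif arm in ("choice_left", "choice_right") and awaiting_choice:
              chosen = "L" if arm == "choice_left" else "R"
              results.append(chosen == forced_sides[trial])
              trial += 1
              awaiting_choice = False       # the next trial starts back at a forced arm
              if trial >= len(forced_sides):
                  break
      return results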

Notes

  1. For examples of many of the phenomena described in the notes below, see Figure 4.



    Figure 4. Example traces of an animal running during different training stages. Colors of traces correspond to the training period. Unrewarded exploration (green): The animal is allowed to freely explore the maze. The animal may not fully cover the maze. This animal only explored the lower forced arm once, so he was placed there on the second day of exploration. Rewarded exploration (purple): The animal is allowed to freely explore the maze with reward in the reward wells. Animals tend to spend the majority of their time during these trials on the choice side of the maze. Double forced (red): Note that only trials forced in one direction are shown here. The animal is forced to make the correct choice at both sides of the maze. Training and running (black): The animal begins training on the complete task. Left: Early training. The animal spends a lot of time on the choice arms of the maze, which they have associated with reward. The animal has not yet learned that they must return to the forced arm to initiate a new trial. Middle: The animal has begun to learn that they must return to the forced side of the maze, but still meanders and is not very focused on running and initiating trials. Right: The animal is well trained. They consistently return to the forced arms and are focused on the task, with little wandering or turning mid-arm.


  2. During the initial stages of training, the animal will tend to run back and forth between the reward arms. This is normal; it takes time for the animal to learn they have to return to the forced side of the maze. This will likely greatly decrease the number of trials initiated (Figure 5A). For an example, see Video 1.

  3. Animals at the beginning of training will also tend to run halfway up or down the middle arm. A new trial is not considered initiated until the tail tip is in a forced arm, and a new trial should also NOT be started if the animal has not yet visited a choice arm. This will also greatly decrease the number of total trials (Figure 5A).

  4. Even a well-trained animal will often check the other reward arm before returning to the forced side of the maze to initiate a new trial (note that in some automated variations of this maze, after the animal chooses at the choice side, a door is lowered to prevent the animal from checking the other side of the maze [Gomperts et al., 2015]). For an example, see Video 1.

  5. If an animal perseverates on one side of the maze (incorrectly turns the same way on >80% of trials) for several days, it can be useful to return to the double-forced stage of training for two days. A return to the double-forced stage can also be used if 10 days have passed and the animal is not improving.

  6. At the start of training, many animals are very curious about the blockade on the forced side of the maze; this is normal. However, your animal should NOT be able to climb around or over the blockade. If they are able to, build a taller and/or deeper blockade.

  7. At the start of training, it is normal to get very few trials done in 30 min. This occurs because animals run slowly and spend a lot of time going back and forth between reward arms, rather than initiating new trials by returning to the forced arms. As the animals learn the task and how to initiate new trials, they will become much faster (Figure 5A).

  8. There can be great fluctuations in the percentage of correct trials for a single animal. This can be due to natural behavior patterns mirroring trial demands, without the animal actually learning the task. For instance, if your animal has a strong left-turn bias and a number of trials require a left turn, it may appear that the animal has learned the task when they do not actually understand the task demands. Similarly, if the randomly chosen blocked sides alternate, this can mirror animals’ natural tendency to alternate, even though the animals have not actually learned the task (Figure 5B).

  9. We found there to be extensive variability between animals in their ability to learn this task. The fastest animal learned within 5 days and consistently performed at over 95% correct. Most animals took 2-4 weeks and were generally around 80% correct. A small proportion of animals were never able to learn the task to proficiency (Figure 5B).



    Figure 5. Trials and learning curves of five animals. Note that all five animals were implanted with electrophysiology arrays, which may impact learning speed. Graphs begin on the first day of testing (after the double forced period) and continue until animals met criterion (two consecutive days with 75% or greater accuracy; animals 1-4) or, in the case of animal 5, until the animal was sacrificed. A. Number of trials completed by each animal. As the animals become more proficient at the task, the number of trials initiated and completed in 30 min increases. The number of trials continued to increase even after the animals met criterion. Left: All data. Right: Data smoothed with a three-day moving average. B. Accuracy of each animal; animals are the same as in A. The 75% correct criterion is marked with a red dotted line. There is high variability in the amount of time it took each animal to reach criterion (from 9 days to not reaching criterion after 42 days). Left: All data. Right: Data smoothed with a three-day moving average.


  10. Be sure not to dispense reward until the animal has committed (tail tip in) to their choice in the choice side of the maze.

  11. Because this non-automated version of the task requires the input of the experimenter, it is important to keep your own behavior stereotyped to avoid serving as a cue in the task. The experimenter who trained the animal should run the task and avoid wearing any scented products. As noted above, the experimenter should sit behind a curtain when running the animal to avoid providing any visual cues, and the blockade should be moved only when the animal is at the other side of the track, ideally while they are receiving a reward and distracted. If multiple animals are being run, the track should be cleaned between each animal.

Recipes

  1. Reward liquid

    100 g of boiling water was mixed with 10 g sucrose and 20 g chocolate milk powder, such as Nesquik Chocolate Powder Drink Mix, until fully dissolved. The mixture was allowed to cool before use. The mixture was refrigerated between uses and remade weekly.

Acknowledgments

H.S.W. was supported by the Department of Defense (DoD) through the National Defense Science & Engineering Graduate Fellowship (NDSEG) Program. The protocol was adapted from previous works, including previous manuscripts from the Wilson lab. We thank Dr. Steven Gomperts for his valuable feedback.

Competing interests

There are no competing interests.

Ethics

All procedures were performed in accordance with MIT Committee on Animal Care and NIH guidelines under Wilson protocol 0417-037-20, valid from 2017-2020.

References

  1. Ainge, J. A., Tamosiunaite, M., Woergoetter, F. and Dudchenko, P. A. (2007). Hippocampal CA1 place cells encode intended destination on a maze with multiple choice points. J Neurosci 27(36): 9769-9779.
  2. Brito, G. N. and Thomas, G. J. (1981). T-maze alternation, response patterning, and septo-hippocampal circuitry in rats. Behav Brain Res 3(3): 319-340.
  3. Carr, H. (1917). The alternation problem. J Animal Behavior 7(5): 365-384.
  4. Deacon, R. M. and Rawlins, J. N. (2006). T-maze alternation in the rodent. Nat Protoc 1(1): 7-12.
  5. Dudchenko, P. A. (2004). An overview of the tasks used to test working memory in rodents. Neurosci Biobehav Rev 28(7): 699-709.
  6. Foreman, N. and Ermakova, I. (1998). The Radial Arm Maze: Twenty Years On. In: Handbook Of Spatial Research Paradigms And Methodologies. Foreman, N. and Gillett, R. (Eds.). Psychology Press. p. 87-144.
  7. Fortin, N. J. (2008). Navigation and Episodic-Like Memory in Mammals. In: Learning and memory a comprehensive reference. Byrne, J. H. (Ed.). Academic, London. p. 385-417.
  8. Gomperts, S. N., Kloosterman, F. and Wilson, M. A. (2015). VTA neurons coordinate with the hippocampal reactivation of spatial experience. Elife 4: e05360.
  9. Gupta, K., Keller, L. A. and Hasselmo, M. E. (2012). Reduced spiking in entorhinal cortex during the delay period of a cued spatial response task. Learn Mem 19(6): 219-230.
  10. Hollup, S. A., Molden, S., Donnett, J. G., Moser, M. B. and Moser, E. I. (2001). Accumulation of hippocampal place fields at the goal location in an annular watermaze task. J Neurosci 21(5): 1635-1644.
  11. Huxter, J. R., Thorpe, C. M., Martin, G. M. and Harley, C. W. (2001). Spatial problem solving and hippocampal place cell firing in rats: control by an internal sense of direction carried across environments. Behav Brain Res 123(1): 37-48.
  12. Jones, M. W. and Wilson, M. A. (2005a). Phase precession of medial prefrontal cortical activity relative to the hippocampal theta rhythm. Hippocampus 15(7): 867-873.
  13. Jones, M. W. and Wilson, M. A. (2005b). Theta rhythms coordinate hippocampal-prefrontal interactions in a spatial memory task. PLoS Biol 3(12): e402.
  14. Kimble, D. P. and Kimble, R. J. (1965). Hippocampectomy and response perseveration in the rat. J Comp Physiol Psychol 60(3): 474-476.
  15. Lash, L. (1964). Response Discriminability and the Hippocampus. J Comp Physiol Psychol 57: 251-256.
  16. Newman, J., Hale, G., Myroshnychenko, M., Voigts, J., Flores, F. J., Levy, S. and DonatoRidgley, I. (2017). jonnew/Oat: Oat Version 1.0 (Version 1.0). Zenodo. http://doi.org/10.5281/zenodo.1098579.
  17. O'Keefe, J. and Nadel, L. (1978). The Hippocampus as a Cognitive Map. Oxford University Press.
  18. O'Keefe, J. and Speakman, A. (1987). Single unit activity in the rat hippocampus during a spatial memory task. Exp Brain Res 68(1): 1-27.
  19. Olton, D. S. and Samuelson, R. J. (1976). Remembrance of Places Passed: Spatial Memory in Rats. J Exp Psychol Animal Behavioral Processes 2(2): 97-116.
  20. Olton, D. S., Becker, J. T. and Handelmann, G. E. (1979). Hippocampus, Space, and Memory. Behav Brain Sci 2(3): 313-322.
  21. Redish, A. D. and Touretzky, D. S. (1998). The role of the hippocampus in solving the Morris water maze. Neural Comput 10(1): 73-111.
  22. Roberts, W. W., Dember, W. N. and Brodwick, M. (1962). Alternation and exploration in rats with hippocampal lesions. J Comp Physiol Psychol 55: 695-700.
  23. Siegle, J. H. and Wilson, M. A. (2014). Enhancement of encoding and retrieval functions through theta phase-specific manipulation of hippocampus. Elife 3: e03061.
  24. Tanila, H., Ku, S., Kloosterman, F. and Wilson, M. A. (2018). Characteristics of CA1 place fields in a complex maze with multiple choice points. Hippocampus 28(2): 81-96.
  25. Tolman, E. C. (1925). Purpose and cognition: the determiners of animal learning. Psychol Rev 32: 285-97.
  26. Tolman, E. C. (1948). Cognitive maps in rats and men. Psychol Rev 55(4): 189-208.
  27. Tolman, E. C., Ritchie, B. F. and Kalish, D. (1946). Studies in spatial learning; place learning versus response learning. J Exp Psychol 36: 221-229.
  28. Wiener, S. I., Paul, C. A. and Eichenbaum, H. (1989). Spatial and behavioral correlates of hippocampal neuronal activity. J Neurosci 9(8): 2737-2763.
  29. Wirtshafter, H. S. and Wilson, M. A. (2019). Locomotor and Hippocampal Processing Converge in the Lateral Septum. Curr Biol 29(19): 3177-3192 e3173.
  30. Wirtshafter, H. S. and Wilson, M. A. (2020). Differences in reward biased spatial representations in the lateral septum and hippocampus. Elife 9: e55252.

Copyright Wirtshafter et al. This article is distributed under the terms of the Creative Commons Attribution License (CC BY 4.0).
Citation: Readers should cite both the Bio-protocol article and the original research article where this protocol was used:
  1. Wirtshafter, H. S., Quan, M. and Wilson, M. A. (2021). Dissociating Behavior and Spatial Working Memory Demands Using an H Maze. Bio-protocol 11(5): e3947. DOI: 10.21769/BioProtoc.3947.
  2. Wirtshafter, H. S. and Wilson, M. A. (2020). Differences in reward biased spatial representations in the lateral septum and hippocampus. Elife 9: e55252.