Spatial navigation by congenitally blind individuals
Abstract
Spatial navigation in the absence of vision has been investigated from a variety of perspectives and disciplines. These different approaches have advanced our understanding of spatial knowledge acquisition by blind individuals, including their abilities, strategies, and corresponding mental representations. In this review, we propose a framework for investigating differences in spatial knowledge acquisition by blind and sighted people consisting of three longitudinal models (i.e., convergent, cumulative, and persistent). Recent advances in neuroscience and technological devices have provided novel insights into the different neural mechanisms underlying spatial navigation by blind and sighted people and the potential for functional reorganization. Despite these advances, there is still a lack of consensus regarding the extent to which locomotion and wayfinding depend on amodal spatial representations. This challenge largely stems from methodological limitations such as heterogeneity in the blind population and terminological ambiguity related to the concept of cognitive maps. Coupled with an over‐reliance on potential technological solutions, the field has diffused into theoretical and applied branches that do not always communicate. Here, we review research on navigation by congenitally blind individuals with an emphasis on behavioral and neuroscientific evidence, as well as the potential of technological assistance. Throughout the article, we emphasize the need to disentangle strategy choice and performance when discussing the navigation abilities of the blind population. WIREs Cogn Sci 2016, 7:37–58. doi: 10.1002/wcs.1375
For further resources related to this article, please visit the WIREs website.
INTRODUCTION
Blind individuals are faced with the challenge of finding their way through built environments that can be difficult to interpret, disorienting, and even intimidating.1, 2 Research on visual impairment and blindness has approached this challenge by attempting to gain an understanding of the cognitive processes underlying navigation without vision and developing assistive technologies for obstacle avoidance and route selection. Despite the impressive number of technological advances (for reviews, see Refs 3, 4, 5), these devices are not often used by the blind population.6 At the same time, basic research in visual impairment has somewhat stagnated because of a failure to incorporate modern frameworks and recent findings from the field of spatial cognition. For example, traditional frameworks regarding the microgenesis of spatial knowledge7 have resulted in inconsistent interpretations of survey knowledge (often termed ‘cognitive map’). This confusion has led to apparently contradictory results that have propagated through the field and, consequently, a lack of clarity regarding the navigation abilities of blind people.
We approach the topic from the perspective that blind and sighted people may have similar potential (i.e., abilities) for spatial learning that can support navigation. Our position is that, in order to investigate the abilities of blind individuals, researchers should study the relationship between navigation strategies (i.e., the implementation of particular action sequences8, 9) and navigation performance (i.e., the extent to which a person is successful with respect to a particular navigation task) rather than approaching either factor independently.10 Previous research that has studied both strategies and performance has typically constrained blind and sighted participants to adopt similar strategies. For example, this occurs when sighted participants are asked to wear blindfolds or when blind participants are asked to complete visually guided tasks (e.g., pointing judgments11). In such cases, a difference in performance is inconclusive with respect to abilities alone because the strategy adopted by both blind and sighted people invariably disadvantages one of the groups. These results are also inconclusive when researchers are unable to detect a difference in performance. Indeed, a nonsignificant difference does not necessarily provide evidence for the absence of an effect. In contrast, allowing participants to adopt different strategies would provide insight into the abilities of blind and sighted people. Here, similar performances would indicate similar abilities, and different performances would suggest different abilities (see Figure 1).

In order to provide a comprehensive review on navigation and blindness, we limit ourselves to research on congenitally blind individuals. Previous research often separates congenitally blind people from adventitiously blind, blindfolded sighted, and sighted people. Comparisons across these groups are appropriate but difficult to implement because of a lack of agreement regarding the distinction between congenitally and adventitiously blind individuals in terms of acuity, age of onset, and the presence or absence of additional disabilities.12, 13, 14, 15 In addition, blind groups are often at an advantage when compared to blindfolded sighted groups given their previous reliance on other perceptual modalities. Moreover, comparisons between blind and sighted groups can be problematic if the task inherently favors the visual modality. We will also focus our discussion on meso‐ and macroscale spaces (i.e., spaces that are larger than the observer and navigable16), but we will include empirical examples that use microscale spaces in order to infer navigation behavior.
This review is organized into six sections. First, we present behavioral research in human spatial navigation. Here, we discuss the concepts of locomotion and wayfinding, different frames of reference (e.g., egocentric or allocentric), and the affordances provided by different scales of space (e.g., micro, meso, or macro). Second, we discuss discrete and continuous frameworks for the acquisition of spatial knowledge. Third, we contrast different theories regarding the spatial abilities of blind people (i.e., difference, deficiency, and inefficiency17) as they relate to three possible models for the acquisition of spatial knowledge (i.e., convergent, cumulative, and persistent). Fourth, we discuss multimodal processing (i.e., obtained via various modalities) and amodal representations (i.e., independent of a specific modality18) such as what is often termed the ‘cognitive map.’ Fifth, we review the neural correlates of navigation by the blind population as they relate to functional reorganization in brain regions specifically associated with navigation. Sixth, we consider the future of research in navigation by blind people including a review of technological advances and heterogeneity in the blind population. At the end of this review, we intend to provide a framework for the interpretation of prior work and the facilitation of future work.
HUMAN SPATIAL NAVIGATION
Every day, blind individuals navigate indoor and outdoor environments that favor the visual sense. In order to reach a destination, they must plan and execute a series of decisions through these environments. Montello19 distinguished between wayfinding and locomotion components of spatial decision‐making. This distinction corresponds to the terms ‘orientation’ and ‘mobility’ that are commonly used in the visual impairment literature.20 Locomotion entails immediate responses to environmental features, such as avoiding a fire hydrant or stepping over a curb. These immediate responses are invariably egocentric because environmental information is acquired with reference to the observer's body.21 In contrast, wayfinding involves reasoning about immediate22 and remote23 environments and can employ transient (i.e., short‐term) and enduring (i.e., long‐term) mental representations.24 During wayfinding, people may also adopt reference frames other than those directly experienced (i.e., allocentric).
Reference Frames and Scales of Space
In wayfinding, egocentric and allocentric reference frames always involve at least one observer, two environmental features, and the spatial relations among them.25 Unlike egocentric reference frames, allocentric reference frames are primarily composed of object‐to‐object relations and/or an abstract coordinate system.26 Various types of allocentric reference frames include environmental,25 intrinsic,27 and absolute.28 For the remainder of this review, we will limit ourselves to the distinction between egocentric and allocentric reference frames (for a review, see Klatzky29). There is a tendency to associate blindness with egocentrism,30 especially at larger scales,31 but researchers have yet to disentangle whether or not egocentrism in the blind has resulted from aspects of experimental design.10 For comparisons of blind and sighted individuals, more studies are needed in order to directly compare two types of task and/or performance criteria, one that favors egocentrism and one that favors allocentrism.
Both egocentric and allocentric reference frames can be used to represent spaces of different scales. While several frameworks have been proposed,22, 32 we will use the typology described by Schinazi16 specifically developed for the classification of experiments with blind and visually impaired individuals. This typology distinguishes between micro‐ (i.e., manipulatory space that does not require full‐body locomotion; see Box 1), meso‐ (i.e., space that is larger than the observer but can be apprehended from one viewpoint), and macroscales (i.e., space that is larger than the observer but must be apprehended from multiple viewpoints). Previous research has suggested particular associations between the scale of the environment and the adopted frame of reference (for a recent review, see Ref 33), but the adoption of different reference frames may vary with respect to individual and task‐related factors (for a review, see Ref 34). Similarly, the scale of space influences the choice of navigation strategy.35
Many experiments in the visual impairment and navigation literature involve either learning or testing in a microscale space. In some cases, participants are asked to learn a microscale environment from which macroscale behaviors can be inferred.36, 37, 38, 39, 40, 41, 42, 43 In other cases, participants are asked to learn a macroscale space and transfer this knowledge to a microscale task.44, 45 There are particular challenges and opportunities associated with each of these approaches.
On the one hand, small‐scale experiments46, 47 (e.g., table‐top tasks, tasks in fMRI) restrict participants’ locomotion and limit proprioceptive feedback.48 In addition, these tasks cannot always dissociate egocentric and allocentric reference frames.33 Given that navigation at the macroscale requires the coordination of multiple reference frames,49, 50 these behavioral and neural mechanisms may only be partly represented by studies at smaller scales.33
On the other hand, microscale investigations provide a new level of explanation for macroscale behavior using neuroscientific evidence. For example, activation of the occipital lobe during tactile tasks has revealed similarities in the neural processes of blind people doing a tactile task and sighted people doing the same task visually.51, 52, 53, 54 Although only a subset of these mechanisms can be studied at the microscale,33 greater experimental control is possible. Indeed, the systematic variation afforded by virtual reality may allow researchers to decompose different stages of navigation at the macroscale. Together, these considerations suggest that learning and testing at the microscale are necessary for navigation research with blind people, but researchers should be cautious when interpreting their results.
Spatial Knowledge Acquisition
At the macroscale, researchers have proposed two frameworks for describing the acquisition of spatial knowledge. The discrete (also known as ‘dominant’55) framework posits that spatial knowledge is acquired via three, qualitatively distinct stages7 (for an alternative discrete framework, see Ref 56). During the first ‘landmark’ stage, people begin to learn salient features of the environment that can be used for establishing a frame of reference. During the second ‘route’ stage, landmarks become connected via routes that progress from topological to Euclidean formats. These routes allow for the construction of ‘mini‐maps’ that are locally, but not globally, coherent.57 In the last ‘survey’ stage, people integrate these mini‐maps using an objective frame of reference. This global representation is sometimes referred to as a ‘cognitive map.’58 Survey knowledge is often considered to be Euclidean in the sense that people can draw inferences regarding the straight‐line distance between two points. Like those of Golledge59 and Shemyakin,60 Siegel and White's7 framework was influenced by Piaget and Inhelder's61 stage theory of cognitive development.
The alternative ‘continuous’ framework55 differs from the discrete framework in at least two ways. First, the strict separation of landmark, route, and survey stages is criticized because people are capable of acquiring different types of spatial information in parallel.62, 63, 64 Indeed, Schinazi and Epstein64 found that, at decision points, landmarks and route information (i.e., direction of travel) were encoded simultaneously. Similarly, Ishikawa and Montello65 found large individual differences with respect to the timing of survey knowledge acquisition. Second, Euclidean spatial knowledge is formed earlier in spatial learning than the discrete framework would suggest.65, 66 These findings support theories of Bayesian integration67 and adaptive combination68 of different sources of spatial information (e.g., place learning, response learning, cue learning, and dead reckoning). These theories posit that the reliability of spatial cues in the environment helps determine the extent to which the corresponding information is acquired.
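As a minimal sketch of such reliability weighting (assuming independent Gaussian cue noise and a flat prior, which is the standard simplifying case rather than a claim made by these theories), the optimal combined location estimate weights each cue by its reliability:

```latex
\hat{x} \;=\; \sum_i w_i\, x_i,
\qquad
w_i \;=\; \frac{1/\sigma_i^2}{\sum_j 1/\sigma_j^2}
```

where \(x_i\) is the location indicated by cue \(i\) and \(\sigma_i^2\) is the variance of its noise. A cue's weight grows as its reliability (inverse variance) increases, so information from unreliable cues contributes correspondingly less to the resulting estimate.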
SPATIAL LEARNING BY BLIND PEOPLE
Similarly, the Convergent Active Processing in Interrelated Networks (CAPIN) theory has described the relative weighting of spatial information obtained through different perceptual modalities.69, 70 In this case, the weighting is determined by the reliability of the cues provided by each perceptual modality (e.g., the precision of the information specifying an object's location). Developed from studies with blind and sighted children, this model posits that, in the absence of vision, other modalities receive greater weight than they otherwise would have. Redundancy in the information received through the different specialized modalities allows individuals to compensate for lack of vision. During wayfinding, some of the information provided by vision, audition, and proprioception is redundant. Compared to the other modalities, vision provides relatively precise information regarding the location of specific features for allocentric encoding. As such, sighted individuals may attribute more weight to vision than to the other modalities. When sighted individuals are blindfolded, these weights will remain the same. Consequently, blindfolded sighted people may underperform relative to blind people, even for tasks that require allocentric encoding.71 Blind individuals may be similarly capable of allocentric coding via audition and proprioception, but these modalities by themselves are, for the most part, insufficient. The effectiveness of auditory information is limited because not all meaningful features emit sounds, and the effectiveness of proprioceptive information may be limited because of physical barriers. However, the combination of audition and proprioception may facilitate the formation of an allocentric reference frame but at a different time scale (e.g., at the level of eye movements vs at the level of head rotations or walking).
Assuming redundancy in the spatial information provided by different modalities, the CAPIN theory would predict the amount of spatial knowledge acquired by blind and sighted individuals to eventually converge with experience. As such, the pattern predicted by the CAPIN theory represents one of three possible models (inspired by Chen72) for the progression of spatial knowledge acquisition by blind and sighted individuals (see Figure 2). These models represent extensions of the difference, deficiency, and inefficiency theories proposed by Fletcher17 that characterize the history of research on the spatial abilities of blind people. Note that these models assume that vision provides sighted individuals with an initial advantage relative to blind individuals, but there have been a few cases in which the blind outperformed the sighted.73, 74 In addition, different models may apply to different spatial tasks. For a review of differences in memory and inferential tasks, see Ref 75, and for a review of methods in spatial cognition and blindness, see Ref 76.
Three models of differences in spatial knowledge acquisition between blind and sighted individuals. On the x‐axis is the amount of experience with a particular environment or task. On the y‐axis is the extent of spatial knowledge acquisition. Because of the lack of vision, blind individuals start at a disadvantage in each of these models. (a) Convergent model: the difference between blind and sighted individuals decreases over time until reaching a similar level of spatial knowledge. (b) Cumulative model: the difference between blind and sighted individuals increases with experience. (c) Persistent model: blind and sighted individuals continue to acquire spatial knowledge with experience, but differences in spatial knowledge remain constant.

Convergent Model
The difference/convergent model suggests that blind individuals begin at a disadvantage relative to sighted individuals but that this disparity decreases with experience until reaching similar levels of performance. Here, experience may refer to exposure to a particular environment, the repeated performance of a particular task, or the general development of spatial ability with age. This model is supported by several lines of research on tactile discrimination,77 tactile matching,78 assembly,73 rotation tasks,73 and distance estimation73, 79 at the microscale. Hollins and Kelley80 also found that blind and sighted participants performed similarly in a spatial memory task (i.e., direction estimation) once blind participants were able to touch the task surface. At the mesoscale, blind and sighted participants performed similarly at triangle completion tasks81, 82, 83 and walking novel paths among objects in a room.84, 85 However, Millar10 and Liben86 note that the latter study does not necessarily provide evidence for a Euclidean representation given the curvature in the walked paths and possible auditory cues. In addition, Corazzini and colleagues87 found that blind participants exhibited larger learning effects than blindfolded sighted participants in the egocentric condition of an auditory version of a Morris water maze task. However, these results are difficult to interpret because performance in the egocentric and allocentric conditions differed on the first trial, before learning could have occurred.
At the macroscale, research has shown that, although blind participants tend to plan routes in more detail (compared to sighted participants88), the accuracy with which models of environments were reconstructed was similar for blind and sighted groups.88, 89 Indeed, a recent study found that congenitally blind participants outperformed blindfolded sighted participants in terms of shortcutting, direction estimates, distance estimates, and sketch mapping after walking two routes.71
Cumulative Model
The deficiency/cumulative model suggests that vision is critical for the development of spatial representation and that the progression of spatial knowledge acquisition is slower for blind people relative to sighted people. At its extreme, this model holds that, in the absence of vision, individuals are incapable of forming spatial representations.90 According to a relatively moderate interpretation of this model, blind people may be able to acquire spatial knowledge, but the gap between blind and sighted individuals increases with experience. There is little evidence to support this view. For example, Cleaves and Royal91 found that, for both memory and inferential tasks at the microscale, the disparity in performance between blind and sighted individuals increased with task complexity (i.e., finger mazes) and, for late blind individuals, with the time since participants lost their vision. Similarly, at the meso‐ and macroscales, a longitudinal study found that blind children fell increasingly behind sighted children when estimating the straight‐line directions of different locations in their homes and neighborhoods. This disparity between blind and sighted children increased with the size of the environment under consideration.92
Persistent Model
In the inefficiency/persistent model, the absence of vision results in an initial disadvantage that remains constant with experience because auditory and proprioceptive cues are less effective for spatial knowledge acquisition than vision. Although spatial knowledge can be acquired by blind people (significantly above chance performance), this knowledge may be less precise and/or less accurate compared to the knowledge of sighted people. Worchel93 provided initial support for this theory using microscale tasks that involved the reproduction and mental matching of geometric forms; this work is often cited as evidence for inefficiency theory because the tasks favored the visual modality. However, upon careful examination, the results are more in line with the cumulative model given the superior performance of sighted participants and the significant relationship between age of onset and accuracy on spatial tasks.93 It should also be noted that most of the evidence for this theory is based on differences in performance measured at a single point in time that are assumed to remain constant across time. Consequently, this approach cannot distinguish between the three aforementioned longitudinal models. As a result, the number of studies supporting this theory may be overestimated. At the microscale, the sighted consistently outperformed the blind in a haptic version of the triangle completion task,94 mental scanning,95 an incidental cued‐recall test,95 perspective‐taking,96 mental rotation,97, 98, 99 distance estimation,74 direction estimation,100 and change detection75 after locomotion.101 At the mesoscale, the performance of blind participants (relative to sighted participants) is sometimes lower for inferential tasks than memory tasks.17, 102 However, Rieser and colleagues103 found that vision during locomotion can facilitate the completion of particular memory tasks when compared to an inferential task.
Similarly, blind participants performed worse than sighted participants in Euclidean and/or functional (i.e., route) distance estimation,104, 105 direction estimation,106 and a variety of other spatial tasks (e.g., short‐cutting, model construction107). For the most part, these patterns in performance are also present at the macroscale. Here, blind participants exhibited difficulties in terms of inferential direction tasks,108 direction estimation from memory,109 distance estimation,110 model construction,44, 111 and sketch maps.45
Interestingly, there is less support for the persistent model than the convergent or cumulative models in the developmental literature (for a review, see Refs 14, 112). Several researchers have reported a pattern that resembles the cumulative model for Piagetian reasoning,113 projective relations,114 perspective‐taking,115 and gross motor skills.116, 117, 118 According to these studies, vision plays a critical role in the early stages of development,119 but Millar10 warns that lags in the Piagetian stages of development may indicate deficiencies in the discrete framework rather than deficiencies in the blind individuals’ abilities. Others have suggested that the natural course of development may eliminate performance gaps in terms of sensorimotor understanding and exploration of the environment.120 Similarly, early interventions can reduce the gap in Piagetian reasoning.121 Here, research indicates a convergent pattern as children's spatial thinking becomes more abstract.120
SPATIAL PROCESSING AND REPRESENTATION BY BLIND PEOPLE
The spatial representations underlying navigation performance can be abstracted from different perceptual modalities.122, 123, 124 These modalities vary with respect to the amount of information that can be acquired simultaneously and sequentially.125 Previous research has suggested that vision allows for simultaneous perception, whereas audition126 and haptics127 usually allow for sequential perception. Disadvantages of the blind during navigation have been attributed to this distinction between vision and the other modalities.112 Indeed, a blind person with a cane can typically anticipate only one meter of spatial information per cane movement. However, the extent to which the information acquired by any modality is sequential or simultaneous depends on the spatial and temporal scales under consideration. As such, all of the perceptual modalities are sequential and simultaneous to some extent. Visual information, for example, is distributed across time as well as space. The eyes tend to fixate one object at a time as they survey a scene,128 and larger scenes require the integration of visual information over a longer period of time. The advantage of vision is thus the speed with which the eyes can move compared to head or body movements. While sensory substitution devices (SSDs) have improved the spatial and temporal resolution of available spatial information (e.g., the EyeCane129), this information still needs to be abstracted into an interpretable form (e.g., using artificial intelligence preprocessing6).
Multimodal Processing and Amodal Representations
Amodality posits that spatial representations can be abstracted from the perceptual modality through which the information was originally acquired.18, 130 Thus far, there are two types of evidence for this proposition. First, several researchers have attributed similarities in the performance of blind and sighted individuals (i.e., convergence) to amodal spatial representations.79 Similar performance has been found for mental rotation tasks97, 98, 99 (but see Klatzky73), mental scanning tasks,74, 95 incidental cued‐recall tasks,95 Euclidean and functional distance estimation for regular street networks,79 and triangle completion tasks.83 Second, some researchers have investigated the functional equivalence of spatial information acquired through different perceptual modalities (for a review, see Refs 18, 130). Here, functional equivalence refers to similarities in performance resulting from information gained through two or more perceptual modalities (or language131, 132). These studies have found functional equivalence of vision and language in direction and distance judgments,133 vision and haptics in a spatial updating task,134 audition and language in a walk‐to‐target task,131 and vision and audition in a spatial working memory task.135 In addition, to confirm or disconfirm the primary role of vision, some of these experiments have used blind participants in addition to the blindfolded sighted or sighted participants.131, 134
Despite these efforts, amodality remains an open issue in the literature because of the difficulty in disentangling representation and process using behavioral data.136 Any hypothesis regarding the format of a representation underlying behavior must assume (either implicitly or explicitly) particular processes and vice versa.136 As such, the CAPIN model assumes that amodal representations underlie spatial behavior in order to support the hypothesis that spatial information can be acquired through multiple modalities.69 However, multimodal processing may also be paired with modality‐specific representations to form an alternative theory. Similarly, amodal representation may be paired with modality‐specific processing.18, 130 In order to provide evidence for or against amodal representation, we propose an approach using Bayesian models of integration.67 Here, for a navigation task, one would systematically vary the reliability of location cues from two different modalities (e.g., vision and audition) and the distance between the indicated locations. These two sets of parameters can be used in order to calculate the optimal localization response (according to the Bayesian model) from trial to trial. On the one hand, significant deviations from this optimal response would provide evidence against amodality. Such deviations may result from either a failure to integrate the two cues or a disparity between the objective cues and the perceived cues. This disparity may be used in order to match cues from different modalities (e.g., vision and audition) in terms of perceived information content (e.g., distal access). At the same time, this possibility can be eliminated by also obtaining responses to the two cues individually (as recommended by Cheng et al.67). On the other hand, the extent to which the Bayesian model can predict localization responses would provide evidence for amodality.
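The trial‐by‐trial prediction described above can be sketched concretely. The following minimal illustration (the function name and cue values are hypothetical; it assumes independent Gaussian cue noise and a flat prior) computes the optimal localization response for one trial by weighting each modality's cue by its reliability (inverse variance):

```python
def optimal_localization(x_vis, sigma_vis, x_aud, sigma_aud):
    """Maximum-likelihood combination of a visual and an auditory
    location cue (independent Gaussian noise, flat prior): each cue
    is weighted by its reliability, i.e., its inverse variance."""
    r_vis = 1.0 / sigma_vis ** 2          # reliability of the visual cue
    r_aud = 1.0 / sigma_aud ** 2          # reliability of the auditory cue
    w_vis = r_vis / (r_vis + r_aud)       # relative weight of vision
    x_hat = w_vis * x_vis + (1.0 - w_vis) * x_aud
    var_hat = 1.0 / (r_vis + r_aud)       # variance of the combined estimate
    return x_hat, var_hat

# One hypothetical trial: the visual cue indicates a target at 0.0 m
# (sd = 0.5 m); the auditory cue indicates 1.0 m (sd = 1.0 m).
x_hat, var_hat = optimal_localization(0.0, 0.5, 1.0, 1.0)
# The more reliable visual cue dominates: x_hat = 0.2, var_hat = 0.2.
```

Comparing participants' localization responses against `x_hat`, and their response variability against `var_hat`, across trials with systematically varied cue reliabilities would provide the two tests described above.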
Cognitive Maps
The amodal representations underlying navigation are often referred to as ‘cognitive maps.’ Psychologist Edward Tolman137 coined the term ‘cognitive map’ in 1948 in order to describe the neural mapping of stimuli to responses. Although he investigated the spatial behavior of ‘hungry’ rats in a variety of mazes, he intended the term to be used more broadly.138 This is evident in his discussion of such topics as motivation during development and race relations. Over two decades later, O'Keefe and Dostrovsky139 referred to Tolman in order to define their discovery of ‘a cognitive, or spatial, map of [a rat's] environment’ (p. 171). In the introduction to The Hippocampus as a Cognitive Map, O'Keefe and Nadel140 later revised this reference to Tolman by acknowledging that the original definition of cognitive map was too vague for their purposes. According to their new definition, cognitive maps were psychological spaces with absolute reference frames.141
The current challenge with cognitive maps stems from the vague definitions of the term,142 its constant reformulation,143, 144 and inaccurate citations of Tolman's and O'Keefe's previous work. At its most extreme, cognitive maps are considered structurally analogous to a cartographic map145 in that they represent Euclidean spatial relations in a global format, from a top–down view, and with an allocentric frame of reference.146, 147 While most researchers acknowledge that the cartographic map is only a yardstick for comparison,138 at least five issues regarding cognitive maps remain. Cognitive maps may be defined with respect to level of abstraction (e.g., amodal vs modality‐specific148), scope (e.g., global vs local58), metricality (e.g., Euclidean vs topological149, 150), perspective (e.g., orientation free, top‐down, or first‐person151, 152), and reference frame (e.g., egocentric vs allocentric, absolute vs relative153). Consequently, different researchers have employed different tasks in order to investigate cognitive maps.76 An additional source of confusion is the nesting of these various terms. For example, reference frames can include a perspective, but a perspective does not necessarily assume a particular reference frame.
With respect to research on the blind population, the term cognitive map continues to be loosely applied. There are many important findings regarding the content (i.e., the amount of acquired spatial knowledge) and quality (i.e., its character or format) of blind individuals’ spatial representations (see Spatial Learning by Blind People section). However, separate aspects of the term are conflated in discussions of whether blind people have cognitive maps. For example, researchers may use both route knowledge techniques (e.g., distance and direction estimates) and survey techniques (e.g., sketch maps) in order to probe ‘cognitive map knowledge.’44, 76, 88, 106, 107, 109, 154 Performances on these tasks may appear to provide overwhelming evidence for cognitive maps in blind people. However, this inconsistency in the operationalization of the term may also represent a variety of specific spatial abilities. Thus, we recommend the use of relatively well‐defined terms (e.g., metricality and reference frame) when investigating the content and quality of spatial representations in the absence of vision.
NEURAL CORRELATES OF HUMAN NAVIGATION
In order to further investigate the process of cognitive mapping, researchers have also employed neuroscientific methods. Research with animals has found that the medial temporal lobe is critical for the allocentric spatial representations (often referred to as cognitive maps) that underlie navigation.140 In particular, grid cells155 in the medial entorhinal cortex provide place cells139 primarily in the hippocampus (but see Whitlock156 for the parahippocampus and parietal cortex) with the coordinate system required for location‐specific encoding.157 Similarly, research using cell recordings in humans has identified place cells158 and grid cells.159 Consistent with an allocentric representation system, neuroimaging studies have found correlations between hippocampal activity and Euclidean distances between presented locations (specifically in the left anterior region160), directional responses during an initial learning phase,161 mental162 or virtual163 navigation along a known route between landmarks, and triangle completion.164 In addition, differences in the volume of the posterior hippocampus have been related to navigation expertise165 and training166 in taxi drivers and inferential pointing judgments by college students.66 Lesion studies further confirm the role of the hippocampus for encoding locations in an allocentric reference frame.167, 168, 169 Several studies have dissociated this allocentric system in the medial temporal lobe from a response‐based system centered in the caudate nucleus in both animals170 and humans171, 172, 173 (for reviews see Refs 174, 175, 176). A response‐based system encodes procedural knowledge such as specific action sequences during navigation.170
Also supporting human spatial navigation is a scene‐processing network centered around the parahippocampal place area (PPA177) and retrosplenial cortex (RSC).178 Both the PPA and RSC respond preferentially to scenes when compared to objects or faces.177 In particular, the PPA is responsible for encoding the spatial structure of the local scene (for a review, see Ref 179). The PPA is particularly responsive to landmarks at decision points (i.e., those that are relevant for navigation64), and the parahippocampal gyrus more generally can discriminate between large and small objects.180 In contrast, the RSC is responsible for situating the local scene within the broader spatial environment (for a review, see Ref 179). In particular, the RSC integrates egocentric spatial information and may have a role in translating that information into an allocentric code.161 Such information includes the direction of travel at decision points64 and, more precisely, the observer's position and heading181 analogous to head direction cells in rats.182
Neural Correlates of Navigation by Blind People
Despite the advances in neuroscience research on navigation by sighted individuals, this work has been difficult to extend to blind individuals, especially for meso‐ and macroscale environments. Some researchers have opted to conduct the experimental task outside the MRI scanner and analyze the relationship between the structural image and task performance.183 Others have attempted to circumvent this challenge by employing tactile (e.g., finger mazes46 and shape recognition184, 185) or auditory tasks.186 However, tactile tasks sometimes require movement beyond typical button responses, which can cause artifacts in the functional imaging data without appropriate control conditions.46 Furthermore, auditory tasks have focused on spatial abilities that do not necessarily correspond to those used during navigation (e.g., sound localization without locomotion187). In addition, researchers have rarely allowed both blind and sighted participants to use their dominant modality in learning or testing within the same experiment (see discussion of the relationship between strategy, performance, and ability in the Introduction).
Studies in which blind participants used either auditory or tactile modalities have highlighted their brains’ capacity for both intramodal (e.g., changes in functional areas resulting from visual deprivation corresponding to the same modality) and cross‐modal (e.g., changes in functional areas resulting from visual deprivation corresponding to a different modality) plasticity,188, 189 although these two phenomena are not mutually exclusive.190 Indeed, the occipital cortex (including the associative areas) can be recruited during both auditory191, 192 and tactile51, 52, 193 tasks (for reviews, see Refs 194, 195). In addition, structural imaging studies have found reduced volume in the occipital cortex for blind participants relative to sighted participants196, 197 (for a review, see Ref 198). A deformation‐based morphometry study also found that, while the early occipital cortex is reduced for blind individuals compared to sighted individuals, the associative occipital cortex is relatively large.199 Although the role of the occipital cortex is different for blind and sighted individuals in general, less is known about structural and functional differences in other areas that support navigation.
Analogous to studies with sighted people, research into the neural correlates of navigation by blind people has investigated allocentric, response‐based, and scene‐processing networks (see Figure 3). With respect to the allocentric network centered in the hippocampus, volumetric studies found that blind people have smaller right posterior hippocampi200, 201 and larger right anterior hippocampi183, 201 relative to sighted people. Fortin and colleagues183 also found that the size of the right hippocampus was correlated with performance on a wayfinding task in a maze. The hippocampus has also been implicated in navigation through a tactile finger maze46 and a Euclidean distance task (comparable to Morgan160) using auditory cues.186 These studies may appear to provide evidence for allocentrism in blind individuals. However, it should be noted that two of these studies200, 201 did not associate hippocampal volume with a navigation task64 or expertise.165, 166 In addition, the correlation between hippocampal size and performance found by Fortin and colleagues183 is based on data aggregated over blind and sighted participants for a task that did not necessarily require an allocentric reference frame. Furthermore, the imaging data from Gagnon and colleagues46 contrasted maze navigation with rest instead of a control task that could have disentangled the effect of navigation from that of general task completion.
Interestingly, to date, no studies have found activation of or volumetric differences in the caudate related to a response‐based navigation system by blind people.202 Several studies have also implicated the scene‐processing network centered around the PPA and RSC for the haptic exploration of scenes (compared to objects184) and the discrimination of object sizes.185 These findings help to frame other investigations that have found activation in the parahippocampal gyrus for a tactile finger maze task46 and navigation through a virtual environment with the aid of an SSD.47

Previous research has also identified a variety of other brain regions (e.g., parietal and occipital cortices) that were active during navigation tasks by blind people (see Figure 3). Specifically, the right inferior parietal cortex was active during a Euclidean distance task using auditory cues,186 and the posterior parietal cortex was active during navigation through a virtual environment using an SSD.47 In addition, the occipital cortex was active during haptic47 (uncorrected46) and imagined203 navigation. Other areas may be related to navigation by blind people, but more research is needed before these can be tied to specific spatial knowledge constructs (precuneus and fusiform gyrus47; temporal parietal junction204; and superior temporal gyrus205). The recruitment of these areas by different modalities (e.g., auditory and tactile) may suggest the functional reorganization of cortical areas in order to support navigation.47 This is consistent with results from Wolbers and colleagues184 in which functional connectivity analysis indicated a lack of significant connections between the occipital cortex and posterior parahippocampus for the haptic discrimination task. Taken together, these results may support theories of amodal spatial representation (rather than visual recoding) in blind participants’ brains.
FUTURE CONSIDERATIONS
Technological Advances for Navigation Assistance
There are three different technological approaches to navigation assistance for blind people. First, noninvasive technologies including SSDs exploit the brain's natural ability to adapt in response to the environment.188, 206 Second, invasive technologies directly stimulate regions of the visual system with the aid of a prosthesis in order to bypass damaged areas.207, 208, 209, 210, 211, 212 In these two cases, the technologies attempt to bypass the missing sense in order to complement the user's existing sources of spatial information. Unlike SSDs, invasive technologies are not targeted at congenitally blind people; because these devices require a developed visual system and/or prior visual experience, they have proven effective mainly for adventitiously blind people.213 Indeed, studies using transcranial magnetic stimulation (TMS) in the visual cortex have demonstrated that elicited vision‐like sensations (i.e., phosphenes) increase with remaining visual acuity in adventitiously blind participants214 and may not occur for congenitally blind participants.198 Furthermore, Garcia and colleagues215 found that a retinal prosthesis did not provide sufficiently coherent or precise information for navigation. Third, general navigation aids can provide abstracted spatial information through one of the user's functioning senses (e.g., GPS devices). Unlike SSDs and invasive technologies, these general aids can potentially be used by people with or without vision. Despite the large number of conceptual papers on this topic,216, 217, 218, 219, 220, 221, 222, 223, 224, 225 we will focus our discussion on the few examples of SSDs that have been empirically tested in the context of navigation. For more comprehensive reviews of technological navigation aids and SSDs, see Refs 4, 5, and 226.
SSDs translate visual information into tactile information, auditory information, or both in a noninvasive manner.3, 5, 227, 228 These devices have been available for several decades, but they have not been widely adopted by the blind population6, 229, 230 (but see Ref 231). Recently, researchers have investigated the potential of these technologies for navigation by blind people.47, 129, 232, 233, 234 At the same time, evidence from neuroscience regarding the brain's capacity for functional reorganization47, 54, 184, 235 suggests flexibility with respect to the channels through which perceptual information can be processed. Indeed, congenitally blind participants using SSDs recruit visual areas to recognize sounds, shapes, and movement187, 236 (see Neural Correlates of Navigation by Blind People section). This is consistent with the view that the brain considers information in a task‐dependent, rather than a modality‐dependent, manner.206
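The general principle behind visual-to-auditory substitution can be illustrated with a minimal sketch. The mapping below is purely illustrative (it is not the encoding scheme of any specific device reviewed here): a small grayscale image is swept column by column, with row position mapped to pitch and pixel brightness mapped to loudness.

```python
# Minimal sketch of a visual-to-auditory substitution mapping
# (illustrative only; not the encoding of any particular SSD).
# A grayscale image is swept column by column: row index sets
# the tone frequency, pixel brightness sets its amplitude.

def image_to_soundscape(image, f_low=200.0, f_high=5000.0):
    """Convert a 2D grayscale image (rows x cols, values 0-1) into a
    list of columns, each a list of (frequency_hz, amplitude) pairs
    to be played together as a chord during the left-to-right sweep."""
    n_rows = len(image)
    soundscape = []
    for col in range(len(image[0])):
        chord = []
        for row in range(n_rows):
            # Rows nearer the top of the image map to higher pitches.
            frac = 1.0 - row / max(n_rows - 1, 1)
            freq = f_low * (f_high / f_low) ** frac  # log-spaced pitch
            amp = image[row][col]                    # brightness -> loudness
            if amp > 0:
                chord.append((freq, amp))
        soundscape.append(chord)
    return soundscape

# A single bright pixel in the top-left corner becomes one loud,
# high-pitched tone at the start of the sweep.
img = [[1.0, 0.0],
       [0.0, 0.0]]
scape = image_to_soundscape(img)
```

Even this toy mapping makes the cognitive demand clear: the user must learn to decode a time-varying auditory stream back into spatial structure, which is why extensive training typically precedes successful use of such devices.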
Since 2000, many researchers have investigated the use of SSDs for locomotion and wayfinding assistance in indoor and outdoor, real and virtual, environments using auditory and tactile feedback.233, 234, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266 While many of these SSDs are technically impressive, their evaluation too often relies on qualitative data,237, 238, 242, 243, 244, 245, 247, 251, 252 a very small number of blind participants (i.e., five or fewer),239, 241, 248, 250, 260, 262, 263, 267 or only blindfolded sighted participants.233, 240, 246, 249, 261, 264, 265 However, with an adequate number of blind participants, some researchers have demonstrated that blind people are capable of using some devices for navigation.234, 253, 254, 256, 257, 258, 268, 269
Some of these SSDs have been tested in the context of locomotion with varying success. For example, the Tongue Display Unit (TDU) transforms visual information into electrotactile stimulation that conveys the position of obstacles in the surrounding environment. Blind participants have been shown to use the device effectively to navigate a virtual maze47 and even outperform blindfolded sighted participants in a high‐contrast, life‐size obstacle course.200 Similarly, Dunai and colleagues268 showed that training with their Acoustic Prototype allowed blind participants to perceive and avoid objects in indoor and outdoor obstacle courses. In addition, the EyeCane is a device for transforming distance information into both sounds and vibrations.270 The EyeCane has been successfully used in a virtual obstacle avoidance task270 and in a real life‐size Hebb‐Williams maze.129 Despite the recent achievements of both TDUs and the EyeCane, participants using these devices had some difficulties avoiding obstacles near the ground.129, 271
SSDs have also been developed for wayfinding assistance. For example, Marston and colleagues253 successfully directed blind participants along real world paths using different auditory displays. Similarly, Kalia and colleagues254 conveyed distance information to blind and blindfolded sighted participants with the aid of a digital map and synthetic speech. In addition, some researchers have shown that auditory and tactile virtual reality training can facilitate wayfinding in a corresponding real environment.258, 269 Together these studies suggest that SSDs can benefit blind people during navigation. However, Loomis and colleagues6 have argued that the success of SSDs is dependent on the extent to which the information they provide is relatively simple and task‐specific. Before SSDs can be widely applied, we need to gain a better understanding of the navigation abilities of the blind population.
Heterogeneity in the Blind Population
Any intervention intended to support navigation by blind people requires an understanding of their needs as a group and as individuals. Heterogeneity in the blind population has created many difficulties for researchers13, 272 and cross‐study comparisons.189 These difficulties stem from at least five methodological factors, including sample size, type of impairment, age of onset, level of education and intelligence, and level of orientation and mobility (for a review, see Ref 75). Many studies have reported findings regarding the navigation performance of blind people using small sample sizes (e.g., eight,109 seven,11 six,80 two,92 and one84), which makes it difficult to generalize from one study to another. Type of impairment and age of onset can also lead to challenges with the assignment of participants to experimental groups. Moore and colleagues273 estimate that 75% of those considered blind have some remaining vision. For example, people with only light perception are often classified as totally blind.274 Past studies have found performance differences related to both type of impairment (e.g., lower performance for retrolental fibroplasia275) and age of onset.110 In addition, level of education and intelligence has been found to correlate with performance on a map task275 and sometimes creates potential confounds.39 Finally, assessments of level of orientation and mobility at the time of testing may be an important factor to consider when studying the acquisition of spatial knowledge.83
Navigation Strategies
The study of navigation strategies provides an innovative way of addressing challenges associated with the heterogeneous nature of the blind population. Previous research has required blind and sighted participants to adopt similar strategies in order to complete a particular task. However, this approach has led to difficulties in extracting consistent patterns in spatial knowledge acquisition during navigation. While the number of studies on navigation strategies in blind people is limited,8, 16, 276 they suggest that performances on spatial tasks are strongly associated with strategy choice for both blind and sighted individuals. Such investigations were also limited in the past given methodological challenges such as collecting path information and the manual classification of strategies (but see Ref 35).
Originally, search strategies were classified by Hill and Ponder277 as part of orientation and mobility training. The perimeter (e.g., walking along the walls of a room) and gridline (e.g., walking parallel transects from one wall to the opposite wall) strategies allowed visually impaired people to systematically explore novel environments. After initial learning, Tellevik278 found that blindfolded sighted participants (all mobility instructors) tended to shift from the perimeter or gridline strategies to a reference point strategy (i.e., walking from a known location to each target object and back). Hill and colleagues276 further characterized three reference point strategies as object‐to‐object, object‐to‐wall, or object‐to‐start. They found that participants who chose one of these three strategies also provided the most accurate direction estimates irrespective of level of visual impairment.276 However, a subsequent study showed that blind participants used these reference point strategies less often than blindfolded sighted controls and instead opted for a cyclic search pattern (i.e., walking to each object sequentially8). In this case, the cyclic strategy was associated with worse performance than the reference point strategy.8, 279 Schinazi16 replicated the relationship between the reference point strategy and better performance on a variety of spatial tasks. However, when visual information was available, the cyclic strategy was also related to better performance.
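The perimeter and gridline strategies described above are systematic enough to be expressed as simple waypoint generators. The sketch below is illustrative only (it assumes a hypothetical rectangular room on an integer grid and is not drawn from any of the reviewed studies), but it makes the structural difference between the two exploration patterns concrete.

```python
# Illustrative waypoint generators for two systematic search strategies
# taught in orientation and mobility training (hypothetical rectangular
# room on an integer grid; not from the reviewed studies).

def perimeter_strategy(width, height):
    """Walk along the walls of a width x height room, corner to corner,
    returning to the starting corner."""
    return [(0, 0), (width, 0), (width, height), (0, height), (0, 0)]

def gridline_strategy(width, height, spacing=1):
    """Walk parallel transects from one wall to the opposite wall,
    reversing direction on each pass (a boustrophedon sweep)."""
    waypoints = []
    left_to_right = True
    for y in range(0, height + 1, spacing):
        row = [(0, y), (width, y)]
        waypoints.extend(row if left_to_right else row[::-1])
        left_to_right = not left_to_right
    return waypoints
```

Whereas the perimeter path only samples the boundary of the space, the gridline sweep covers the interior, which is why the two strategies can support different amounts of object-location learning.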
CONCLUSION
The systematic testing of scientific models sometimes requires the adaptation of existing frameworks in order to incorporate findings from disparate fields. In this review, we have attempted to bridge the spatial cognition and visual impairment literatures, including recent advances in neuroscience and technology, in order to gain a better understanding of the navigation abilities of blind people. Towards this end, we proposed that future research should allow blind and sighted individuals to adopt different strategies that do not artificially limit their potential. We also proposed three models of spatial knowledge acquisition by blind and sighted people and attempted to characterize previous research in these terms. This procedure highlighted the importance of measuring spatial learning over time in order to assess learning potential in the absence of vision. This longitudinal approach also allows for the investigation of hypotheses regarding amodal spatial representation. Along with existing studies on the functional equivalence of different perceptual modalities, we proposed a line of research on the Bayesian integration of spatial cues from multiple modalities used by blind people. This is complemented by a review of the neural correlates of navigation by blind people within the context of functional reorganization. The topic of functional reorganization was also considered in light of new developments for SSDs that were specifically designed to aid navigation by blind individuals. Future considerations also included methodological issues resulting from heterogeneity in the blind population and the ways in which they may be addressed by research on navigation strategies.
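The Bayesian integration of spatial cues mentioned above has a standard formal core: with independent Gaussian cues, the maximum-likelihood combined estimate is an average weighted by each cue's reliability (inverse variance), and the combined estimate is more precise than either cue alone. The sketch below uses illustrative numbers (a hypothetical auditory and proprioceptive distance estimate, not data from the reviewed studies).

```python
# Minimal sketch of Bayesian (maximum-likelihood) integration of two
# spatial cues, e.g., an auditory and a proprioceptive estimate of a
# target's distance. Numbers are illustrative, not from the reviewed work.

def integrate_cues(mean_a, var_a, mean_b, var_b):
    """Inverse-variance weighted combination of two Gaussian cues.
    Returns the combined mean and combined variance; the combined
    variance is always smaller than either input variance."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_b)
    w_b = 1 - w_a
    combined_mean = w_a * mean_a + w_b * mean_b
    combined_var = 1 / (1 / var_a + 1 / var_b)
    return combined_mean, combined_var

# Auditory cue: 10 m with variance 4; proprioceptive cue: 12 m with
# variance 1. The combined estimate is pulled toward the more reliable cue.
mean, var = integrate_cues(10.0, 4.0, 12.0, 1.0)
```

Testing whether blind individuals weight nonvisual cues in this reliability-proportional way would directly address the proposed line of research on multimodal integration.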
References
- 1.
GolledgeGR . Geography and the disabled: a survey with special reference to vision impaired and blind populations. Trans Inst Br Geogr1993, 18:63–85.[Google Scholar] - 2.
ImrieR ,HallP . Inclusive Design: Designing and Developing Accessible Environments. London: Spon Press; 2001. - 3.
MaidenbaumS ,AbboudS ,AmediA . Sensory substitution: closing the gap between basic research and widespread practical visual rehabilitation. Neurosci Biobehav Rev2014, 41:3–15.[PubMed][Google Scholar] - 4.
GiudiceNA ,LeggeGE .Blind navigation and the role of technology In: , , , eds. The Engineering Handbook of Smart Technology for Aging, Disability, and Independence. Hoboken, NJ: John Wiley & Sons; 2008, 479–500. - 5. , , eds. Assistive Technology for Visually Impaired and Blind People. London: Springer; 2008.
- 6.
LoomisJM ,KlatzkyRL ,GiudiceNA .Sensory substitution of vision: importance of perceptual and cognitive processing In: , , eds. Assistive Technology for Blindness and Low Vision. Boca Raton, FL: CRC Press; 2012, 162–191. - 7.
SiegelAW ,WhiteSH . The development of spatial representations of large‐scale environments. Adv Child Dev Behav1975, 10:9–55.[PubMed][Google Scholar] - 8.
GaunetF ,Thinus‐BlancC . Early‐blind subjects’ spatial abilities in the locomotor space: exploratory strategies and reaction‐to‐change performance. Perception1996, 25:967–981.[PubMed][Google Scholar] - 9.
HölscherC ,MeilingerT ,VrachliotisG ,BrösamleM ,KnauffM . Up the down staircase: wayfinding strategies in multi‐level buildings. J Environ Psychol2006, 26:284–299.[Google Scholar] - 10.
MillarS . Models of sensory deprivation: the nature/nurture dichotomy and spatial representation in the blind. Int J Behav Dev1988, 11:69–87.[Google Scholar] - 11.
HaberL ,HaberRN ,PennigrothS ,NovakK ,RadgowskiH . Comparison of nine methods of indicating the direction of objects: data from blind subjects. Perception1993, 22:35–47.[PubMed][Google Scholar] - 12.
WarrenDH . Childhood visual impairment: perspectives on research design and methodology. J Vis Impair Blind1978, 72:404–411.[Google Scholar] - 13.
WarrenDH . Blindness and Early Childhood Development. 2nd ed.New York: American Foundation for the Blind; 1984. - 14.
ChessS ,GordonSG . Psychosocial development and human variance. Rev Res Educ1984, 11:3–62.[Google Scholar] - 15.
HolbrookC ,MamerL ,MacCuspieA ,McConnellD . The Impact of Vision Loss on the Development of Children from Birth to 12 Years: A Literature Review. Toronto, Canada: The Canadian National Institute of the Blind; 2000. - 16.
SchinaziVR . Representing space: the development, content and accuracy of mental representations by the blind and visually impaired. PhD Thesis, University College London, 2008. - 17.
FletcherFJ . Spatial representation in blind children. 1: development compared to sighted children. J Vis Impair Blind1980, 74:381–385.[Google Scholar] - 18.
LoomisJM ,KlatzkyRL ,GiudiceNA .Representing 3D space in working memory: spatial images from vision, touch, hearing, and language In: , , eds. Multisensory Imagery: Theory & Applications. New York: Springer; 2013, 131–155. - 19.
MontelloDR .Navigation In: , , eds. The Cambridge Handbook of Visuospatial Thinking. New York: Cambridge University Press; 2005. - 20.
LongRG ,GiudiceNA .Establishing and maintaining orientation for mobility In: , , , eds. Foundations of Orientation and Mobility, Volume 1 (History and Theory). New York: AFB Press; 2010, 45–62. - 21.
WangRF ,SpelkeES . Updating egocentric representations in human navigation. Cognition2000, 77:215–250.[PubMed][Google Scholar] - 22.
HeftH .The ecological approach to navigation: a Gibsonian perspective In: , ed. The Construction of Cognitive Maps. Dordrecht: Kluwer Academic Publishers; 1996. - 23.
WangRF ,BrockmoleJR . Simultaneous spatial updating in nested environments. Psychon Bull Rev2003, 10:981–986.[PubMed][Google Scholar] - 24.
WallerD ,HodgsonE . Transient and enduring spatial representations under disorientation and self‐rotation. J Exp Psychol Learn Mem Cogn2006, 32:867–882.[PubMed][Google Scholar] - 25.
SheltonAL ,McNamaraTP . Systems of spatial reference in human memory. Cogn Psychol2001, 43:274–310.[PubMed][Google Scholar] - 26.
ZaehleT ,JordanK ,WustenbergT ,BaudewigJ ,DechentP ,MastFW . The neural basis of the egocentric and allocentric spatial frame of reference. Brain Res2007, 1137:92–103.[PubMed][Google Scholar] - 27.
MouW ,McNamaraTP . Intrinsic frames of reference in spatial memory. J Exp Psychol Learn Mem Cogn2002, 15:887–897.[Google Scholar] - 28.
LevinsonSC .Frames of reference and Molyneux's question: cross‐linguistic evidence In: , , , , eds. Language and Space. Cambridge, MA: MIT Press; 1996, 109–169. - 29.
KlatzkyRL .Allocentric and egocentric spatial representations: definitions, distinctions, and interconnections In: , , , , eds. An Interdisciplinary Approach to Representation and Processing of Spatial Knowledge. Berlin: Springer‐Verlag; 1998, 1–17. - 30.
PasqualottoA ,SpillerMJ ,JansariAS ,ProulxMJ . Visual experience facilitates allocentric spatial representation. Behav Brain Res2013, 236:175–179.[PubMed][Google Scholar] - 31.
IachiniT ,RuggieroG ,RuotoloF . Does blindness affect egocentric and allocentric frames of reference in small and large scale spaces?Behav Brain Res2014, 273:73–81.[PubMed][Google Scholar] - 32.
MontelloRD ,GolledgeGR . Scale and Detail in the Cognition of Geographinc Information: Report of Spacialist Meeting of Project Varenius. Santa Barbara, CA: National Centre for Geographic Information and Analysis; 1999. - 33.
WolbersT ,WienerJM . Challenges for identifying the neural mechanisms that support spatial navigation: the impact of spatial scale. Front Hum Neurosci2014, 8:571.[PubMed][Google Scholar] - 34.
WolbersT ,HegartyM . What determines our navigational abilities?Trends Cogn Sci2010, 14:138–146.[PubMed][Google Scholar] - 35.
PingelTJ ,SchinaziVR . The relationship between scale and strategy in search‐based way finding. Cartogr Perspect2014, 77:33–45.[Google Scholar] - 36.
EspinosaMA ,OchaítaE . Using tactile maps to improve the practical spatial knowledge of adults who are blind. J Vis Impair Blind1998, 92:338–345.[Google Scholar] - 37.
HermelinB ,O'ConnorN . Location and distance estimates by blind and sighted children. Q J Exp Psychol1975, 27:295–301.[PubMed][Google Scholar] - 38.
LedermanSJ ,KlatzkyRL ,BarberPO . Spatial and movement‐based heuristics for encoding pattern information through touch. J Exp Psychol Gen1985, 114:33–49.[PubMed][Google Scholar] - 39.
HermanJE ,HermanTG ,ChatmanSP . Constructing cognitive maps from partial information: a demonstration study with congenitally blind subjects. J Vis Impair Blind1983, 77:195–198.[Google Scholar] - 40.
UngarS . Can the visually impaired children use tactile maps to estimate directions?J Vis Impair Blind1994, 88:221–228.[Google Scholar] - 41.
EdwardsR ,UngarS ,BladesM . Route descriptions by visually impaired and sighted children from memory and from maps. J Vis Impair Blind1998, 92:512–521.[Google Scholar] - 42.
UngarS . The ability of visually impaired children to locate themselves on a tactile map. J Vis Impair Blind1996, 90:526–531.[Google Scholar] - 43.
UngarS ,BladesM . Teaching visually impaired children to make distance judgements from a tactile map. J Vis Impair Blind1997, 91:163–177.[Google Scholar] - 44.
CaseySM . Cognitive mapping by the blind. J Vis Impair Blind1978, 72:297–301.[Google Scholar] - 45.
JacobsonRD . Cognitive mapping without sight: four preliminary studies of spatial learning. J Environ Psychol1998, 18:289–305.[Google Scholar] - 46.
GagnonL ,SchneiderFC ,SiebnerHR ,PaulsonOB ,KupersR ,PtitoM . Activation of the hippocampal complex during tactile maze solving in congenitally blind subjects. Neuropsychologia2012, 50:1663–1671.[PubMed][Google Scholar] - 47.
KupersR ,ChebatDR ,MadsenKH ,PaulsonOB ,PtitoM . Neural correlates of virtual route recognition in congenital blindness. Proc Natl Acad Sci USA2010, 107:12716–12721.[PubMed][Google Scholar] - 48.
TaubeJS ,ValerioS ,YoderRM . Is navigation in virtual reality with FMRI really navigation?J Cogn Neurosci2013, 25:1008–1019.[PubMed][Google Scholar] - 49.
MeilingerT ,RieckeBE ,BülthoffHH . Local and global reference frames for environmental spaces. Q J Exp Psychol (Hove)2014, 67:542–569.[PubMed][Google Scholar] - 50.
GreenauerN ,WallerD . Micro‐ and macroreference frames: specifying the relations between spatial categories in memory. J Exp Psychol Learn Mem Cogn2010, 36:938–957.[PubMed][Google Scholar] - 51.
CohenLG ,CelnikP ,Pascual‐LeoneA ,CorwellB ,FalzL ,DambrosiaJ ,HondaM ,SadatoN ,GerloffC ,CataláMD , et al. Functional relevance of cross‐modal plasticity in blind humans. Nature1997, 389:180–183.[PubMed][Google Scholar] - 52.
SadatoN ,Pascual‐LeoneA ,GrafmanJ ,DeiberMP ,IbañezV ,HallettM . Neural networks for Braille reading by the blind. Brain1998, 121:1213–1229.[PubMed][Google Scholar] - 53.
BurtonH ,McLarenDG ,SinclairRJ . Reading embossed capital letters: an fMRI study in blind and sighted individuals. Hum Brain Mapp2006, 27:325–339.[PubMed][Google Scholar] - 54.
PtitoM ,KupersR . Cross‐modal plasticity in early blindness. J Integr Neurosci2005, 4:479–488.[PubMed][Google Scholar] - 55.
MontelloRD .A new framework for understanding the acquisition of spatial knowledge in large‐scale environments In: , , eds. Spatial and Temporal Reasoning in Geographic Information System. New York: Oxford University Press; 1998, 143–154. - 56.
ChrastilER . Neural evidence supports a novel framework for spatial navigation. Psychon Bull Rev2013, 20:208–227.[PubMed][Google Scholar] - 57.
DodgeM ,KitchinR . Mapping Cyberspace. New York: Routledge; 2001. - 58.
ShettleworthSJ . Cognition, Evolution, and Behavior. New York: Oxford University Press; 2010. - 59.
GolledgeGR . Representing, interpreting and using cognized environments. Pap Proc Reg Sci Assoc1978, 41:169–204.[Google Scholar] - 60.
ShemyakinF .Orientation in space In: , ed. Psychological Science in the USSR. Washington, DC: US Office of Technical Reports; 1962, 186–255. - 61.
PiagetJ ,InhelderB . The Child's Conception of Space. London: Routledge and Kegan Paul; 1956. - 62.
62. Blades M. Wayfinding theory and research: the need for a new approach. In: Cognitive and Linguistic Aspects of Geographic Science. Dordrecht: Kluwer Academic Publishers; 1991, 137–165.
63. Foley JE, Cohen AJ. Working mental representations of the environment. Environ Behav 1984, 16:713–729.
64. Schinazi VR, Epstein RA. Neural correlates of real‐world route learning. Neuroimage 2010, 53:725–735.
65. Ishikawa T, Montello DR. Spatial knowledge acquisition from direct experience in the environment: individual differences in the development of metric knowledge and the integration of separately learned places. Cogn Psychol 2006, 52:93–129.
66. Schinazi VR, Nardi D, Newcombe NS, Shipley TF, Epstein RA. Hippocampal size predicts rapid learning of a cognitive map in humans. Hippocampus 2013, 23:515–528.
67. Cheng K, Shettleworth SJ, Huttenlocher J, Rieser JJ. Bayesian integration of spatial information. Psychol Bull 2007, 133:625–637.
68. Newcombe N, Huttenlocher J. Development of spatial cognition. In: Handbook of Child Psychology. Hoboken, NJ: John Wiley & Sons; 2006, 734–776.
69. Millar S. Understanding and Representing Space: Theory and Evidence from Studies with Blind and Sighted Children. Oxford: Oxford University Press; 1994.
70. Millar S. Modality and mind: convergent active processing in interrelated networks as a model of development and perception. In: Touch, Representation and Blindness. Oxford and New York: Oxford University Press; 2000, 99–141.
71. Tinti C, Adenzato M, Tamietto M, Cornoldi C. Visual experience is not necessary for efficient survey spatial cognition: evidence from blindness. Q J Exp Psychol (Hove) 2006, 59:1306–1328.
72. Chen E, Matthews KA, Boyce WT. Socioeconomic differences in children's health: how and why do these relationships change with age? Psychol Bull 2002, 128:295–329.
73. Klatzky RL, Golledge RG, Loomis JM, Cicinelli JG, Pellegrino JW. Performance of blind and sighted persons on spatial tasks. J Vis Impair Blind 1995, 89:70–82.
74. Blanco F, Travieso D. Haptic exploration and mental estimation of distances in a fictitious island: from mind's eye to mind's hand. J Vis Impair Blind 2003, 97:298–300.
75. Thinus‐Blanc C, Gaunet F. Representation of space in blind persons: vision as a spatial sense? Psychol Bull 1997, 121:20–42.
76. Kitchin RM, Jacobson RD. Techniques to collect and analyze the cognitive map knowledge of persons with visual impairment or blindness: issues of validity. J Vis Impair Blind 1997, 91:360–376.
77. Juurmaa J. Transposition in mental spatial manipulation: a theoretical analysis. Am Found Blind Res Bull 1973, 26:87–134.
78. Postma A, Zuidhoek S, Noordzij ML, Kappers AML. Differences between early‐blind, late‐blind, and blindfolded‐sighted people in haptic spatial‐configuration learning and resulting memory traces. Perception 2007, 36:1253–1265.
79. Carreiras M, Codina B. Spatial cognition of the blind and sighted: visual and amodal hypothesis. Cah Psychol Cogn 1992, 12:51–78.
80. Hollins M, Kelley EK. Spatial updating in blind and sighted people. Percept Psychophys 1988, 43:380–388.
81. Klatzky RL, Loomis JM, Golledge RG. Encoding spatial representations through nonvisually guided locomotion: tests of human path integration. In: The Psychology of Learning and Motivation. San Diego, CA: Academic Press; 1997, 41–84.
82. Klatzky RL, Loomis JM, Golledge RG, Fujita N, Pellegrino JW. Navigation without vision by blind and sighted. Bull Psychon Soc 1990, 28:484.
83. Loomis JM, Klatzky RL, Golledge RG, Cicinelli JG, Pellegrino JW, Fry PA. Nonvisual navigation by blind and sighted: assessment of path integration ability. J Exp Psychol Gen 1993, 122:73–91.
84. Landau B, Gleitman H, Spelke E. Spatial knowledge and geometric representation in a child blind from birth. Science 1981, 213:1275–1278.
85. Landau B, Spelke E, Gleitman H. Spatial knowledge in a young blind child. Cognition 1984, 16:225–260.
86. Liben L. Conceptual issues in the development of spatial cognition. In: Spatial Cognition: Brain Bases and Development. Hillsdale, NJ: Erlbaum; 1988, 167–194.
87. Corazzini LL, Tinti C, Schmidt S, Mirandola C, Cornoldi C. Developing spatial knowledge in the absence of vision: allocentric and egocentric representations generated by blind people when supported by auditory cues. Psychol Belg 2010, 50:327.
88. Passini R, Proulx G. Wayfinding without vision: an experiment with congenitally, totally blind people. Environ Behav 1988, 20:227–252.
89. Golledge RG, Kitchin R, Blades M, Jacobson RD. Cognitive maps, spatial abilities, and human wayfinding. Paper presented at the Tokyo Metropolitan University Colloquium, Tokyo, Japan, 2000.
90. von Senden M. Space and Sight: The Perception of Space and Shape in the Congenitally Blind before and after Operation. Glencoe, IL: Free Press; 1960.
91. Cleaves WT, Royal RW. Spatial memory for configurations by congenitally blind, late blind and sighted adults. J Vis Impair Blind 1979, 73:13–19.
92. Bigelow EA. Spatial mapping of familiar locations in blind children. J Vis Impair Blind 1991, 85:113–117.
93. Worchel P. Space perception and orientation in the blind. Psychol Monogr 1951, 65:1–28.
94. Brambring M. The structure of haptic space in the blind and sighted. Psychol Res 1976, 38:283–302.
95. Kerr NH. The role of vision in "visual imagery" experiments: evidence from the congenitally blind. J Exp Psychol Gen 1983, 112:265–277.
96. Heller MA, Kennedy J. Perspective taking, pictures and the blind. Percept Psychophys 1990, 45:459–466.
97. Dodds AG. Mental rotation and visual imagery. J Vis Impair Blind 1983, 77:16–18.
98. Carpenter PA, Eisenberg P. Mental rotation and the frame of reference in blind and sighted individuals. Percept Psychophys 1978, 23:117–124.
99. Marmor GS, Zaback LA. Mental rotation by the blind: does mental rotation depend on visual imagery? J Exp Psychol Hum Percept Perform 1976, 2:515–521.
100. Coluccia E, Mammarella IC, Cornoldi C. Centred egocentric, decentred egocentric, and allocentric spatial representations in the peripersonal space of congenital total blindness. Perception 2009, 38:679–693.
101. Pasqualotto A, Newell FN. The role of visual experience on the representation and updating of novel haptic scenes. Brain Cogn 2007, 65:184–194.
102. Juurmaa J, Lehtinen‐Railo S. Visual experience and access to spatial knowledge. J Vis Impair Blind 1994, 88:157–170.
103. Rieser JJ, Guth DA, Hill EW. Sensitivity to perspective structure while walking without vision. Perception 1986, 15:173–188.
104. Rieser JJ, Lockman JJ, Pick LH. The role of visual experience in knowledge of spatial layout. Percept Psychophys 1980, 28:185–190.
105. Veraart C, Wanet‐Defalque MC. Representation of locomotor space by the blind. Percept Psychophys 1987, 42:132–139.
106. Dodds AG, Howarth CI, Carter DC. The mental maps of the blind: the role of previous visual experience. J Vis Impair Blind 1982, 76:5–12.
107. Passini R, Proulx G, Rainville C. The spatio‐cognitive abilities of the visually impaired population. Environ Behav 1990, 22:91–116.
108. Herman JF, Chatman SP, Roth S. Cognitive mapping in blind people: acquisition of spatial relationships in a large‐scale environment. J Vis Impair Blind 1983, 77:161–166.
109. Byrne RW, Salter E. Distances and directions in the cognitive maps of the blind. Can J Psychol 1983, 37:293–299.
110. Rieser JJ, Hill EW, Talor CR, Bradfield A, Rosen S. Visual experience, visual field size, and the development of nonvisual sensitivity to the spatial structure of outdoor neighborhoods explored by walking. J Exp Psychol Gen 1992, 121:210–221.
111. Hollyfield RM, Foulke E. The spatial cognition of blind pedestrians. J Vis Impair Blind 1983, 77:204–210.
112. Rosa A, Ochaíta AE. Psicología de la Ceguera. Madrid: Alianza; 1993.
113. Hatwell Y. Privation Sensorielle et Intelligence: Effets de la Cécité Précoce sur la Genèse des Structures Logiques de l'Intelligence. Paris: Presses Universitaires de France; 1966.
114. Simpkins KE, Siegel AJ. The blind child's construction of the projective straight line. J Vis Impair Blind 1979, 73:233–239.
115. Simpkins KE. Development of the concept of space. J Vis Impair Blind 1979, 73:81–85.
116. Adelson E, Fraiberg S. Gross motor development in infants blind from birth. Child Dev 1974, 45:114–126.
117. Brambring M. Divergent development of gross motor skills in children who are blind and sighted. J Vis Impair Blind 2006, 100:620–634.
118. Hatton DD, Bailey DB, Burchinal MR, Ferrell KA. Developmental growth curves of preschool children with vision impairments. Child Dev 1997, 68:788–806.
119. Foulke E, Hatlen P. A collaboration of two technologies. Part 2: perceptual and cognitive training: its nature and importance. Br J Vis Impair 1992, 10:47–49.
120. Reynell J. Developmental patterns of visually handicapped children. Child Care Health Dev 1978, 4:291–303.
121. Stephens B, Grube C. Development of Piagetian reasoning in congenitally blind children. J Vis Impair Blind 1982, 76:133–143.
122. Lacey S, Stilla R, Sreenivasan K, Deshpande G, Sathian K. Spatial imagery in haptic shape perception. Neuropsychologia 2014, 60:144–158.
123. Tcheang L, Bülthoff HH, Burgess N. Visual influence on path integration in darkness indicates a multimodal representation of large‐scale space. Proc Natl Acad Sci USA 2011, 108:1152–1157.
124. Lacey S, Sathian K. Representation of object form in vision and touch. In: The Neural Bases of Multisensory Processes. Boca Raton, FL: CRC; 2012.
125. Cattaneo Z, Vecchi T. Supramodality effects in visual and haptic spatial processes. J Exp Psychol Learn Mem Cogn 2008, 34:631–642.
126. Bregman AS, Pinker S. Auditory streaming and the building of timbre. Can J Psychol 1978, 32:19–31.
127. Ruggiero G, Ruotolo F, Iachini T. Egocentric/allocentric and coordinate/categorical haptic encoding in blind people. Cogn Process 2012, 13(Suppl 1):S313–S317.
128. Duchowski A. Eye Tracking Methodology: Theory and Practice. London: Springer Science & Business Media; 2007.
129. Chebat D‐R, Maidenbaum S, Amedi A. Navigation using sensory substitution in real and virtual mazes. PLoS ONE 2015, 10:e0126307.
130. Loomis JM, Klatzky RL. Functional equivalence of spatial representations from vision, touch, and hearing: relevance for sensory substitution. In: Blindness and Brain Plasticity in Navigation and Object Perception. New York: Lawrence Erlbaum Associates; 2007, 155–184.
131. Loomis J, Lippa Y, Klatzky R, Golledge R. Spatial updating of locations specified by 3‐D sound and spatial language. J Exp Psychol Learn Mem Cogn 2002, 28:335–345.
132. Klatzky RL, Lippa Y, Loomis JM, Golledge RG. Encoding, learning, and spatial updating of multiple object locations specified by 3‐D sound, spatial language, and vision. Exp Brain Res 2003, 149:48–61.
133. Avraamides MN, Loomis JM, Klatzky RL, Golledge RG. Functional equivalence of spatial representations derived from vision and language: evidence from allocentric judgments. J Exp Psychol Learn Mem Cogn 2004, 30:801–814.
134. Giudice NA, Betty MR, Loomis JM. Functional equivalence of spatial images from touch and vision: evidence from spatial updating in blind and sighted individuals. J Exp Psychol Learn Mem Cogn 2011, 37:621–634.
135. Loomis JM, Klatzky RL, McHugh B, Giudice NA. Spatial working memory for locations specified by vision and audition: testing the amodality hypothesis. Atten Percept Psychophys 2012, 74:1260–1267.
136. Anderson JR. Arguments concerning representations for mental imagery. Psychol Rev 1978, 85:249–277.
137. Tolman EC. Cognitive maps in rats and men. Psychol Rev 1948, 55:189–208.
138. Downs RM. Maps and mapping as metaphors for spatial representations. In: Spatial Representation and Behavior across the Life Span. New York: Academic Press; 1981, 143–166.
139. O'Keefe J, Dostrovsky J. The hippocampus as a spatial map. Preliminary evidence from unit activity in the freely‐moving rat. Brain Res 1971, 34:171–175.
140. O'Keefe J, Nadel L. The Hippocampus as a Cognitive Map. Oxford: Clarendon Press; 1978.
141. O'Keefe J, Conway DH. Hippocampal place units in the freely moving rat: why they fire where they fire. Exp Brain Res 1978, 31:573–590.
142. Downs R, Stea D. Image and Environment: Cognitive Mapping and Spatial Behaviour. Chicago, IL: Aldine; 1973.
143. Tversky B. Cognitive maps, cognitive collages, and spatial mental models. In: Spatial Information Theory: A Theoretical Basis for GIS. Berlin: Springer‐Verlag; 1993, 14–24.
144. Kitchin RM. Cognitive maps: what are they and why study them? J Environ Psychol 1994, 14:1–19.
145. Peuquet DJ. Representations of Space and Time. New York: Guilford Press; 2002.
146. Montello DR. The geometry of environmental knowledge. In: Theories and Methods of Spatial Reasoning in Geographic Space. Berlin: Springer‐Verlag; 1992, 136–152.
147. Kuipers BJ. The map in the head metaphor. Environ Behav 1982, 14:202–220.
148. Portugali J. The Construction of Cognitive Maps. Dordrecht: Kluwer Academic Publishers; 1996.
149. Gärling T, Böök A, Lindberg E, Arce C. Evidence of a response bias explanation of non‐Euclidean cognitive maps. Prof Geogr 1991, 42:143–149.
150. Golledge RG, Hubert LJ. Some comments on non‐Euclidean mental maps. Environ Plan A 1982, 14:107–118.
151. Thorndyke P, Hayes‐Roth B. Differences in spatial knowledge acquired from maps and navigation. Cogn Psychol 1982, 14:560–581.
152. Sholl JM. Cognitive maps as orienting schemata. J Exp Psychol Learn Mem Cogn 1987, 13:615–628.
153. Lloyd R. Cognitive maps: encoding and decoding information. Ann Assoc Am Geogr 1989, 79:101–124.
154. Golledge RG. Human wayfinding and cognitive maps. In: Wayfinding Behavior: Cognitive Mapping and Other Spatial Processes. Baltimore, MD: Johns Hopkins University Press; 1999.
155. Hafting T, Fyhn M, Molden S, Moser MB, Moser EI. Microstructure of a spatial map in the entorhinal cortex. Nature 2005, 436:801–806.
156. Whitlock JR, Sutherland RJ, Witter MP, Moser M‐B, Moser EI. Navigating from hippocampus to parietal cortex. Proc Natl Acad Sci USA 2008, 105:14755–14762.
157. Epstein RA, Vass LK. Neural systems for landmark‐based wayfinding in humans. Philos Trans R Soc Lond B Biol Sci 2014, 369:20120533.
158. Ekstrom AD, Kahana MJ, Caplan JB, Fields TA, Isham EA, Newman EL, Fried I. Cellular networks underlying human spatial navigation. Nature 2003, 425:184–188.
159. Jacobs J, Weidemann CT, Miller JF, Solway A, Burke JF, Wei X‐X, Suthana N, Sperling MR, Sharan AD, Fried I, et al. Direct recordings of grid‐like neuronal activity in human spatial navigation. Nat Neurosci 2013, 16:1188–1190.
160. Morgan LK, MacEvoy SP, Aguirre GK, Epstein RA. Distances between real‐world locations are represented in the human hippocampus. J Neurosci 2011, 31:1238–1245.
161. Wolbers T, Buchel C. Dissociable retrosplenial and hippocampal contributions to successful formation of survey representations. J Neurosci 2005, 25:3333–3340.
162. Ghaëm O, Mellet E, Crivello F, Tzourio N, Mazoyer B, Berthoz A, Denis M. Mental navigation along memorized routes activates the hippocampus, precuneus, and insula. Neuroreport 1997, 8:739–744.
163. Maguire EA, Frith CD, Burgess N, Donnett JG, O'Keefe J. Knowing where things are: parahippocampal involvement in encoding object locations in virtual mental imagery. J Cogn Neurosci 1998, 10:61–76.
164. Wolbers T, Wiener JM, Mallot HA, Buchel C. Differential recruitment of the hippocampus, medial prefrontal cortex, and the human motion complex during path integration in humans. J Neurosci 2007, 27:9408–9416.
165. Maguire EA, Gadian DG, Johnsrude IS, Good CD, Ashburner J, Frackowiak RSJ, Frith CD. Navigation‐related structural change in the hippocampi of taxi drivers. Proc Natl Acad Sci USA 2000, 97:4398–4403.
166. Woollett K, Maguire EA. Acquiring "the knowledge" of London's layout drives structural brain changes. Curr Biol 2011, 21:2109–2114.
167. Abrahams S, Pickering A, Polkey CE, Morris RG. Spatial memory deficits in patients with unilateral damage to the right hippocampal formation. Neuropsychologia 1997, 35:11–24.
168. Holdstock JS, Mayes AR, Cezayirli E, Isaac CL, Aggleton JP, Roberts N. A comparison of egocentric and allocentric spatial memory in a patient with selective hippocampal damage. Neuropsychologia 2000, 38:410–425.
169. Feigenbaum JD, Morris RG. Allocentric versus egocentric spatial memory after unilateral temporal lobectomy in humans. Neuropsychology 2004, 18:462–472.
170. Packard MG, McGaugh JL. Inactivation of hippocampus or caudate nucleus with lidocaine differentially affects expression of place and response learning. Neurobiol Learn Mem 1996, 65:65–72.
171. Hartley T, Maguire EA, Spiers HJ, Burgess N. The well‐worn route and the path less traveled: distinct neural bases of route following and wayfinding in humans. Neuron 2003, 37:877–888.
172. Iaria G, Petrides M, Dagher A, Pike B, Bohbot VD. Cognitive strategies dependent on the hippocampus and caudate nucleus in human navigation: variability and change with practice. J Neurosci 2003, 23:5945–5952.
173. Bohbot VD, Lerch J, Thorndycraft B, Iaria G, Zijdenbos AP. Gray matter differences correlate with spontaneous strategies in a human virtual navigation task. J Neurosci 2007, 27:10078–10083.
174. Burgess N. Spatial cognition and the brain. Ann N Y Acad Sci 2008, 1124:77–97.
175. Nadel L, Hardt O. The spatial brain. Neuropsychology 2004, 18:473–476.
176. Ekstrom AD, Arnold AEGF, Iaria G. A critical review of the allocentric spatial representation and its neural underpinnings: toward a network‐based perspective. Front Hum Neurosci 2014, 8:803.
177. Epstein R, Kanwisher N. A cortical representation of the local visual environment. Nature 1998, 392:598–601.
178. Epstein RA, Parker WE, Feiler AM. Where am I now? Distinct roles for parahippocampal and retrosplenial cortices in place recognition. J Neurosci 2007, 27:6141–6149.
179. Epstein RA. Parahippocampal and retrosplenial contributions to human spatial navigation. Trends Cogn Sci 2008, 12:388–396.
180. Konkle T, Oliva A. A real‐world size organization of object responses in occipitotemporal cortex. Neuron 2012, 74:1114–1124.
181. Marchette SA, Vass LK, Ryan J, Epstein RA. Anchoring the neural compass: coding of local spatial reference frames in human medial parietal lobe. Nat Neurosci 2014, 17:1598–1606.
182. Taube JS, Muller RU, Ranck JB. Head‐direction cells recorded from the postsubiculum in freely moving rats. II: effects of environmental manipulations. J Neurosci 1990, 10:436–447.
183. Fortin M, Voss P, Lord C, Lassonde M, Pruessner J, Saint‐Amour D, Rainville C, Lepore F. Wayfinding in the blind: larger hippocampal volume and supranormal spatial navigation. Brain 2008, 131(Pt 11):2995–3005.
184. Wolbers T, Klatzky RL, Loomis JM, Wutte MG, Giudice NA. Modality‐independent coding of spatial layout in the human brain. Curr Biol 2011, 21:984–989.
185. He C, Peelen MV, Han Z, Lin N, Caramazza A, Bi Y. Selectivity for large nonmanipulable objects in scene‐selective visual cortex does not require visual experience. Neuroimage 2013, 79:1–9.
186. Chan CCH, Wong AWK, Ting K‐H, Whitfield‐Gabrieli S, He J, Lee TMC. Cross auditory‐spatial learning in early‐blind individuals. Hum Brain Mapp 2012, 33:2714–2727.
187. De Volder AG, Catalan‐Ahumada M, Robert A, Bol A, Labar D, Coppens A, Michel C, Veraart C. Changes in occipital cortex activity in early blind humans using a sensory substitution device. Brain Res 1999, 826:128–134.
188. Bubic A, Striem‐Amit E, Amedi A. Large‐scale brain plasticity following blindness and the use of sensory substitution devices. In: Multisensory Object Perception in the Primate Brain. New York: Springer; 2010, 351–380.
189. Cattaneo Z, Vecchi T. Blind Vision: The Neuroscience of Visual Impairment. Cambridge, MA: MIT Press; 2011.
190. Burton H. Visual cortex activity in early and late blind people. J Neurosci 2003, 23:4005–4011.
191. Weeks R, Horwitz B, Aziz‐Sultan A, Tian B, Wessinger CM, Cohen LG, Hallett M, Rauschecker JP. A positron emission tomographic study of auditory localization in the congenitally blind. J Neurosci 2000, 20:2664–2672.
192. Gougoux F, Zatorre RJ, Lassonde M, Voss P, Lepore F. A functional neuroimaging study of sound localization: visual cortex activity predicts performance in early‐blind individuals. PLoS Biol 2005, 3:e27.
193. Zangaladze A, Epstein CM, Grafton ST, Sathian K. Involvement of visual cortex in tactile discrimination of orientation. Nature 1999, 401:587–590.
194. Renier L, De Volder AG, Rauschecker JP. Cortical plasticity and preserved function in early blindness. Neurosci Biobehav Rev 2014, 41:53–63.
195. Amedi A, Merabet LB, Bermpohl F, Pascual‐Leone A. The occipital cortex in the blind: lessons about plasticity and vision. Curr Dir Psychol Sci 2005, 14:306–311.
196. Noppeney U, Friston KJ, Ashburner J, Frackowiak R, Price CJ. Early visual deprivation induces structural plasticity in gray and white matter. Curr Biol 2005, 15:R488–R490.
197. Leporé N, Voss P, Lepore F, Chou Y‐Y, Fortin M, Gougoux F, Lee AD, Brun C, Lassonde M, Madsen SK, et al. Brain structure changes visualized in early‐ and late‐onset blind subjects. Neuroimage 2010, 49:134–140.
198. Kupers R, Ptito M. Insights from darkness: what the study of blindness has taught us about brain structure and function. In: Progress in Brain Research, vol. 192. Amsterdam: Elsevier B.V.; 2011, 17–31.
199. Yang C, Wu S, Lu W, Bai Y, Gao H. Anatomic differences in early blindness: a deformation‐based morphometry MRI study. J Neuroimaging 2014, 24:68–73.
200. Chebat D‐R, Chen J‐K, Schneider F, Ptito A, Kupers R, Ptito M. Alterations in right posterior hippocampus in early blind individuals. Neuroreport 2007, 18:329–333.
201. Leporé N, Shi Y, Lepore F, Fortin M, Voss P, Chou Y‐Y, Lord C, Lassonde M, Dinov I, Toga AW, et al. Patterns of hippocampal shape and volume differences in blind subjects. Neuroimage 2009, 46:949–957.
202. Voss P, Fortin M, Corbo V, Pruessner JC, Lepore F. Assessment of the caudate nucleus and its relation to route learning in both congenital and late blind individuals. BMC Neurosci 2013, 14:113.
203. Kober SE, Wood G, Kampl C, Neuper C, Ischebeck A. Electrophysiological correlates of mental navigation in blind and sighted people. Behav Brain Res 2014, 273:106–115.
204. Halko MA, Connors EC, Sánchez J, Merabet LB. Real world navigation independence in the early blind correlates with differential brain activity associated with virtual navigation. Hum Brain Mapp 2014, 35:2768–2778.
205. Deutschländer A, Stephan T, Hüfner K, Wagner J, Wiesmann M, Strupp M, Brandt T, Jahn K. Imagined locomotion in the blind: an fMRI study. Neuroimage 2009, 45:122–128.
206. Reich L, Maidenbaum S, Amedi A. The brain as a flexible task machine: implications for visual rehabilitation using noninvasive vs. invasive approaches. Curr Opin Neurol 2012, 25:86–95.
207. Humayun MS, de Juan E Jr, Weiland JD, Dagnelie G, Katona S, Greenberg R, Suzuki S. Pattern electrical stimulation of the human retina. Vision Res 1999, 39:2569–2576.
208. Oozeer M, Veraart C, Legat V, Delbeke J. Simulation of intra‐orbital optic nerve electrical stimulation. Med Biol Eng Comput 2005, 43:608–617.
209. Dowling J. Current and future prospects for optoelectronic retinal prostheses. Eye (Lond) 2009, 23:1999–2005.
210. Dobelle WH. Artificial vision for the blind by connecting a television camera to the visual cortex. ASAIO J 2000, 46:3–9.
211. Normann RA, Maynard EM, Guillory KS, Warren DJ. Cortical implants for the blind. IEEE Spectr 1996, 33:54–59.
212. Luo YH‐L, Zhong JJ, da Cruz L. The use of Argus® II retinal prosthesis by blind subjects to achieve localisation and prehension of objects in 3‐dimensional space. Graefes Arch Clin Exp Ophthalmol 2015, 253:1907–1914.
213. Merabet LB, Rizzo JF, Amedi A, Somers DC, Pascual‐Leone A. What blindness can tell us about seeing again: merging neuroplasticity and neuroprostheses. Nat Rev Neurosci 2005, 6:71–77.
214. Gothe J, Brandt SA, Irlbacher K, Roricht S, Sabel BA, Meyer B‐U. Changes in visual cortex excitability in blind subjects as demonstrated by transcranial magnetic stimulation. Brain 2002, 125:479–490.
215. Garcia S, Petrini K, Rubin GS, Da Cruz L, Nardini M. Visual and non‐visual navigation in blind patients with a retinal prosthesis. PLoS ONE 2015, 10:e0134369.
216. Scherlen A‐C, Dumas JC, Guedj B, Vignot A. "RecognizeCane": the new concept of a cane which recognizes the most common objects and safety clues. In: 29th Annual International Conference of the IEEE on Engineering in Medicine and Biology Society (EMBS 2007), Lyon, France, 2007, 6357–6360.
217. Bourbakis N. Sensing surrounding 3‐D space for navigation of the blind: a prototype system featuring vibration arrays and data fusion provides a near real‐time feedback. IEEE Eng Med Biol Mag 2008, 27:49–55.
218. Johnson LA, Higgins CM. A navigation aid for the blind using tactile‐visual sensory substitution. Conf Proc IEEE Eng Med Biol Soc 2006, 1:6289–6292.
219. Mann S, Huang J, Janzen R, Lo R, Rampersad V, Chen A, Doha T. Blind navigation with a wearable range camera and vibrotactile helmet. In: Proceedings of the 19th ACM International Conference on Multimedia (MM'11). New York: ACM Press; 2011, 1325.
220. José J, Farrajota M, Rodrigues JMF, du Buf HJ. The SmartVision local navigation aid for blind and visually impaired persons. Int J Digit Content Technol Appl 2011, 5:362–375.
221. Kammoun S, Parseihian G, Gutierrez O, Brilhault A, Serpa A, Raynal M, Oriola B, Macé MJ‐M, Auvray M, Denis M, et al. Navigation and space perception assistance for the visually impaired: the NAVIG project. IRBM 2012, 33:182–189.
222. Pressl B, Wieser M. A computer‐based navigation system tailored to the needs of blind people. In: Computers Helping People with Special Needs, vol. 4061. Berlin/Heidelberg: Springer; 2006.
223. Sai Santhosh S, Sasiprabha T, Jeberson R. BLI–NAV embedded navigation system for blind people. In: Recent Advances in Space Technology Services and Climate Change 2010 (RSTS & CC‐2010). Chennai, India: IEEE; 2010, 277–282.
224. Schmitz B, Becker S, Blessing A, Großmann M. Acquisition and presentation of diverse spatial context data for blind navigation. In: 2011 IEEE 12th International Conference on Mobile Data Management, vol. 1. Luleå, Sweden: IEEE; 2011, 276–284.
225. Serrão M, Shahrabadi S, Moreno M, José JT, Rodrigues JI, Rodrigues JMF, du Buf JMH. Computer vision and GIS for the navigation of blind persons in buildings. Univ Access Inf Soc 2014, 14:67–80.
226. Dakopoulos D, Bourbakis NG. Wearable obstacle avoidance electronic travel aids for blind: a survey. IEEE Trans Syst Man Cybern C Appl Rev 2010, 40:25–35.
227. Meijer PBL. An experimental system for auditory image representation. IEEE Trans Biomed Eng 1992, 39:112–121.
228. Bach‐y‐Rita P, Collins C, Saunders F, White B, Scadden L. Vision substitution by tactile image projection. Nature 1969, 221:963–964.
229. Elli GV, Benetti S, Collignon O. Is there a future for sensory substitution outside academic laboratories? Multisens Res 2014, 27:271–291.
230. Spence C. The skin as a medium for sensory substitution. Multisens Res 2014, 27:293–312.
231. Farcy R, Leroux R, Jucha A, Damaschini R, Grégoire C, Zogaghi A. Electronic travel aids and electronic orientation aids for blind people: technical, rehabilitation, and everyday life points of view. In: Conference & Workshop on Assistive Technologies for People with Vision & Hearing Impairments: Technology for Inclusion, Kufstein, Austria, 2006.
232. Shoval S, Borenstein J, Koren Y. The NavBelt—a computerized travel aid for the blind based on mobile robotics technology. IEEE Trans Biomed Eng 1998, 45:1376–1386.
233. Segond H, Weiss D, Sampaio E. Human spatial navigation via a visuo‐tactile sensory substitution system. Perception 2005, 34:1231–1249.
234. Chebat D‐R, Schneider FC, Kupers R, Ptito M. Navigation with a sensory substitution device in congenitally blind individuals. Neuroreport 2011, 22:342–347.
235. Amedi A, Stern WM, Camprodon JA, Bermpohl F, Merabet L, Rotman S, Hemond C, Meijer P, Pascual‐Leone A. Shape conveyed by visual‐to‐auditory sensory substitution activates the lateral occipital complex. Nat Neurosci 2007, 10:687–689.
236. Poirier C, De Volder AG, Scheiber C. What neuroimaging tells us about sensory substitution. Neurosci Biobehav Rev 2007, 31:1064–1070.
237. Hirahara Y, Sakurai Y, Shiidu Y, Yanashima K, Magatani K. Development of the navigation system for the visually impaired by using white cane. In: 2006 International Conference of the IEEE Engineering in Medicine and Biology Society, vol. 1. New York: IEEE; 2006, 4893–4896.
238. Ando B. A smart multisensor approach to assist blind people in specific urban navigation tasks. IEEE Trans Neural Syst Rehabil Eng 2008, 16:592–594.
239. Velazquez R, Bazan O. Preliminary evaluation of podotactile feedback in sighted and blind users. Conf Proc IEEE Eng Med Biol Soc 2010, 2010:2103–2106.
240. Adame RM, Möller K, Seeman E. Wearable navigation aids for visually impaired people based on vibrotactile skin stimuli. In: Roa Romero LM, ed. IFMBE Proceedings: XIII Mediterranean Conference on Medical and Biological Engineering and Computing 2013, vol. 41. Cham: Springer International Publishing; 2014.
241. Bousbia‐Salah M, Bettayeb M, Larbi A. A navigation aid for blind people. J Intell Robot Syst 2011, 64:387–400.
242. Chen H‐E, Lin Y‐Y, Chen C‐H, Wang I‐F. BlindNavi: a navigation app for the visually impaired smartphone user. In: Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA'15). New York: ACM Press; 2015, 19–24.
243. Caperna S, Cheng C, Cho J, Fan V, Luthra A, O'Leary B, Sheng J, Sun A, Stearns L, Tessler R, et al. A navigation and object location device for the blind. PhD Thesis, University of Maryland, College Park, 2009.
244. De Felice F, Renna F, Attolico G, Distante A. A haptic/acoustic application to allow blind the access to spatial information. In: Second Joint EuroHaptics Conference and Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems (WHC'07). Washington, DC: IEEE; 2007, 310–315.
245. Hub A, Hartter T, Kombrink S, Ertl T. Real and virtual explorations of the environment and interactive tracking of movable objects for the blind on the basis of tactile‐acoustical maps and 3D environment models. Disabil Rehabil Assist Technol 2008, 3:57–68.
246. Hara M, Shokur S, Yamamoto A, Higuchi T, Gassert R, Bleuler H. Virtual environment to evaluate multimodal feedback strategies for augmented navigation of the visually impaired. Conf Proc IEEE Eng Med Biol Soc 2010, 2010:975–978.
247. Takatori N, Nojima K, Matsumoto M, Yanashima K, Magatani K. Development of voice navigation system for the visually impaired by using IC tags. Conf Proc IEEE Eng Med Biol Soc 2006, 1:5181–5184.
248. Bologna G, Deville B, Pun T. Blind navigation along a sinuous path by means of the See ColOr interface. In: Mira J, Ferrández JM, Álvarez JR, de la Paz F, Toledo FJ, eds. Bioinspired Applications in Artificial and Natural Computation, vol. 5602. Berlin/Heidelberg: Springer; 2009.
249. Durette B, Louveton N, Alleysson D, Herault J. Visuo‐auditory sensory substitution for mobility assistance: testing TheVIBE. In: Workshop on Computer Vision Applications for the Visually Impaired, Marseille, France, 2008, 1–29.
250. Kärcher SM, Fenzlaff S, Hartmann D, Nagel SK, König P. Sensory augmentation for the blind. Front Hum Neurosci 2012, 6:37.
251. Kammoun S, Jouffrais C, Guerreiro T, Nicolau H, Jorge J. Guiding blind people with haptic feedback. In: Pervasive 2012 Workshop on Frontiers in Accessibility for Pervasive Computing, Newcastle, UK, 2012, 217–226.
252. Kamiński Ł, Bruniecki K. Mobile navigation system for visually impaired users in the urban environment. Metrol Meas Syst 2012, 19:245–256.
253. Marston JR, Loomis JM, Klatzky RL, Golledge RG, Smith E. Evaluation of spatial displays for navigation without sight. ACM Trans Appl Percept 2006, 3:110–124.
254. Kalia AA, Legge GE, Roy R, Ogale A. Assessment of indoor route‐finding technology for people with visual impairment. J Vis Impair Blind 2010, 104:135–147.
255. Legge GE, Beckmann PJ, Tjan BS, Havey G, Kramer K, Rolkosky D, Gage R, Chen M, Puchakayala S, Rangarajan A. Indoor navigation by people with visual impairment using a digital sign system. PLoS ONE 2013, 8:e76783.
256. Ganz A, Schafer J, Puleo E, Wilson C, Robertson M. Quantitative and qualitative evaluation of PERCEPT indoor navigation system for visually impaired users. Conf Proc IEEE Eng Med Biol Soc 2012, 2012:5815–5818.
257. Ivanov R. Indoor navigation system for visually impaired. In: Proceedings of the 11th International Conference on Computer Systems and Technologies and Workshop for PhD Students in Computing (CompSysTech'10). New York: ACM Press; 2010, 143.
258. Merabet LB, Connors EC, Halko MA, Sánchez J. Teaching the blind to find their way by playing video games. PLoS ONE 2012, 7:e44958.
LahavO ,SchloerbDW ,KumarS ,SrinivasanMA . A virtual environment for people who are blind—a usability study. J Assist Technol2012, 6:1–21.[Google Scholar] - 260.
LahavO ,SchloerbDW ,SrinivasanMA . Newly blind persons using virtual environment system in a traditional orientation and mobility rehabilitation program: a case study. Disabil Rehabil Assist Technol2012, 7:420–435.[PubMed][Google Scholar] - 261.
CardinS ,ThalmannD ,VexoF . A wearable system for mobility improvement of visually impaired people. Vis Comput2006, 23:109–118.[Google Scholar] - 262.
GuerreroLA ,VasquezF ,OchoaSF . An indoor navigation system for the visually impaired. Sensors2012, 12:8236–8258.[PubMed][Google Scholar] - 263.
SánchezJ ,de la TorreN . Autonomous navigation through the city for the blind. In: Proceedings of the 12th international ACM SIGACCESS Conference on Computers and accessibility (ASSETS’10) New York: ACM Press; 2010, 195. - 264.
BhatlawandeSS ,MukhopadhyayJ ,MahadevappaM . Ultrasonic spectacles and waist‐belt for visually impaired and blind person. In: 2012 National Conference on Communications (NCC) Kharagpur, India: IEEE; 2012, 1–4. - 265.
Hartcher‐O'BrienJ ,AuvrayM ,HaywardV . Perception of distance‐to‐obstacle through time‐delayed tactile feedback. In: 2015 I.E. World Haptics Conference (WHC) Evanston, IL: IEEE; 2015, 7–12. - 266.
StollC ,Palluel‐GermainR ,Richard FristotV ,PellerinD ,AlleyssonD ,GraffC . Navigating from a depth image converted into sound. Appl Bionics Biomech2015, 2015:9.[Google Scholar] - 267.
RiehleTH ,AndersonSM ,LichterPA ,GiudiceNA ,SheikhSI ,KnueselRJ ,KollmannDT ,HedinDS . Indoor magnetic navigation for the blind. Conf Proc IEEE Eng Med Biol Soc2012, 2012:1972–1975.[PubMed][Google Scholar] - 268.
DunaiL ,Peris‐FajarnésG ,LlunaE ,DefezB . Sensory navigation device for blind people. J Navig2013, 66:349–362.[Google Scholar] - 269.
LahavO ,MioduserD . Haptic‐feedback support for cognitive mapping of unknown spaces by people who are blind. Int J Hum Comput Stud2008, 66:23–35.[Google Scholar] - 270.
MaidenbaumS ,Levy‐TzedekS ,ChebatD‐R ,AmediA . Increasing accessibility to the blind of virtual environments, using a virtual mobility aid based on the “EyeCane”: feasibility study. PLoS One2013, 8:e72555.[PubMed][Google Scholar] - 271.
271. Buchs G, Maidenbaum S, Amedi A. Obstacle identification and avoidance using the “EyeCane”: a tactile sensory substitution device for blind individuals. In: Haptics: Neuroscience, Devices, Modeling, and Applications, Versailles, France, 2014, 96–103.
272. Warren DH. Blindness and Early Childhood Development. New York: American Foundation for the Blind; 1977.
273. Moore JE, Graves WH, Patterson JB. Foundations of Rehabilitation Counseling with Persons Who Are Blind or Visually Impaired. New York: AFB Press; 1997.
274. Huebner KM. Visual impairment. In: Foundations of Education, Vol. I: History and Theory of Teaching Children and Youths With Visual Impairments. New York: AFB Press; 2000.
275. Fletcher FJ. Spatial representation in blind children. 3: Effects of individual differences. J Vis Impair Blind 1981, 75:46–49.
276. Hill EW, Rieser JJ, Hill MM, Hill M, Halpin J, Halpin R. How persons with visual impairments explore novel spaces: strategies of good and poor performers. J Vis Impair Blind 1993, 87:295–301.
277. Hill E, Ponder P. Orientation and Mobility Techniques: A Guide for the Practitioner. New York: American Foundation for the Blind; 1976.
278. Tellevik JM. Influence of spatial exploration patterns on cognitive mapping by blindfolded sighted persons. J Vis Impair Blind 1992, 86:221–224.
279. Gaunet F, Martinez J‐L, Thinus‐Blanc C. Early‐blind subjects’ spatial representation of manipulatory space: exploratory strategies and reaction to change. Perception 1997, 26:345–366.