
Locally embedded presages of global network bursts

  1. Taro Toyoizumi (c)
  1. (a) Department of Basic Neuroscience, University of Geneva, Centre Médical Universitaire, Genève 1211, Switzerland;
  2. (b) Precursory Research for Embryonic Science and Technology (PRESTO), Japan Science and Technology Agency, Kawaguchi, Saitama 332-0012, Japan;
  3. (c) RIKEN Brain Science Institute, Wako, Saitama 351-0198, Japan;
  4. (d) Graduate School of Information Science and Technology, The University of Tokyo, Bunkyo-ku, Tokyo 113-8656, Japan;
  5. (e) Department of Biosystems Science and Engineering, ETH Zurich, Basel 4058, Switzerland;
  6. (f) Research Center for Advanced Science and Technology, The University of Tokyo, Meguro-ku, Tokyo 153-8904, Japan
  1. Edited by Terrence J. Sejnowski, Salk Institute for Biological Studies, La Jolla, CA, and approved July 28, 2017 (received for review April 14, 2017)

Significance

This paper reports an approach to detecting the “early warnings” of upcoming global state transitions in a network based on its local dynamics, demonstrating that seemingly stochastic global events can be predicted by local deterministic dynamics. Using a method based on nonlinear state-space reconstruction, we show that, surprisingly, the dynamics of individual neurons can robustly predict upcoming synchronous bursts in the neural population at high signal-to-noise ratios, even outperforming predictions based on the population activity itself. We explain this apparently counterintuitive property by the network structures realized in the nonbursting periods, supported by a manipulation experiment and analyses. These results reveal basic properties of bursting network dynamics.

Abstract

Spontaneous, synchronous bursting of neural populations is a widely observed phenomenon in nervous networks, which is considered important for functions and dysfunctions of the brain. However, how global synchrony across a large number of neurons emerges from an initially nonbursting network state is not fully understood. In this study, we develop a state-space reconstruction method combined with high-resolution recordings of cultured neurons. This method extracts deterministic signatures of upcoming global bursts in the “local” dynamics of individual neurons during nonbursting periods. We find that local information within a single-cell time series can match or even outperform the global mean-field activity for predicting future global bursts. Moreover, the intercell variability in burst predictability is found to reflect the network structure realized in the nonbursting periods. These findings suggest that deterministic local dynamics can predict seemingly stochastic global events in self-organized networks, implying potential applications of the present methodology to detecting locally concentrated early warnings of spontaneous seizures in the brain.

The collective firing dynamics of neural populations have been related to emergent functions and dysfunctions of the brain. Unraveling the structure of dynamics emerging from the collective behavior of neural networks is a central issue in studies of complex systems (1). The simplest and most profound example of collective dynamics is the synchronous burst: the simultaneous occurrence of closely spaced action potentials (“bursts”) across a large number of neurons in the network. Within neural circuits in vivo, synchronous bursts are believed to serve important roles in information storage and transmission (2, 3) as well as in disease states including epileptic seizures (4–6). Intriguingly, neurons cultured in vitro also display spontaneous synchronous bursts akin to those observed in vivo (7–9). This indicates that collective bursting is a universal behavior of neural networks across a wide variety of preparations, for which new approaches to understanding its foundations are much needed (1, 10, 11).

Classic biophysical models of synchronized bursts assume broadly diverging synaptic interactions that propagate the activity of a single cell to the remaining neural population (4, 12–14). This view has been partially supported by experiments demonstrating that stimulating a single neuron can change the large-scale dynamics of neural populations (15–17). These models and experiments suggest a close link between single-neuron activity and the global state of a neural population. On the other hand, previous stochastic models do not specify when a global burst occurs; forecasting the spontaneous occurrence of a synchronous burst from the preceding individual neural activities in nonbursting states remains a highly challenging issue, particularly in real nervous systems, which feature heterogeneous local–global coupling (e.g., nonuniform coupling strength between individual neural activity and the overall firing of the population) (18). Indeed, recent studies demonstrate potentially diverse dynamical mechanisms accompanied by complex but reproducible patterns of activity sequences leading to synchronous bursts (11, 19), implying the existence of nonlinear deterministic mechanisms around burst initiation.

The present paper investigates the deterministic aspects of burst initiation. In particular, we ask whether individual neuron dynamics in nonbursting periods can predict an upcoming synchronous burst in a cultured neural population, which provides one of the simplest model systems of self-organized neurons. To cope with the heterogeneous nature of local–global coupling, we introduce a model-free forecasting method based on a nonlinear state-space reconstruction technique. The method extends data-driven nonlinear forecasting techniques (20–25) to characterize the global–local correspondence in network dynamics, and reveals the deterministic relationships between individual neuron traces and the global state. Using this method, we report that the dynamics of only one neuron can robustly predict an upcoming synchronous burst in the neural population at a high signal-to-noise ratio. Surprisingly, burst predictability with an appropriately chosen neuron even outperforms that with the global (i.e., population mean-field) activity of the neural population, demonstrating that the macroscopic fluctuations of a neural population can be better predicted from the microscopic dynamics of a specific single cell than from the macroscopic state itself. This phenomenon is explained by the heterogeneous causal interactions among neurons. The present findings demonstrate the effectiveness of nonlinear state-space reconstruction techniques for analyzing causal coupling in in vitro system dynamics.

Results

Spontaneous Synchronous Bursts in Cultured Neurons.

Rat cortical neurons were grown on high-density complementary metal–oxide–semiconductor (CMOS)-based multielectrode arrays (MEAs) for 12–41 d in vitro (Fig. 1A) (26, 27). The devices allow us to simultaneously stimulate and record from multiple neurons lying on the array surface at a high spatiotemporal resolution (Methods). Each recording yielded simultaneous traces of 60–98 cells at a temporal resolution of 2 kHz. Fig. 1B shows the cell distribution recorded with a representative array preparation. In our sparse culturing preparations, the cell bodies were well isolated, and the action potential of each cell was identified accurately based on its spatial location and waveform. Action potentials were detected using the standard LimAda method (28).

We first analyzed the spontaneous activities of the recorded neural populations. During spontaneous firing, the neurons showed intermittent synchronous bursts interleaved with silent periods (Fig. 1C). To describe the macroscopic network dynamics, we defined the population activity as the mean firing rate across all recorded neurons (Fig. 1D). The synchronous bursting period was defined by time bins in which the population activity exceeded a threshold determined for each preparation, according to a previously established method based on interspike interval distributions (27) (Fig. 1D, Inset). The timing of synchronous burst occurrence did not show any clear periodicity. The duration of bursts varied from about 200 to 800 ms. Different preparations showed distinct, complex patterns of collective activity dynamics when visualized with standard dimensionality reduction techniques such as principal component analysis (Fig. S1).

Fig. S1.

Population activity dynamics visualized with principal component analysis (PCA). The trajectories of population activity were plotted within the space of top three principal components (PCs one to three), by applying PCA to the multineuron time series for each preparation. The results for three representative preparations are shown.

Predicting Global Network Bursts.

To capture the potentially complex temporal structures in neural activities around bursting dynamics, we developed a model-free method that quantifies the general relationships between network and single-element dynamics in high-dimensional nonlinear systems. The present protocol is based on state-space reconstruction using the delay-embedding theorems for deterministic dynamical systems (29, 30). The method can be interpreted as an application of previous nonlinear forecasting methods, known as “simplex projection” (20) and “convergent cross-mapping (CCM)” (31), but applied here in a cross-scale domain with time delays and targeting a specific global event. It extends/integrates the previous methods in three aspects. First, the method focuses particularly on the relationships between a global state variable (e.g., the mean-field variable) and local variables. Second, the method quantifies the predictability of the global variable based on temporally distant dynamics of each local variable. Third, it constructs a detector of a specific global event (e.g., the spontaneous synchronous burst) through bidirectional mappings between the global and local variables. Because “targeting” a specific global event is a prominent feature of our present approach, we will refer to the method as “event-targeted CCM (eCCM)” in what follows. Note that the idea of mapping variables under time delays is also found in previous studies (23). In addition, the general problem of how the nonlinear signals in time series are affected by scale has also been discussed in other contexts (20, 24, 25). Below we summarize our basic protocol for the eCCM analysis.

Fig. 2 illustrates the analytic protocol (further details are provided in Methods). We first reconstruct the dynamics of the population activity and of each single-neuron activity, respectively, in delay-coordinate state spaces (Fig. 2 A and B):

b̄_t^d = (b_t, b_{t−τ}, …, b_{t−(d−1)τ}),   [1]

x̄_{i,t}^d = (x_{i,t}, x_{i,t−τ}, …, x_{i,t−(d−1)τ}),   [2]

where b_t and x_{i,t} are the population mean field and neuron i’s activity at time t, respectively; d denotes the embedding dimension (the number of delay coordinates); and τ is the unit delay size. In this study, the local time series x_{i,t} was defined by each neuron i’s spike train smoothed with a Gaussian kernel, and b_t was defined by the mean of x_{i,t} over all of the simultaneously recorded neurons (Methods). For convenience, we refer to b̄_t^d and x̄_{i,t}^d as the “global” and “local” state trajectories, respectively. We measure the accuracy of synchronous-burst prediction by seeking “presages” of global bursts in each local state trajectory, based on the accuracy of forecasting with a nearest-neighbor model (Fig. 2A; see also the figure legend for a step-by-step description of the protocol). The rationale behind this protocol is as follows: if neuron i has enough information to predict a future synchronous burst occurring after a time span −Δt (−Δt > 0), neuron i’s state up to time −Δt before the targeted burst should already be differentiated from states without any future bursting, forming a cluster of states that deviates from those predicting nonbursting periods in the reconstructed state space.
This early warning is detected by finding the bidirectional maps between the targeted global event (i.e., the synchronous burst) and the local states (i.e., single-neuron dynamics), as illustrated in Fig. 2A. Similarly, we define the accuracy of “postdiction” of bursts (i.e., detecting a burst event based on neural activities after its occurrence) with the same protocol as that for prediction, except that the time span Δt takes a positive value. We iterated this procedure for all of the neurons to characterize the burst predictability of individual neurons. In addition, we quantified the “self-predictability” of the mean-field activity by replicating the above procedure with the global state trajectory b̄_t^d itself in place of a local state trajectory x̄_{i,t}^d (Fig. 2B).
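To make the embedding step concrete, the following is a minimal sketch (in Python with NumPy/SciPy; helper names such as delay_embed and local_and_global_traces, the bin width, and the exact kernel width are illustrative choices, not specifications from the paper) of how the local and global state trajectories of Eqs. 1 and 2 could be built from binned spike counts.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def delay_embed(series, d, tau):
    """Delay-coordinate vectors (s_t, s_{t-tau}, ..., s_{t-(d-1)tau}) as rows.

    series : 1-D activity trace on a regular time grid.
    d      : embedding dimension (number of delay coordinates).
    tau    : unit delay, in samples.
    Row j corresponds to time t = (d - 1) * tau + j.
    """
    n = len(series) - (d - 1) * tau
    cols = [series[(d - 1 - k) * tau:(d - 1 - k) * tau + n] for k in range(d)]
    return np.column_stack(cols)

def local_and_global_traces(spike_counts, sigma_bins=4.0):
    """Smoothed single-neuron traces x_{i,t} and their mean field b_t.

    spike_counts : (n_neurons, n_bins) binned spike counts (e.g., 5-ms bins);
    sigma_bins approximates the 25-ms Gaussian kernel mentioned in SI Methods.
    """
    x = gaussian_filter1d(spike_counts.astype(float), sigma=sigma_bins, axis=1)
    x -= x.min(axis=1, keepdims=True)
    x /= np.maximum(np.ptp(x, axis=1, keepdims=True), 1e-12)  # normalize to [0, 1]
    b = x.mean(axis=0)                                        # population mean field b_t
    return x, b

# Eqs. 1 and 2 for one neuron i, with illustrative d = 8 and tau = 1 bin:
# x, b = local_and_global_traces(spike_counts)
# X_i = delay_embed(x[i], d=8, tau=1); B = delay_embed(b, d=8, tau=1)
```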

Fig. 2.

eCCM analysis relates global-network events to the preceding local states through state-space reconstruction. (A) Forecasting by a local signal. In each box, the left and right trajectories depict the delay-coordinate reconstruction (see the text) of the mean field (global state) and of a single-neuron activity (local state), respectively. A representative neuron (Cell 1) is shown. Dark-blue dots: the time points of peak global activities during the bursts; red dots: the temporally preceding states traced back from the individual global-activity peaks (here, shift time = −100 ms); magenta dots: the neighboring states within the reconstructed state space for Cell 1 (10 nearest neighbors for each burst peak); cyan cross: the output of the present protocol (i.e., the predicted mean field). A single neuron’s ability to predict each burst event is quantified in five steps: (i) selecting a time point t on a local-state trajectory (say, of neuron i) that corresponds to a peak population activity found in the global-state trajectory (the target bursting state to be predicted), (ii) tracing back the local-state trajectory by a given time span, Δt, (iii) collecting “neighbor” time points corresponding to states near t − Δt in the local-state trajectory (avoiding points temporally too near), (iv) mapping the neighbor time points back onto the global-state trajectory, and (v) advancing the time by Δt on the global-state trajectory to obtain the global-state prediction (the cyan cross). When the global-state prediction deviates from the nonbursting-state distribution by having a significantly large value, we defined it as a successful detection of a burst (refer to Methods for formal descriptions). The accuracy of burst prediction is higher when the traced-back states (the red dots) deviate more from those in the nonbursting period in the local trajectory. Although the combination of tracing time in the retrograde direction by Δt (step ii) and mutual mapping between global and local states (steps i and iv) are newly introduced features of this study, step v alone could be regarded as a variant of the previously proposed forecasting methods based on simplex projection (20). On the other hand, if steps iii–v alone are applied (i.e., without any constraint on the targeted event, unlike our current protocol with steps i and ii), the procedure reduces to the CCM algorithm including time delays (23, 31) but applied to cross-scale predictions. (B) Forecasting by a global signal. The same as in A, except that the global state itself is used instead of the local state.
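As a concrete illustration of steps i–v in the legend above, the sketch below (Python/NumPy; it assumes a delay-embedded trajectory such as the output of the earlier delay_embed helper, and uses illustrative names such as eccm_predict; the neighbor count and temporal-exclusion window are assumptions) traces each burst peak back by Δt on the local trajectory, collects nearest-neighbor states, maps their time indices onto the global series, and advances them by Δt to obtain the predicted global state.

```python
import numpy as np

def eccm_predict(local_traj, global_series, peak_times, dt, k=10, exclusion=50):
    """Event-targeted cross-mapping prediction of the global state (a sketch).

    local_traj    : (T, d) delay-embedded trajectory of one neuron.
    global_series : (T,) population mean field, aligned to local_traj's rows.
    peak_times    : indices of burst peaks in the global activity (step i).
    dt            : shift in samples (negative for prediction, positive for postdiction).
    """
    T = len(global_series)
    preds = []
    for t in peak_times:
        t_query = t + dt                              # step ii: trace back by dt
        if not 0 <= t_query < T:
            continue
        dists = np.linalg.norm(local_traj - local_traj[t_query], axis=1)
        dists[max(0, t_query - exclusion):t_query + exclusion] = np.inf  # skip temporally close points
        neighbors = np.argsort(dists)[:k]             # step iii: nearest local states
        targets = neighbors - dt                      # steps iv-v: map onto global trajectory, advance by |dt|
        targets = targets[(targets >= 0) & (targets < T)]
        if len(targets):
            preds.append(np.median(global_series[targets]))  # predicted mean field (cyan cross)
    return np.array(preds)
```

A predicted value counts as a detected burst when it exceeds a threshold taken from the nonbursting distribution of the same estimator (the 95th percentile used in Results).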

Using this method, we investigated the predictability of synchronous bursts based on the individual neural activities or the population mean field. In particular, we addressed two key questions: (i) how accurately can each single neuron predict the future synchronous burst events, and (ii) if a subset of neurons predicts synchronous bursts better than others, how are they related to the underlying neural network structure?

Single-Neuron Dynamics Can Predict Global Network Bursts.

First, we asked how accurately individual neurons predict (or postdict) the occurrence of synchronous bursts, and how their predictions compare with those based on the population mean field, which defines the synchronous burst. We quantified the rate of successful synchronous-burst predictions for each single neuron while keeping the false-positive rate at 5% (Methods). Interestingly, in some cells (e.g., Fig. 3A), the rates of successful burst predictions based on the eCCM analysis could be relatively high (>50% of trials) even when the network appeared to be silent in terms of the global mean-field firing rate (Fig. 3D, −1,000 to −500 ms). Correspondingly, the success rate of burst detection using single-neuron dynamics in the eCCM could be even higher than that using the global mean-field dynamics (Fig. 3B). The predictability of burst occurrence varied among neurons, indicating heterogeneity of burst predictability across neurons. However, the accuracy of predictions of individual neurons was highly consistent among different sets of burst trials within each preparation (ρ = 0.79–0.93, P < 2 × 10^−23, Spearman’s rank correlation between the first and second halves of trials; Fig. 3E), demonstrating that this cell-to-cell variability reflects a robust property of each neuron. Across all of the preparations, about one-third of neurons were found to predict synchronous bursts robustly better than the population mean field over some prediction span, Δt (Fig. 3G). Note that the prediction by the mean field is not equal to the average performance of single neurons. How a single neuron can predict the mean field better than the mean field itself does is a nontrivial issue.

Fig. 3.

Presages of global bursts within single neurons. (A) Burst detection with the eCCM based on a single representative neuron. The black trace with the gray shade represents the median and quartiles of the estimated global states, b̂_t, in individual trials around burst peaks. The burst-detection threshold (dashed green line) was defined by the 95th percentile of the estimated global activity outside the bursting period. The red curve shows the fraction (“hit rate”) of successfully detected bursts in all of the burst trials. The curves are smoothed with a 250-ms boxcar kernel for visualization. Δt takes a negative value when predicting and a positive value when “postdicting” burst events. (B) Burst detection with the eCCM based on the mean-field activity of the population. (C) Burst detection with the momentary firing rate (FR) of the same representative single neuron as shown in A. (D) Burst detection with the momentary mean-field firing rate. (E) Burst predictability is a robust property of individual neurons. Dividing the data into two halves (the odd and even trials of spontaneous bursts) shows that the variability in burst-detection success rate is highly consistent over time. Each marker corresponds to a neuron. The filled markers represent the results using the mean-field activity. The success rate was averaged over the range of time spans −200 ms < Δt < −50 ms. (F) Burst detection with the eCCM is more accurate than that based on the momentary firing rate. The comparison of success rates in burst detection with the eCCM and with the firing-rate-based method. (Inset) The control analysis in which we compared the success rates of the eCCM with those of the firing-rate-based method using multiple time bins, matching the number of bins used in the eCCM. The gray bars show the fraction of cells in all of the preparations. The arrowheads indicate the medians. The colored lines show the fractions in the individual preparations. (G) A single cell can outperform the mean field at predicting spontaneous network bursts, particularly when we use the information in the temporal patterns of neural activity rather than the momentary activity. The light- and dark-blue curves, respectively, show the success rates of burst detection with single neurons relative to that with the mean field, for the eCCM and the firing-rate-based analysis. For each method, the cells were sorted by success rate.

The fact that a single neuron can match or even outperform the population mean at predicting the mean-field dynamics may appear surprising, considering that single-cell activities fluctuate due to spiking much more than the mean-field activity (e.g., compare the black traces in Fig. 3 C and D). However, it is explained by the delay-embedding theorems (29, 30), which imply that the global state information can be reconstructed from the time series of a single variable participating in the dynamical system (Fig. S2). The present result suggests that this mathematical property can be demonstrated in a biological system. Importantly, reconstructing the global state requires observing multiple time points rather than the momentary state of the observed variable (which is why the theorem is called the “delay-embedding” theorem). This suggests that information about the global state is carried by temporal patterns spanning multiple time points, rather than by momentary snapshots of neural activity.

Fig. S2.

Delay-embedding theorem (known as Takens’ theorem). Suppose that we have the time series of three variables, (x(t), y(t), z(t)), generated by some differential equations: ẋ(t) = f(x(t), y(t), z(t)), ẏ(t) = g(x(t), y(t), z(t)), and ż(t) = h(x(t), y(t), z(t)). (A) The evolution of the global state (x(t), y(t), z(t)) is represented by a trajectory in the space of those three variables. (B) Consider observing the temporal sequence of a single variable, e.g., x(t). (C) The attractor topology in the original global-state space is fully recovered in a delay coordinate of the observed variable, (x(t), x(t−τ), x(t−2τ)), with an arbitrary unit delay, τ. Namely, we can construct a smooth one-to-one map from the reconstructed attractor to the original attractor.
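For numerical intuition, the reconstruction in Fig. S2 can be reproduced with any three-variable chaotic system; the sketch below (Python/NumPy) uses the Lorenz equations as an illustrative stand-in for the unspecified f, g, and h, and rebuilds a shadow attractor from the observed x(t) alone.

```python
import numpy as np

def lorenz_trajectory(n_steps=20000, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate the Lorenz system with simple Euler steps (illustrative only)."""
    xyz = np.empty((n_steps, 3))
    xyz[0] = (1.0, 1.0, 1.0)
    for i in range(1, n_steps):
        x, y, z = xyz[i - 1]
        xyz[i] = xyz[i - 1] + dt * np.array([sigma * (y - x),
                                             x * (rho - z) - y,
                                             x * y - beta * z])
    return xyz

xyz = lorenz_trajectory()           # the "true" global state (x, y, z)
x = xyz[:, 0]                       # observe a single variable, x(t)
tau = 10                            # unit delay, in samples
shadow = np.column_stack([x[2 * tau:], x[tau:-tau], x[:-2 * tau]])
# 'shadow' holds (x(t), x(t - tau), x(t - 2*tau)); by Takens' theorem its
# trajectory is a smooth one-to-one image of the original attractor in A.
```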

To explore the importance of temporal activity patterns for predicting bursts, compared with momentary firing rates, we compared the obtained prediction accuracy with an analogous index based on momentary firing rates, x_{i,t} or b_t (the red traces in Fig. 3 C and D, respectively; see also Methods). In single neurons, the eCCM tended to provide more accurate predictions than the momentary-activity-based method (P < 5 × 10^−14 in all of the individual preparations, sign test with paired samples; Fig. 3F). This superiority of the eCCM over the momentary firing rate is not likely to be due to a difference in the sampled spike numbers, because it remained even when we lengthened the bin size for computing activity so as to cover the same total duration as used in the eCCM analysis [P < 6 × 10^−3 in all of the preparations except Chip 2427 (which had a much lower firing rate than the others), sign test with paired samples; Fig. 3F, Inset]. Finally, when we used the momentary firing rates instead of the eCCM based on temporal sequences, the fraction of cells whose burst detectability outperformed that of the mean field was reduced from ~1/3 to ~1/5 (Fig. 3G). Together, these results suggest that the temporal patterns of neural activity, not only the momentary activity, are crucial for early detection of synchronous bursts.

Burst Predictability Reflects Network Structures Realized in Nonbursting Periods.

Next, we examined whether the predictability of synchronous bursts corresponds to any specific network structure defined by synaptic interactions. The structural correspondences of burst predictability were investigated in three steps (Fig. 4): we first reanalyzed the nonbursting spontaneous activities to infer the directed causal network structure; next, the inferred causal networks were verified by comparing them to synaptic interactions measured by an additional electrical stimulation experiment with the same preparations; finally, we related each neuron’s network connectivity with its burst prediction accuracy.

Fig. 4.

Network structures in nonbursting periods explain the local burst predictability. (A) Causal network analysis. The network depicts the directed pairwise interactions among neurons. The filled (orange) and open (blue) markers represent the putative excitatory and inhibitory cells, respectively. The thickness of the arrows indicates the absolute strength of causal coupling (“causality”). For clarity of illustration, only the top 5% strongest couplings are shown, and the nodes are positioned by multidimensional scaling with distances defined to be inversely proportional to the causality. The figure shows the result for Chip 1440. (B) Synaptically connected neuron pairs show larger causal interactions. The bar labels show the time span used for the causality analysis. Bars in dark and light blue: average cross-embedding values for cells that showed short-latency (dark, latency < 10 ms) and long-latency (light, latency ≥ 10 ms) stimulus-triggered spike increases. The average causality index within the time span [−100, 100] ms dissociated the postsynaptic cells from the other cells (P < 0.005, Wilcoxon rank sum test, independent samples). The error bar shows the SEM across neurons. (C) Identification of synaptic connectivity by the electrical stimulation experiment. The figure shows the peri-stimulus time histograms of three representative neurons (from top to bottom, putative direct, indirect, and nonpost cells, respectively). The asterisks indicate the earliest significant increase (P < 0.05) of spike count, compared with the spike-count distribution in the prestimulus period (from −500 to −100 ms). The spike histograms are smoothed with a boxcar kernel of 5-ms width for visualization purposes. (D) The relationship between burst predictability and interaction strength in different neuron types (putative excitatory and inhibitory neurons). The neurons were classified into two groups depending on the strength of input or output causal couplings: the top half of neurons was labeled “strong,” whereas the bottom half was labeled “weak.” The error bar shows the SEM across neurons.

In the causal network analysis, we applied a previously proposed method that is capable of detecting nonlinear coupling (31) (Methods). Theoretically, the method detects causal interactions including indirect ones (32). Applying this method to the nonbursting spontaneous activities of all neuron pairs yielded a matrix representing the pairwise, directed interactions within each preparation (Fig. 4A).

The relevance of the inferred causal couplings to synaptic interactions among neurons was verified using an independent dataset in which synaptic interactions were directly measured (Fig. 4 B and C). The present CMOS-based MEA system allowed us to stimulate the neural tissue local to given electrodes with submillisecond accuracy, directly activating a single neuron’s soma at high temporal precision with few artifacts (26) (Fig. S3). A subset of neurons was electrically stimulated while the evoked activities in the other neurons were monitored simultaneously. According to the evoked response latencies, we identified the short-latency (<10 ms) and long-latency (≥10 ms) downstream cells for each stimulated neuron (Fig. 4C), of which the latter were expected to be more likely to include multisynaptic interactions and thus weaker causal effects. As expected, we confirmed that the causality inferred from the nonbursting spontaneous data consistently reflected the identified short- and long-latency interactions (Fig. 4B). Specifically, the inferred causality differentiated the short- and long-latency downstream cells from the remaining population (P = 0.0001, sign test, data pooled across all of the preparations). This result also provides an empirical validation for causality inference with nonlinear state-space reconstructions such as CCM (31), by directly relating the causality extracted from the dynamics to that defined by the manipulation experiment. Based on these confirmations, we used the inferred causality as a proxy for the effective synaptic interactions in the network during the nonbursting periods.

Fig. S3.

Microstimulation-based estimation of synaptic connectivity in a pairwise manner. (A) Axonal stimulation on an arbitrary neuron elicited bidirectional action potential propagation. (B) Raw data of neural responses at a putative presynaptic neuron and postsynaptic neuron. Data from 100 trials are superimposed. (C) Raster plot of B. Antidromic, direct action potentials exhibited precise temporal responses, while orthodromic, synaptic action potentials were elicited stochastically with significant temporal jitters. (D) Poststimulus spike histograms of C.

Finally, we compared these causal network structures with the burst predictabilities, finding that the burst predictability of each neuron displayed a positive correlation with the average inferred causality between it and the other neurons (Fig. 4 D and E). We also classified putative excitatory and inhibitory neurons based on their spike waveforms (SI Methods, Fig. S4). Interestingly, the excitatory neurons predicted bursts better than the inhibitory neurons when we compared the neurons showing strong causality (P < 0.002, both for the causality of input and of output connections, rank sum test), but such cell-type dependence was unclear in neurons with weak interactions (P > 0.5, Fig. 4D). These results suggest that stronger network interaction via excitatory cells underlies higher burst predictability. Together, these results led us to conclude that the cell-to-cell variability in burst predictability is likely to reflect the heterogeneous network structures that shape the nonbursting activity dynamics.

Fig. S4.

Identification of excitatory and inhibitory neurons. (A) Representative neurons immunostained for MAP2 and GABA. The action potentials below the insets were recorded at the white rectangles, putatively from the neighboring neuron circled. The peak-to-peak time, Tpp, was defined as the duration from the negative peak to the positive peak of the action potential. (B) Histogram of Tpp. Excitatory neurons (green) had larger Tpp than inhibitory neurons (magenta). A k-means method applied to Tpp was used to separate excitatory and inhibitory neurons.

SI Methods

Cell Culturing.

All experimental protocols were approved by the Committee on the Ethics of Animal Experiments at the Research Center for Advanced Science and Technology, the University of Tokyo (Permit number: RAC130106).

The protocol for cell culturing has been reported previously (26). Briefly, E18 Wistar rat cortices were dissociated using trypsin and mechanical trituration. Next, 20–40 k/μL neurons and glia were seeded over an area of ~12 mm2 on top of the CMOS chip. Layers of poly(ethyleneimine) followed by laminin were used to adhere the cells. Plating media consisted of Neurobasal-B27 supplemented with 10% horse serum and 0.5 mM GlutaMAX during the first 24 h. Growth media consisted of DMEM supplemented with 10% horse serum, 0.5 mM GlutaMAX, and 1 mM sodium pyruvate. Experiments were conducted inside an incubator to control environmental conditions (36 °C and 5% CO2).

CMOS-Based Recording and Stimulation of Network Activity.

Cultured neuron activities were simultaneously recorded with high-density MEAs as described previously (26, 27). Cortical networks were grown over 11,011-electrode CMOS-based MEAs (52, 53), which provide enough spatial and temporal resolution to detect action potentials from any neuron lying on the array: a 1.8 × 2.0-mm2 area containing 8.2 × 5.8-μm2 electrodes with 17.8-μm pitch, sampled at 20 kHz. Subsets of 126 electrodes can be read out (and stimulated) at one time, and the electrode selection can be reconfigured within a few milliseconds. To identify the locations of neurons growing over the array, a sequence of about 100 recording configurations was scanned across the whole array while recording spontaneous activity. The locations of active somata could be identified because action potentials were usually detected on multiple nearby electrodes. The electrode selection was then reconfigured such that a single electrode was assigned to each identified soma, and spontaneous activities were measured simultaneously from all of these cells. The putative neuron types (excitatory or inhibitory) were estimated based on spike shapes (Fig. S4). We could record from 93, 47, 98, 92, and 53 neurons in Chips 1437, 1440, 1444, 2427, and 2440, respectively.

Microstimulation-elicited network activities were then investigated to characterize the pairwise synaptic strength between neurons. An adequate stimulating electrode was selected such that a single target neuron was directly activated through axonal stimulation and the directly evoked spikes were exclusively measured from the target cell. The directly evoked spikes could be easily distinguished from postsynaptic activations because they were very reliable (i.e., 100 spikes elicited out of 100 stimulation trials) and exhibited a small temporal jitter (Fig. S3). The microstimulation was a single, positive–negative, biphasic pulse with a charge-balanced amplitude of ±300–900 mV and a duration of 200 μs/phase, and was delivered 100 times every 3 s.

Burst Detection.

Bursts in spontaneous activity were detected by modifying a protocol previously established by the authors’ group (27). The previous study proposed determining the threshold for burst detection based on the distribution of ISIs of consecutive spikes (27). In bursting neurons, the ISI distribution typically has a bimodal structure whose valley can be used as an objective criterion for burst detection. In this study, we first computed sequences of population firing rate normalized to range from 0 to 1. We defined this as the sequence of global states b_t, where t is the time. This allowed us to use a common burst-detection threshold across different preparations of neuron cultures, which generally vary in their absolute firing rates. We used 5-ms bins for the firing rate computation. We next derived the distributions of inverse firing rate over the bins. This yielded an ISI distribution for each preparation, and we confirmed that all of them had bimodal shapes (Fig. 1D, Inset). For each preparation, we selected the smallest ISI located at the valley of the distribution as the burst-detection threshold. The burst periods were defined as sequences of at least three consecutive bins whose average ISIs fell below this threshold, identifying 26 bursts on average for each preparation. We defined the peak burst timing by selecting the center of the bin with the smallest average ISI (i.e., the largest global state) within each burst period.
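A minimal sketch of this procedure (Python/NumPy; the 5-ms bins and three-bin minimum follow the text, whereas the histogram-based valley search and helper names such as detect_bursts are illustrative assumptions):

```python
import numpy as np

def detect_bursts(pop_rate, min_len=3):
    """ISI-valley burst detection on the normalized population rate b_t (a sketch).

    pop_rate : population firing rate in 5-ms bins, normalized to [0, 1].
    Returns the rate threshold, (start, end) bin indices of burst periods,
    and the peak bin of each burst.
    """
    isi = 1.0 / np.maximum(pop_rate, 1e-12)          # inverse rate plays the role of an ISI
    hist, edges = np.histogram(np.log10(isi), bins=100)
    lo, hi = sorted(np.argsort(hist)[-2:])           # crude guess at the two modes
    valley = lo + np.argmin(hist[lo:hi + 1])         # valley of the assumed bimodal distribution
    rate_thresh = 1.0 / 10 ** edges[valley]          # equivalent threshold on b_t

    above = pop_rate > rate_thresh
    bursts, peaks, start = [], [], None
    for t, flag in enumerate(np.append(above, False)):
        if flag and start is None:
            start = t
        elif not flag and start is not None:
            if t - start >= min_len:                  # at least three consecutive bins
                bursts.append((start, t))
                peaks.append(start + int(np.argmax(pop_rate[start:t])))
            start = None
    return rate_thresh, bursts, peaks
```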

Estimation of Synaptic Connectivity from Electrical Stimulation.

Synaptic connectivity was estimated based on the evoked responses during the electrical stimulation experiment. To reliably stimulate a single neuron, we searched for a stimulation site around a target neuron in each MEA that could elicit an action potential exclusively at the target neuron. After this careful selection of stimulation sites, we could evoke action potentials with almost 100% probability for each stimulation, with very little jitter in the timing of the action potentials of the stimulated neurons. We first aligned the spike raster to the timing of stimulation. We next computed the sequence of firing rates x_{i,t} in the way described above. The firing rates before the stimulation (−500 ms < t < −100 ms) were used to produce the reference probability distribution of each neuron’s firing rate, P(x_i). Next, we computed the probability, P(x_i > x_{i,t}), for each time bin centered at t in a short poststimulation period (2.5 ms < t < 15 ms); the responses during 0 ms < t < 2.5 ms were omitted to eliminate the artifacts of electrical stimulation. We defined the smallest time t that satisfies P(x_i > x_{i,t}) < 0.01 in the poststimulation period as the latency of neuron i. The neurons that had at least one time bin satisfying this condition were defined as “downstream” cells of the stimulated neuron; the neurons that did not have any such time bin were defined as “non-downstream” cells.
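The latency criterion can be written compactly; the sketch below (Python/NumPy; the windows and the 0.01 criterion follow the text, while the use of trial-averaged rates per bin and names such as response_latency are illustrative assumptions):

```python
import numpy as np

def response_latency(rate, t_bins, pre=(-500.0, -100.0), post=(2.5, 15.0), alpha=0.01):
    """Earliest poststimulus time with P(x_i > x_{i,t}) < alpha, for one neuron (a sketch).

    rate   : (n_trials, n_bins) stimulus-aligned firing rates of neuron i.
    t_bins : bin centers in ms relative to stimulation onset.
    Returns the latency in ms, or None for a "non-downstream" cell.
    """
    pre_mask = (t_bins >= pre[0]) & (t_bins < pre[1])
    post_mask = (t_bins >= post[0]) & (t_bins < post[1])
    reference = rate[:, pre_mask].ravel()             # prestimulus reference distribution P(x_i)
    for t in np.flatnonzero(post_mask):
        x_t = rate[:, t].mean()                       # trial-averaged rate in this bin (assumption)
        if np.mean(reference > x_t) < alpha:          # empirical P(x_i > x_{i,t})
            return float(t_bins[t])
    return None
```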

According to the evoked response latencies, we identified the short-latency (<10 ms) and long-latency (≥10 ms) downstream cells for each stimulated neuron. The detected downstream cells of a stimulated neuron, including multisynaptic ones, comprised a relatively small fraction (17% on average) of the entire population. This indicates that they are subsets of neurons receiving effectively strong input from the stimulated cell via relatively short path lengths, although interactions with longer path lengths, which were not detected here, could involve a larger fraction of the cell population.

Causal Network Analysis.

Pairwise causal interactions among neurons were estimated based on a variant of CCM. CCM was developed by Sugihara et al. (31) as an extension of nonlinear time-series forecasting methods based on nearest-neighbor models (20, 54). The method is capable of detecting relatively weak causal interactions in deterministic dynamical systems (which can include some stochastic components) provided that the system and observations are deterministic, the embedding dimension is sufficiently large, and the data size is sufficiently large for the given embedding dimension. A variant of CCM that can be used for systems in which the variables have heterogeneous time scales (such as neural systems) was developed by the authors (32).

Suppose that we want to know the interaction from a neuron x to another neuron y. We first reconstruct the state-space trajectory of neuron x in the delay coordinates, x̄_t^{dmax} = (x_t, x_{t−τ}, …, x_{t−(dmax−1)τ}), where dmax represents the maximum number of dimensions (number of delay coordinates) to be considered, t is the time point, and τ is the unit delay length. We used τ = 5 ms and dmax = 8 in this study. In the embedding-based analyses, we convolved the firing-rate sequence of each neuron with a Gaussian kernel with a half width at half height of 25 ms, and normalized each neuron trace to range from 0 to 1. To avoid a known vulnerability of the state-space reconstruction to time-scale heterogeneity (55), we projected the delay vector x̄_t^{dmax} onto a randomized coordinate space by multiplying it from the left by a square random matrix, R, to obtain a transformed vector: x_t^{dmax} = R·x̄_t^{dmax}. A d-dimensional delay vector x_t^d is constructed by selecting the first d (≤ dmax) components of x_t^{dmax}.

The causal interaction from neuron x to neuron y is quantified by the correlation coefficient, ρ_{y_t, ŷ(x_t^d)}, between the true (y_t) and forecast (ŷ_t(x_t^d)) signals, where

ŷ_t(x_t^d) = Σ_{t′ s.t. x_{t′}^d ∈ B(x_t^d)} w(|x_{t′}^d − x_t^d|) y_{t′},

with B(x_t^d) the k-nearest-neighbor set of x_t^d in the delay-coordinate space. We set k = 4, the weight w(|x_{t′}^d − x_t^d|) = exp(−|x_{t′}^d − x_t^d|) / Σ_{t′ s.t. x_{t′}^d ∈ B(x_t^d)} exp(−|x_{t′}^d − x_t^d|), and |x_{t′}^d − x_t^d| as the squared distance between x_{t′}^d and x_t^d in the data analysis.

Note that ρ_{y_t, ŷ(x_t^d)} = 1 means perfect prediction. We selected the optimal dimension d = d* so as to maximize the prediction performance, ρ_{y_t, ŷ(x_t^d)}. Typical values of d* ranged from 2 to 4 in the current data.
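A compact sketch of this cross-mapping estimate (Python/NumPy; the random projection, exponential weights, k = 4, and the scan over d follow the description above, while the embedded delay_embed helper repeats the one sketched in the main text and the O(T²) distance matrix is an illustrative shortcut suitable only for short segments):

```python
import numpy as np

def delay_embed(series, d, tau):
    """Rows are delay vectors (s_t, s_{t-tau}, ..., s_{t-(d-1)tau})."""
    n = len(series) - (d - 1) * tau
    return np.column_stack([series[(d - 1 - k) * tau:(d - 1 - k) * tau + n] for k in range(d)])

def ccm_causality(x, y, d_max=8, tau=1, k=4, seed=0):
    """Cross-map strength from neuron x to neuron y (a sketch of the CCM variant).

    x, y : smoothed, [0, 1]-normalized firing-rate traces of the two neurons.
    Returns max over d <= d_max of corr(y_t, y_hat_t(x_t^d)).
    """
    rng = np.random.default_rng(seed)
    X_full = delay_embed(x, d_max, tau)                       # delay coordinates of x
    X_full = X_full @ rng.standard_normal((d_max, d_max))     # random projection R
    offset = (d_max - 1) * tau
    y_al = y[offset:offset + len(X_full)]                     # y_t aligned to the embedded rows

    best = -np.inf
    for d in range(2, d_max + 1):
        X = X_full[:, :d]                                     # first d randomized coordinates
        sq = ((X[None, :, :] - X[:, None, :]) ** 2).sum(-1)   # pairwise squared distances
        np.fill_diagonal(sq, np.inf)                          # exclude the query point itself
        nbrs = np.argsort(sq, axis=1)[:, :k]                  # k nearest neighbors B(x_t^d)
        w = np.exp(-np.take_along_axis(sq, nbrs, axis=1))
        w /= w.sum(axis=1, keepdims=True)                     # normalized exponential weights
        y_hat = (w * y_al[nbrs]).sum(axis=1)                  # cross-mapped estimate of y_t
        best = max(best, np.corrcoef(y_al, y_hat)[0, 1])      # rho_{y_t, y_hat(x_t^d)}
    return best
```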

Discussion

In this study, we have focused on the relationship between local and global neural properties by analyzing how single-cell activity predicts the population mean-field dynamics. This can also be interpreted as an implementation of the generalized Takens’ theorem with general observation functions of a dynamical system (33), subject to limited data size and noise (34). The method revealed heterogeneity across neurons in the predictability of the upcoming occurrence of a synchronous burst of the neural population, which is temporally distant on the order of 100 ms. The bursts were predicted accurately by the dynamics of some single neurons. Remarkably, such neurons even outperformed the population mean-field dynamics itself in terms of prediction accuracy, particularly when we extracted the information embedded in the temporal patterns of neural activity rather than the momentary firing rates. From the causal network analysis combined with the electrical stimulation experiment, the heterogeneity of burst predictability was explained by the structure of the neural network: “hub”-like neurons having stronger synaptic interactions with other cells can better predict the upcoming global network burst than others. This means that, even without observing any burst events or measuring burst predictability in each neuron, we can predict to some extent which neurons should be used to forecast the bursts by looking at the causal network. In addition, the present results based on cultured neurons indicate that the heterogeneous relationships between single neurons and population activity are basic properties of a nervous network that self-organizes without external stimuli. We also found that the excitatory neurons in the network hub predict bursts better than the inhibitory neurons. This could reflect a functional asymmetry between the cell types: the activity of specific excitatory neurons may be sufficient for a burst, thus making them good predictors. In contrast, inactivity among inhibitory neurons may be a necessary but not sufficient condition for a bursting event and thus less useful as a predictor.

Classic models of the synchronized population burst assume diverging interactions that propagate the activity of a cell to the other cells (4, 12–14). A variant of this model features “avalanche”-like bursting sequences, modeling the synchronous burst as noise being amplified and propagated through the network (9, 11, 35–39). These models, focusing mainly on the stochastic aspects of burst initiation and propagation, have succeeded in explaining a variety of statistical properties of bursting neural networks. On the other hand, it has been challenging to predict the occurrence of a synchronous burst, partially because bursts can be triggered by complex and heterogeneous mechanisms (40). The present study focuses on the deterministic aspects of the neural population dynamics, with which burst occurrence is predictable to some extent from preceding single-neuron activity. The existence of specific neurons predicting the upcoming burst is also consistent with the previous idea of “leader neurons” that fire at the beginning of bursts (8, 41). Notably, however, the predictor of the global burst is not identical to the increased activities of those single neurons expected from previous studies (15, 37, 41), because the magnitude of the firing rate alone is not sufficient to explain the highly accurate predictability based on the eCCM analysis (Fig. 3E). Rather, the earliest presages are likely to be embedded in the dynamical patterns of those neurons. On the other hand, not everything is predictable in a deterministic manner before burst occurrence. For example, the trial-to-trial fluctuation in burst size was not predicted from the preburst dynamics (ρ = −0.289 to −0.077, P > 0.4, Spearman’s rank correlation between the odd and even trials), suggesting that the burst size is determined by stochastic mechanisms (11, 39).

Finally, the predictability of upcoming burst occurrence from single-cell dynamics suggests a potentially effective methodology for capturing the early warnings of population-state transitions in a variety of networks, including epileptic seizures in the brain (5, 42), rumor propagation in social networks (43–45), and epidemic spreading in human networks (46, 47). For clinical purposes, for example, accurate prediction and characterization of pathologically synchronized neural firing is an important issue (5, 42). Although nonlinear features of field potential sequences have been proposed as useful markers of the epileptic state (42, 48–51), the predictability of global dynamics based on single-neuron activity has not been thoroughly studied. In particular, the present study suggests that a single neuron can even outperform direct observation of the global state itself in predicting the global-state dynamics. In principle, the same framework could be applied to study network events other than the synchronous burst, such as changes in more complex correlation patterns among neurons. The success of the present state-space reconstruction-based method implies a new approach to detecting the early warnings of global network-state transitions based on local rather than global features. It should be emphasized, however, that in vivo neural dynamics are generally much more complex; not all in vivo dynamics necessarily show the same properties as those observed in vitro. Nonetheless, our analytic protocol can be applied to in vivo experiments in a wide variety of systems. If some network events differ from others in the local early warnings detected via state-space reconstruction, this can imply distinct properties of the underlying dynamical systems.

Methods

Cell Culturing, Recording, and Causal Network Analysis.

All experimental protocols were approved by the Committee on the Ethics of Animal Experiments at the Research Center for Advanced Science and Technology, The University of Tokyo (Permit number: RAC130106). The cell culturing, recording, stimulation, and burst detection were based on protocols reported previously (26, 27). The causal networks in the nonbursting period were estimated using CCM (31). The details of the protocols are described in SI Methods.

Burst Predictability Measured by Event-Targeted CCM.

The present method borrows the technique used in CCM to quantify an early warning of a specific global event, rather than the average interactions among variables throughout long observations. The method extends/integrates the original CCM and related methods by (i) designing the analytic protocol to assess the predictability of a specific event (in our case, the spontaneous synchronous burst), (ii) relating the global state to local components, whereas the original protocol only quantified the relationships between local variables, and (iii) quantifying the predictability of a temporally distant global state based on local variable dynamics. This is done by finding a mapping between the system's global state $b_t$ and each neuron's activity $x_{t-\Delta t}$ with temporal lag $\Delta t$, instead of simultaneous prediction among individual neurons. For burst detection, we set $t$ as the time corresponding to the peak of the global state during a burst period, $t \in \{t_1, \dots, t_n\}$, where $n$ is the number of burst trials in the experiment. We first reconstruct the state-space trajectory for neuron $x$ and the normalized global state $b$ in the (randomized) delay coordinates, $\mathbf{x}^d_t = (x_t, x_{t-\tau}, \dots, x_{t-(d-1)\tau})$ and $\mathbf{b}^d_t = (b_t, b_{t-\tau}, \dots, b_{t-(d-1)\tau})$. It is mathematically shown that both of these trajectories reproduce the topological structure (the proximity of data points) of the attractor dynamics in which they participate, with which a (near-) future state of each variable can be accurately predicted from the current state of the other variables, provided that the dynamics (approximately) follow deterministic mechanisms (29–32). Practically, however, the accuracy of prediction is limited by various factors, including data size, process/observation noise, and the uncertainty due to asymmetric causal interactions.
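To make the embedding step concrete, the following is a minimal sketch in Python (not the authors' code; function names, parameters, and the placeholder data are our own assumptions) of how a scalar activity trace can be mapped into the delay coordinates described above:

```python
import numpy as np

def delay_embed(x, d, tau):
    """Map a 1-D time series x into d-dimensional delay coordinates
    x_t^d = (x_t, x_{t-tau}, ..., x_{t-(d-1)tau}).
    Row i of the output corresponds to time index i + (d-1)*tau of x."""
    x = np.asarray(x, dtype=float)
    start = (d - 1) * tau                 # first time index with a full history
    n = len(x) - start                    # number of embeddable time points
    return np.column_stack([x[start - k * tau : start - k * tau + n]
                            for k in range(d)])

# Example with placeholder data: embed a single-neuron trace and the
# normalized global state with the same (hypothetical) d and tau.
rng = np.random.default_rng(0)
x_trace = rng.random(1000)                # placeholder single-neuron activity
b_trace = rng.random(1000)                # placeholder normalized global state
Xd = delay_embed(x_trace, d=5, tau=2)     # shape (992, 5)
Bd = delay_embed(b_trace, d=5, tau=2)
```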

The forecast for the global state $\hat{b}_t$ based on a single neuron $x$ with time lag $\Delta t$ is given by
$$\hat{b}_t\!\left(\mathbf{x}^d_{t-\Delta t}\right) = \sum_{t' \,\mathrm{s.t.}\, \mathbf{x}^d_{t'-\Delta t} \in B\left(\mathbf{x}^d_{t-\Delta t}\right)} w\!\left(\left|\mathbf{x}^d_{t'-\Delta t} - \mathbf{x}^d_{t-\Delta t}\right|\right) b_{t'},$$

with the k-nearest-neighbor set $B(\mathbf{x}^d_{t-\Delta t})$ of $\mathbf{x}^d_{t-\Delta t}$, in the same manner as in CCM. In the data analysis we used k = 4 and the same weight function,
$$w\!\left(\left|\mathbf{x}^d_{t'-\Delta t} - \mathbf{x}^d_{t-\Delta t}\right|\right) = \frac{\exp\!\left(-\left|\mathbf{x}^d_{t'-\Delta t} - \mathbf{x}^d_{t-\Delta t}\right|\right)}{\sum_{t' \,\mathrm{s.t.}\, \mathbf{x}^d_{t'-\Delta t} \in B\left(\mathbf{x}^d_{t-\Delta t}\right)} \exp\!\left(-\left|\mathbf{x}^d_{t'-\Delta t} - \mathbf{x}^d_{t-\Delta t}\right|\right)}.$$
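A minimal sketch of this weighted nearest-neighbor forecast is given below. It implements the two equations above as written; the function and variable names are our own, and the use of SciPy's cKDTree for the neighbor search is our choice rather than part of the original protocol.

```python
import numpy as np
from scipy.spatial import cKDTree

def eccm_forecast(library_X, library_b, query_x, k=4):
    """Forecast the global state b_t from a single neuron's delay vector
    taken Delta-t earlier, by exponentially weighting the b values of the
    k nearest library points (the weight function shown above).
    library_X : (N, d) delay vectors of the neuron at the library times.
    library_b : (N,)  global-state values b_{t'} paired with library_X.
    query_x   : (d,)  delay vector of the neuron at time t - Delta t."""
    library_b = np.asarray(library_b, dtype=float)
    dist, idx = cKDTree(library_X).query(query_x, k=k)
    w = np.exp(-dist)
    w /= w.sum()                          # normalize the weights
    return float(np.dot(w, library_b[idx]))
```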

The success rate in burst detection was defined as the fraction of burst trials for which $\hat{b}_t > \theta_x$, where $\theta_x$ is the criterion for detecting an upcoming burst based on neuron $x$'s activity sequence. In the present study, we defined $\theta_x$ as the 95th percentile of the null-hypothesis distribution $P\!\left(\hat{b}_{t_{\mathrm{Null}}}\!\left(\mathbf{x}^d_{t_{\mathrm{Null}}-\Delta t}\right)\right)$, where $t_{\mathrm{Null}}$ is a randomly selected time outside the bursting periods. Note that this detection criterion is different from the threshold used for defining the burst period (for details of the burst period definition, see SI Methods). The burst predictability of each cell was quantified by the rate of trials in which the upcoming bursts were successfully predicted, where the success rate was summarized as an average over the time span $-200\ \mathrm{ms} < \Delta t < -50\ \mathrm{ms}$.

Similarly, the forecast based on the history of the global state itself is given by the same protocol, except for using $b$ instead of $x$:
$$\hat{b}_t\!\left(\mathbf{b}^d_{t-\Delta t}\right) = \sum_{t' \,\mathrm{s.t.}\, \mathbf{b}^d_{t'-\Delta t} \in B\left(\mathbf{b}^d_{t-\Delta t}\right)} w\!\left(\left|\mathbf{b}^d_{t'-\Delta t} - \mathbf{b}^d_{t-\Delta t}\right|\right) b_{t'}.$$

This quantifies the self-predictability of the burst based on the global state itself, through the success rate computed with the corresponding threshold $\theta_b$, the 95th percentile of the null-hypothesis distribution $P\!\left(\hat{b}_{t_{\mathrm{Null}}}\!\left(\mathbf{b}^d_{t_{\mathrm{Null}}-\Delta t}\right)\right)$.
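Under the same assumptions as in the sketches above, the detection criterion and the resulting success rate could be computed as follows; applying the same routine to forecasts built from $\mathbf{b}^d_{t-\Delta t}$ (with threshold $\theta_b$) gives the self-predictability baseline, and averaging the rate over $-200\ \mathrm{ms} < \Delta t < -50\ \mathrm{ms}$ gives the per-neuron burst predictability described above.

```python
import numpy as np

def burst_success_rate(forecasts_at_bursts, forecasts_at_null, pct=95.0):
    """Detection threshold and success rate as described above.
    forecasts_at_bursts : forecasts made Delta-t before each burst peak.
    forecasts_at_null   : forecasts made at randomly chosen non-bursting
                          times (the null-hypothesis distribution).
    theta is the pct-th percentile of the null distribution; the success
    rate is the fraction of burst trials whose forecast exceeds theta."""
    theta = np.percentile(forecasts_at_null, pct)
    hits = np.asarray(forecasts_at_bursts) > theta
    return hits.mean(), theta
```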

Acknowledgments

The authors thank Dr. Urs Frey and Prof. Andreas Hierlemann for providing the CMOS-based MEA and for their technical support. This work was supported by Japan Science and Technology Agency PRESTO Grant JPMJPR16E6 (to S.T.); the Asahi Glass Foundation (H.T.); the Kayamori Foundation of Information and Science Advancement (H.T.); KAKENHI Grants 26242040, 16H01604, and 16K14191 (H.T.); Brain/MINDS from the Japan Agency for Medical Research and Development (T.T.); and the RIKEN Brain Science Institute (T.T.).

Footnotes

  • 1To whom correspondence should be addressed. Email: satohiro.tajima{at}gmail.com.
  • Author contributions: S.T., T.M., H.T., and T.T. designed research; T.M., D.J.B., and H.T. contributed new reagents/analytic tools; S.T. and T.T. analyzed data; and S.T., T.M., H.T., and T.T. wrote the paper.

  • The authors declare no conflict of interest.

  • This article is a PNAS Direct Submission.

  • This article contains supporting information online at www.pnas.org/lookup/suppl/doi:10.1073/pnas.1705981114/-/DCSupplemental.

Freely available online through the PNAS open access option.

References

  1. ?
    .
  2. ?
    .
  3. ?
    .
  4. ?
    .
  5. ?
    .
  6. ?
    .
  7. ?
    .
  8. ?
    .
  9. ?
    .
  10. ?
    .
  11. ?
    .
  12. ?
    .
  13. ?
    .
  14. ?
    .
  15. ?
    .
  16. ?
    .
  17. ?
    .
  18. ?
    .
  19. ?
    .
  20. ?
    .
  21. ?
    .
  22. ?
    .
  23. ?
    .
  24. ?
    .
  25. ?
    .
  26. ?
    .
  27. ?
    .
  28. ?
    .
  29. ?
    .
  30. ?
    .
  31. ?
    .
  32. ?
    .
  33. ?
    .
  34. ?
    .
  35. ?
    .
  36. ?
    .
  37. ?
    .
  38. ?
    .
  39. ?
    .
  40. ?
    .
  41. ?
    .
  42. ?
    .
  43. ?
    .
  44. ?
    .
  45. ?
    .
  46. ?
    .
  47. ?
    .
  48. ?
    .
  49. ?
    .
  50. ?
    .
  51. ?
    .
  52. ?
    .
  53. ?
    .
  54. ?
    .
  55. ?
    .
