# Locally embedded presages of global network bursts

1. (a) Department of Basic Neuroscience, University of Geneva, Centre Médical Universitaire, Genève 1211, Switzerland;
2. (b) Precursory Research for Embryonic Science and Technology (PRESTO), Japan Science and Technology Agency, Kawaguchi, Saitama 332-0012, Japan;
3. (c) RIKEN Brain Science Institute, Wako, Saitama 351-0198, Japan;
4. (d) Graduate School of Information Science and Technology, The University of Tokyo, Bunkyo-ku, Tokyo 113-8656, Japan;
5. (e) Department of Biosystems Science and Engineering, ETH Zurich, Basel 4058, Switzerland;
6. (f) Research Center for Advanced Science and Technology, The University of Tokyo, Meguro-ku, Tokyo 153-8904, Japan

Edited by Terrence J. Sejnowski, Salk Institute for Biological Studies, La Jolla, CA, and approved July 28, 2017 (received for review April 14, 2017)

Fig. S1.

Population activity dynamics visualized with principal component analysis (PCA). The trajectories of population activity were plotted within the space of the top three principal components (PC1 to PC3), obtained by applying PCA to the multineuron time series of each preparation. The results for three representative preparations are shown.
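The PCA projection described in this caption can be sketched in a few lines of plain NumPy. The function name and arguments below are illustrative, not from the paper; this is a minimal sketch assuming the input is a (time × neurons) firing-rate matrix.

```python
import numpy as np

def pca_trajectory(rates, n_components=3):
    """Project a multineuron time series (time x neurons) onto its top
    principal components, giving a population-activity trajectory."""
    X = rates - rates.mean(axis=0)              # centre each neuron's rate
    # SVD of the centred data yields the principal axes in `vt`
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[:n_components].T              # trajectory in PC space
```

Plotting the three returned columns against each other reproduces the kind of 3-D trajectory shown in Fig. S1.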

Fig. 2.

eCCM analysis relates the global-network events to the preceding local states through the state-space reconstruction. (A) Forecasting by a local signal. In each box, the left and right trajectories depict the delay-coordinate reconstruction (see the text) of the mean field (global state) and of a single-neuron activity (local state), respectively. A representative neuron (Cell 1) is shown. Dark-blue dots: the time points of peak global activities during the bursts; magenta dots: the temporally preceding states from the individual global-activity peaks (here, shift time = −100 ms); red dots: the neighboring states within the reconstructed state space for Cell 1 (10 nearest neighbors for each burst peak); cyan cross: the output of the present protocol (i.e., the predicted mean field). A single neuron’s ability to predict each burst event is quantified in five steps: (i) selecting a time point t on a local-state trajectory (say, of neuron i) that corresponds to a peak population activity found in the global-state trajectory (which is a target bursting state to be predicted), (ii) tracing back the local-state trajectory for a given time span, Δt, (iii) collecting “neighbor” time points corresponding to states near t − Δt in the local-state trajectory (avoiding points temporally too near), (iv) mapping the neighbor time points back into the global-state trajectory, and (v) forwarding the time by Δt on the global-state trajectory to obtain the global-state prediction (the cyan cross). When the global-state prediction deviates from the nonbursting-state distribution by having a significantly large value, we defined it as a successful detection of a burst (refer to Methods for formal descriptions). The accuracy of burst prediction is higher when the traced-back states (the red dots) deviate more from those in the nonbursting period of the local trajectory.
Although the combination of tracing time in the retrograde direction by Δt (step ii) and mutual mapping between global and local states (steps i and iv) is a newly introduced feature of this study, step v alone could be regarded as a variant of the previously proposed forecasting methods based on simplex projection (20). On the other hand, if steps iii–v alone are applied (i.e., without any constraint on the targeted event, unlike our current protocol with steps i and ii), the procedure reduces to the CCM algorithm including time delays (23, 31), but applied to cross-scale predictions. (B) Forecasting by a global signal. The same as in A, except that the global state itself is used instead of the local state.
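The five steps above can be sketched as follows. This is a simplified illustration under stated assumptions, not the authors' implementation: time is in discrete samples, the embedding uses dimension 3 with unit lag, and `eccm_predict_burst`, its arguments, and the fixed exclusion window for temporally near points are all hypothetical choices.

```python
import numpy as np

def eccm_predict_burst(local, global_, peak_times, dt, dim=3, tau=1, k=10):
    """Sketch of the five eCCM steps (hypothetical helper, not the paper's code).

    For each global burst peak: (i) locate the matching time on the local
    delay-coordinate trajectory, (ii) step back by dt samples, (iii) find the
    k nearest neighbours of that traced-back local state, (iv) map the
    neighbour times onto the global trajectory, and (v) advance them by dt
    and average the global values to predict the burst amplitude.
    """
    local = np.asarray(local, dtype=float)
    global_ = np.asarray(global_, dtype=float)
    offset = (dim - 1) * tau                      # first valid embedded time
    n = len(local) - offset
    emb = np.column_stack([local[offset - j * tau : offset - j * tau + n]
                           for j in range(dim)])  # local delay vectors
    preds = []
    for t in peak_times:                          # (i) burst peak time
        i0 = (t - dt) - offset                    # (ii) trace back by dt
        if i0 < 0 or i0 >= n:
            continue
        d = np.linalg.norm(emb - emb[i0], axis=1) # (iii) neighbour search
        d[max(0, i0 - 2):i0 + 3] = np.inf         # skip temporally near points
        nbrs = np.argsort(d)[:k] + offset         # neighbour time indices
        fwd = nbrs + dt                           # (iv)+(v) map and advance
        fwd = fwd[fwd < len(global_)]
        if len(fwd):
            preds.append(global_[fwd].mean())     # predicted global state
    return np.array(preds)
```

A prediction counts as a burst detection when it exceeds the nonbursting-state distribution, as the caption describes.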

Fig. 3.

Presages of global bursts within single neurons. (A) Burst detection with the eCCM based on a single representative neuron. The black trace with the gray shade represents the median and quartiles of the estimated global states, b̂_t, in individual trials around burst peaks. The burst-detection threshold (dashed green line) was defined as the 95th percentile of the estimated global activity outside the bursting period. The red curve shows the fraction (“hit rate”) of successfully detected bursts across all burst trials. The curves are smoothed with a 250-ms boxcar kernel for visualization. Δt takes a negative value when predicting and a positive value when “postdicting” burst events. (B) Burst detection with the eCCM based on the mean-field activity of the population. (C) Burst detection with the momentary firing rate (FR) of the same representative single neuron as in A. (D) Burst detection with the momentary mean-field firing rate. (E) Burst predictability is a robust property of individual neurons. Dividing the data into two halves (the odd and even trials of spontaneous bursts) shows that the variability in burst-detection success rate across neurons is highly consistent over time. Each marker corresponds to a neuron. The filled markers represent the results using the mean-field activity. The success rate was averaged over the range of time spans −200 ms < Δt < −50 ms. (F) Burst detection with the eCCM is more accurate than that based on the momentary firing rate. The comparison of success rates in burst detection with the eCCM and with the firing-rate-based method. (Inset) The control analysis in which we compared the success rates of the eCCM with those of the firing-rate-based method using multiple time bins, matching the number of bins used in the eCCM.
The gray bars show the fraction of cells across all preparations. The arrowheads indicate the medians. The colored lines show the fractions in the individual preparations. (G) A single cell can outperform the mean field at predicting spontaneous network bursts, particularly when we use the information in the temporal patterns of neural activity rather than the momentary activity. The light- and dark-blue curves show the success rates in burst detection with single neurons relative to that with the mean field, for the eCCM and the firing-rate-based analysis, respectively. For each method, the cells were sorted by success rate.
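The threshold-and-hit-rate computation in panel A is straightforward to express. The sketch below assumes one predicted global-state value per burst trial and a separate sample of nonbursting baseline activity; `burst_hit_rate` and its signature are illustrative names, not from the paper.

```python
import numpy as np

def burst_hit_rate(estimates, baseline, pct=95.0):
    """Fraction of burst trials detected above the baseline threshold.

    `estimates`: predicted global activity, one value per burst trial.
    `baseline`: estimated global activity outside the bursting periods.
    A trial counts as a hit when its estimate exceeds the `pct`-th
    percentile of the nonbursting baseline (the dashed green line in Fig. 3A).
    """
    thresh = np.percentile(baseline, pct)
    hits = np.asarray(estimates) > thresh
    return hits.mean(), thresh
```

Sweeping Δt and recomputing this hit rate for each shift traces out the red curves in panels A–D.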

Fig. S2.

Delay-embedding theorem (known as Takens’ theorem). Suppose that we have the time series of three variables, (x(t), y(t), z(t)), generated by some differential equations: ẋ(t) = f(x(t), y(t), z(t)), ẏ(t) = g(x(t), y(t), z(t)), and ż(t) = h(x(t), y(t), z(t)). (A) The evolution of the global state (x(t), y(t), z(t)) is represented by a trajectory in the space of those three variables. (B) Consider observing the temporal sequence of a single variable, e.g., x(t). (C) The attractor topology in the original global-state space is fully recovered in a delay coordinate of the observed variable, (x(t), x(t − τ), x(t − 2τ)), with an arbitrary unit delay, τ. Namely, we can construct a smooth one-to-one map from the reconstructed attractor to the original attractor.
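The delay-coordinate construction used throughout these figures reduces to a single array operation. The helper below is a generic sketch (name and defaults are illustrative): row t of the output is the delay vector (x(t), x(t − τ), ..., x(t − (dim − 1)τ)).

```python
import numpy as np

def delay_embed(x, dim=3, tau=1):
    """Reconstruct a delay-coordinate state space from a scalar time series.

    Returns an array of shape (len(x) - (dim - 1) * tau, dim) whose rows
    are the delay vectors (x(t), x(t - tau), ..., x(t - (dim - 1) * tau)).
    """
    x = np.asarray(x, dtype=float)
    n = len(x) - (dim - 1) * tau
    if n <= 0:
        raise ValueError("time series too short for this embedding")
    # column j holds x(t - j * tau) for every valid t
    return np.column_stack([x[(dim - 1 - j) * tau : (dim - 1 - j) * tau + n]
                            for j in range(dim)])
```

For dim = 3 this yields exactly the (x(t), x(t − τ), x(t − 2τ)) coordinates shown in panel C.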

Fig. 4.

Network structures in nonbursting periods explain the local burst predictability. (A) Causal network analysis. The network depicts the directed pairwise interactions among neurons. The filled (orange) and open (blue) markers represent the putative excitatory and inhibitory cells, respectively. The thickness of the arrows indicates the absolute strength of causal coupling (“causality”). For clarity of illustration, only the top 5% strongest couplings are shown, and the nodes are positioned by multidimensional scaling with distances defined to be inversely proportional to the causality. The figure shows the result for Chip 1440. (B) Synaptically connected neuron pairs show larger causal interaction. The bar labels show the time span used for the causality analysis. Bars in dark and light blue: average cross-embedding values for cells that showed a short-latency (dark, latency < 10 ms) or long-latency (light, latency > 10 ms) stimulus-triggered spike increase. The average causality index within the time span [−100, 100] ms dissociated the postsynaptic cells from the other cells (P < 0.005, Wilcoxon rank-sum test, independent samples). The error bar shows the SEM across neurons. (C) Identification of synaptic connectivity by the electrical-stimulation experiment. The figure shows the peristimulus time histograms of three representative neurons (from top to bottom, putative direct, indirect, and nonpost cells, respectively). The asterisks indicate the earliest significant increase (P < 0.05) in spike count, compared with the spike-count distribution in the prestimulus period (from −500 to −100 ms). The figure shows the spike histogram smoothed with a boxcar kernel of 5-ms width for visualization purposes. (D) The relationship between burst predictability and interaction strength for different neuron types (putative excitatory and inhibitory neurons). The neurons were classified into two groups depending on the strength of their input or output causal couplings: the top half of neurons was labeled “strong,” and the bottom half was labeled “weak.” The error bar shows the SEM across neurons.
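A pairwise causality index of the cross-mapping kind can be sketched as follows. This is a deliberately crude simplification of CCM-style cross-embedding, not the paper's causality measure: it embeds one signal, finds nearest neighbours in that reconstructed space, cross-maps them onto the other signal, and scores the prediction with a Pearson correlation. All names and defaults are illustrative.

```python
import numpy as np

def cross_map_skill(x, y, dim=3, tau=1, k=4):
    """Crude CCM-style index: how well states reconstructed from x
    predict y (hypothetical simplification, not the paper's method)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    offset = (dim - 1) * tau
    n = len(x) - offset
    emb = np.column_stack([x[offset - j * tau : offset - j * tau + n]
                           for j in range(dim)])   # delay vectors of x
    y_true, y_hat = [], []
    for i in range(n):
        d = np.linalg.norm(emb - emb[i], axis=1)
        d[i] = np.inf                              # exclude the point itself
        nbrs = np.argsort(d)[:k]
        y_hat.append(y[nbrs + offset].mean())      # cross-map neighbours onto y
        y_true.append(y[i + offset])
    return float(np.corrcoef(y_true, y_hat)[0, 1]) # skill in [-1, 1]
```

Evaluating such an index for every ordered neuron pair yields a directed coupling matrix like the one visualized in panel A; a median split of each neuron's input or output coupling then gives the "strong" versus "weak" groups of panel D.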

Fig. S3.

Microstimulation-based estimation of synaptic connectivity in a pairwise manner. (A) Axonal stimulation of an arbitrary neuron elicited bidirectional action-potential propagation. (B) Raw data of neural responses at a putative presynaptic neuron and a postsynaptic neuron. Data from 100 trials are superimposed. (C) Raster plot of B. Antidromic, direct action potentials exhibited temporally precise responses, whereas orthodromic, synaptic action potentials were elicited stochastically with significant temporal jitter. (D) Poststimulus spike histograms of C.
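The poststimulus spike histograms in panel D (and the boxcar-smoothed histograms of Fig. 4C) can be computed as below. This is a generic PSTH sketch under assumed conventions (times in ms, 1-ms bins, a 5-ms boxcar for display); the function and parameter names are not from the paper.

```python
import numpy as np

def psth(spike_times_per_trial, t_start, t_stop, bin_ms=1.0, boxcar_ms=5.0):
    """Peri-stimulus time histogram in spikes/s, boxcar-smoothed for display.

    `spike_times_per_trial`: list of per-trial spike-time arrays (ms,
    relative to stimulus onset).  Returns (smoothed rate, bin left edges).
    """
    edges = np.arange(t_start, t_stop + bin_ms, bin_ms)
    counts = np.zeros(len(edges) - 1)
    for spikes in spike_times_per_trial:
        counts += np.histogram(spikes, bins=edges)[0]
    # convert summed counts to a trial-averaged firing rate in spikes/s
    rate = counts / (len(spike_times_per_trial) * bin_ms / 1000.0)
    w = max(1, int(round(boxcar_ms / bin_ms)))
    return np.convolve(rate, np.ones(w) / w, mode="same"), edges[:-1]
```

Temporally precise antidromic spikes produce a single sharp PSTH peak, whereas jittered orthodromic spikes spread over several bins, which is the contrast panels C and D illustrate.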

Fig. S4.

Identification of excitatory and inhibitory neurons. (A) Representative neurons immunostained for MAP2 and GABA. The action potentials shown below the insets were recorded at the white rectangles, putatively from the neighboring neuron marked with a circle. The peak-to-peak time, Tpp, was defined as the duration from the negative peak to the positive peak of the action potential. (B) Histogram of Tpp. Excitatory neurons (green) had larger Tpp than inhibitory neurons (magenta). K-means clustering on Tpp was used to separate excitatory from inhibitory neurons.
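The two-cluster split of Tpp values can be reproduced with a one-dimensional k-means, sketched below in plain NumPy. The deterministic min/max initialization and the function name are illustrative choices, not the paper's implementation; only the rule that the larger-Tpp cluster is labeled putative excitatory follows the caption.

```python
import numpy as np

def split_by_tpp(tpp, iters=50):
    """Two-cluster 1-D k-means on peak-to-peak times (Tpp): the larger-Tpp
    cluster is labeled putative excitatory, the smaller inhibitory."""
    tpp = np.asarray(tpp, dtype=float)
    centers = np.array([tpp.min(), tpp.max()])    # deterministic init
    for _ in range(iters):
        # assign each Tpp value to its nearest cluster centre
        labels = np.abs(tpp[:, None] - centers[None, :]).argmin(axis=1)
        for c in (0, 1):
            if np.any(labels == c):
                centers[c] = tpp[labels == c].mean()
    exc = centers.argmax()                        # broader spikes: excitatory
    return np.where(labels == exc, "excitatory", "inhibitory")
```
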
