Exception thrown when a function in spykeutils encounters a problem that is not covered by standard exceptions.
When using Spyke Viewer, these exceptions are caught and shown in the GUI, while general exceptions are not caught (and therefore remain visible in the console) for easier debugging.
Return a list of analog signals for an analog signal array.
If signal_array is attached to a recording channel group with exactly as many channels as there are channels in signal_array, each created signal will be assigned the corresponding channel. If the attached recording channel group has only one recording channel, all created signals will be assigned to this channel. In all other cases, the created signals will not have a reference to a recording channel.
Note that while the created signals may have references to a segment and channels, the relationships in the other direction are not automatically created (the signals are not attached to the recording channel or segment). Other properties like annotations are not copied or referenced in the created analog signals.
Parameters: signal_array (neo.core.AnalogSignalArray) – An analog signal array from which the neo.core.AnalogSignal objects are constructed.
Returns: A list of analog signals, one for every channel in signal_array.
Return type: list
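The per-channel split can be sketched without neo at all. The following is a minimal illustration in plain Python, assuming a samples-by-channels layout (the actual neo array layout and class names may differ):

```python
# Hypothetical sketch of the conversion idea: split a samples-x-channels
# matrix into one flat list per channel. Not the actual spykeutils API.

def signals_from_array(samples):
    """Split a samples-x-channels matrix into one list per channel."""
    if not samples:
        return []
    n_channels = len(samples[0])
    # Collect the value of every sample for each channel index.
    return [[row[ch] for row in samples] for ch in range(n_channels)]
```

For example, three samples of a two-channel array yield two signals of three samples each.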
Return a list of epochs for an epoch array.
Note that while the created epochs may have references to a segment, the relationships in the other direction are not automatically created (the epochs are not attached to the segment). Other properties like annotations are not copied or referenced in the created epochs.
Parameters: epoch_array (neo.core.EpochArray) – An epoch array from which the neo.core.Epoch objects are constructed.
Returns: A list of epochs, one for each epoch in epoch_array.
Return type: list
Return a list of events for an event array.
Note that while the created events may have references to a segment, the relationships in the other direction are not automatically created (the events are not attached to the segment). Other properties like annotations are not copied or referenced in the created events.
Parameters: event_array (neo.core.EventArray) – An event array from which the neo.core.Event objects are constructed.
Returns: A list of events, one for each event in event_array.
Return type: list
Return a list of spikes for a spike train.
Note that while the created spikes have references to the same segment and unit as the spike train, the relationships in the other direction are not automatically created (the spikes are not attached to the unit or segment). Other properties like annotations are not copied or referenced in the created spikes.
Parameters:
Returns: A list of neo.core.Spike objects, one for every spike in spike_train.
Return type: list
Return a spike train for a list of spikes.
All spikes must have an identical left sweep, the same unit and the same segment, otherwise a SpykeException is raised.
Note that while the created spike train has references to the same segment and unit as the spikes, the relationships in the other direction are not automatically created (the spike train is not attached to the unit or segment). Other properties like annotations are not copied or referenced in the created spike train.
Parameters:
Returns: All elements of spikes as a spike train.
Return type: neo.core.SpikeTrain
Return (cross-)correlograms from a dictionary of spike train lists for different units.
Parameters:
Returns: Two values: a dictionary (with the same indices as trains) of correlograms and the bin borders.
Return type: dict, Quantity 1D
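The core of a cross-correlogram is a histogram of pairwise spike time differences up to a maximum lag. A minimal sketch in plain Python (the bin-edge convention and the dictionary bookkeeping of the real function are assumptions):

```python
# Hypothetical sketch: histogram of pairwise spike time differences.
# Bin edges run from -max_lag to +max_lag in steps of bin_size.

def cross_correlogram(train_a, train_b, bin_size, max_lag):
    """Return (counts, edges) for time differences t_b - t_a within +/- max_lag."""
    n_bins = int(round(2 * max_lag / bin_size))
    edges = [-max_lag + i * bin_size for i in range(n_bins + 1)]
    counts = [0] * n_bins
    for ta in train_a:
        for tb in train_b:
            d = tb - ta
            if -max_lag <= d < max_lag:
                counts[int((d + max_lag) / bin_size)] += 1
    return counts, edges
```

With train_a == train_b this yields an auto-correlogram (including the zero-lag self pairs, which the real implementation may exclude).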
Bases: exceptions.Exception
This is raised when a user cancels a long-running operation. It is used by ProgressIndicator and its descendants.
Bases: object
Base class for classes indicating progress of a long operation.
This class does not implement any of the methods and can be used as a dummy if no progress indication is needed.
Signal that the operation starts.
Parameters: title (string) – The name of the whole operation.
Set status description.
Parameters: new_status (string) – A description of the current status.
Decorator for functions that should ignore a raised CancelException and simply return nothing in this case.
Return a list of spike trains aligned to an event (the event will be time 0 on the returned trains).
Parameters:
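The alignment itself is just a per-train time shift. A minimal sketch with plain lists of spike times (the real function works on neo.core.SpikeTrain objects and also shifts t_start and t_stop, which this sketch omits):

```python
# Hypothetical sketch: shift each train so its event time becomes 0.

def align_to_event(trains, event_times):
    """Subtract each train's event time from all of its spike times."""
    return [[t - ev for t in train] for train, ev in zip(trains, event_times)]
```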
Return dictionary of binned rates for a dictionary of spike train lists.
Deprecated since version 0.3.0.
Use tools.bin_spike_trains() instead.
Parameters:
Returns: A dictionary (with the same indices as trains) of lists of spike train counts and the bin borders.
Return type: dict, Quantity 1D
Return a superposition of a list of spike trains.
Parameters: trains (iterable) – A list of neo.core.SpikeTrain objects.
Returns: A spike train object containing all spikes of the given spike trains.
Return type: neo.core.SpikeTrain
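Superposition amounts to merging several sorted spike time sequences into one sorted sequence. A minimal sketch using the standard library (the real function additionally handles units and t_start/t_stop):

```python
import heapq

# Hypothetical sketch: merge sorted spike trains into one sorted train.

def collapse_spike_trains(trains):
    """Merge several sorted spike trains into one sorted spike train."""
    return list(heapq.merge(*trains))
```

heapq.merge is used here because it merges already-sorted inputs in linear time instead of re-sorting the concatenation.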
Computes the maximum starting time and minimum end time that all given spike trains share. This yields the shortest interval shared by all spike trains.
Deprecated since version 0.3.0.
Use tools.minimum_spike_train_interval() instead.
Parameters: trains (dict) – A dictionary of sequences of neo.core.SpikeTrain objects.
Returns: Maximum shared start time and minimum shared stop time.
Return type: Quantity scalar, Quantity scalar
Return the optimal kernel size for a spike density estimation of a spike train for a Gaussian kernel. This function takes a single spike train, which can be a superposition of multiple spike trains (created with collapsed_spike_trains()) that should be included in a spike density estimation.
Implements the algorithm from (Shimazaki, Shinomoto. Journal of Computational Neuroscience. 2010).
Parameters:
Returns: Best of the given kernel sizes.
Return type: Quantity scalar
Return dictionary of peri-stimulus time histograms for a dictionary of spike train lists.
Parameters:
Returns: A dictionary (with the same indices as trains) of arrays containing counts (or rates if rate_correction is True) and the bin borders.
Return type: dict, Quantity 1D
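A PSTH for one spike train list is the per-bin spike count averaged over trials, optionally converted to a rate by dividing by the bin width. A minimal sketch in plain Python (parameter names and edge conventions are assumptions, not the actual API):

```python
# Hypothetical sketch of a peri-stimulus time histogram for one list of
# spike trains (trials). Counts spikes per bin, averages over trials,
# and optionally divides by the bin width to obtain rates.

def psth(trains, bin_size, t_start, t_stop, rate_correction=True):
    n_bins = int((t_stop - t_start) / bin_size)
    counts = [0.0] * n_bins
    for train in trains:
        for t in train:
            if t_start <= t < t_stop:
                counts[int((t - t_start) / bin_size)] += 1
    scale = len(trains) * (bin_size if rate_correction else 1.0)
    return [c / scale for c in counts]
```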
Create a spike density estimation from a dictionary of lists of spike trains.
The spike density estimation gives an estimate of the instantaneous firing rate. The density estimation is evaluated at 1024 equally spaced points covering the range of the input spike trains. Optionally finds the optimal kernel size for the given data using the algorithm from (Shimazaki, Shinomoto. Journal of Computational Neuroscience. 2010).
Parameters:
Returns: Three values: a dictionary (with the same indices as trains) of spike density estimates, a dictionary of the kernel sizes used, and the points at which the estimates are evaluated.
Return type: dict, dict, Quantity 1D
Bases: spykeutils.signal_processing.Kernel
Unnormalized: $K(t) = e^{-t/\sigma} \Theta(t)$ with $\Theta(t)$ the Heaviside step function and kernel size $\sigma$.
Normalized to unit area: $K(t) = \frac{1}{\sigma} e^{-t/\sigma} \Theta(t)$
Bases: spykeutils.signal_processing.SymmetricKernel
Unnormalized: $K(t) = e^{-t^2 / (2\sigma^2)}$ with kernel size $\sigma$ (corresponds to the standard deviation of a Gaussian distribution).
Normalized to unit area: $K(t) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-t^2 / (2\sigma^2)}$
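The Gaussian kernel can be evaluated directly. A minimal sketch in plain Python without quantity units (the actual class-based API of spykeutils is not reproduced here):

```python
import math

# Hypothetical sketch: evaluate a Gaussian kernel at time t with kernel
# size sigma (the standard deviation). With normalize=True the kernel
# integrates to unit area.

def gaussian_kernel(t, kernel_size, normalize=True):
    value = math.exp(-t ** 2 / (2.0 * kernel_size ** 2))
    if normalize:
        value /= math.sqrt(2.0 * math.pi) * kernel_size
    return value
```

At $t = 0$ the normalized kernel evaluates to $1 / (\sqrt{2\pi}\,\sigma)$, which is a quick sanity check of the normalization factor.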
Bases: object
Base class for kernels.
Calculates the boundary $b$ so that the integral from $-b$ to $b$ encloses at least a certain fraction of the integral over the complete kernel.
Parameters: fraction (float) – Fraction of the whole area which at least has to be enclosed.
Returns: The boundary $b$.
Return type: Quantity scalar
Returns the factor needed to normalize the kernel to unit area.
Parameters: kernel_size (Quantity scalar) – Controls the width of the kernel.
Returns: Factor to normalize the kernel to unit area.
Return type: Quantity scalar
Calculates the sum of all element pair distances for each pair of vectors.
If $v_i$ and $v_j$ are the $i$-th and $j$-th vector from vectors and $K$ the kernel, the resulting entry in the 2D array will be $D_{ij} = \sum_{k} \sum_{l} K(v_i[k] - v_j[l])$.
Parameters:
Return type: Quantity 2D
Bases: spykeutils.signal_processing.Kernel
Creates a kernel from a function. Please note that not all methods for such a kernel are implemented.
Bases: spykeutils.signal_processing.SymmetricKernel
Unnormalized: $K(t) = e^{-|t|/\sigma}$ with kernel size $\sigma$.
Normalized to unit area: $K(t) = \frac{1}{2\sigma}\, e^{-|t|/\sigma}$
Bases: spykeutils.signal_processing.SymmetricKernel
Unnormalized: $K(t) = 1$ for $|t| < \sigma$ and $0$ otherwise, with kernel size $\sigma$ corresponding to the half width.
Normalized to unit area: $K(t) = \frac{1}{2\sigma}$ for $|t| < \sigma$ and $0$ otherwise
Bases: spykeutils.signal_processing.Kernel
Base class for symmetric kernels.
Bases: spykeutils.signal_processing.SymmetricKernel
Unnormalized: $K(t) = 1 - \frac{|t|}{\sigma}$ for $|t| < \sigma$ and $0$ otherwise, with kernel size $\sigma$ corresponding to the half width.
Normalized to unit area: $K(t) = \frac{1}{\sigma}\left(1 - \frac{|t|}{\sigma}\right)$ for $|t| < \sigma$ and $0$ otherwise
Returns a kernel of desired size.
Parameters:
Returns: A Kernel with the desired kernel size. If obj is already a Kernel instance, a shallow copy of this instance with changed kernel size will be returned. If obj is a function, it will be wrapped in a Kernel instance.
Return type: Kernel
Discretizes a kernel.
Parameters:
Return type: Quantity 1D
Smooths a binned representation (e.g. of a spike train) by convolving it with a kernel.
Parameters:
Returns: The smoothed representation of binned.
Return type: Quantity 1D
Convolves a neo.core.SpikeTrain with a kernel.
Parameters:
Returns: The convolved spike train and the boundaries of the discretization bins.
Return type: (Quantity 1D, Quantity 1D with the inverse units of sampling_rate)
Generate a homogeneous Poisson spike train. The length is controlled with t_stop and max_spikes; at least one of these arguments has to be given.
Parameters:
Returns: The generated spike train.
Return type: neo.core.SpikeTrain
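A homogeneous Poisson process has exponentially distributed inter-spike intervals, so the generation loop is short. A minimal sketch with plain floats instead of a neo.core.SpikeTrain (parameter names are assumptions):

```python
import random

# Hypothetical sketch: draw spike times with exponential inter-spike
# intervals until t_stop or max_spikes is reached.

def gen_homogeneous_poisson(rate, t_start=0.0, t_stop=None,
                            max_spikes=None, rng=None):
    if t_stop is None and max_spikes is None:
        raise ValueError('Either t_stop or max_spikes has to be given')
    rng = rng or random.Random()
    spikes = []
    t = t_start
    while True:
        t += rng.expovariate(rate)
        if t_stop is not None and t > t_stop:
            break
        spikes.append(t)
        if max_spikes is not None and len(spikes) >= max_spikes:
            break
    return spikes
```

Passing a seeded random.Random makes the output reproducible, which is useful for tests.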
Generate an inhomogeneous Poisson spike train. The length is controlled with t_stop and max_spikes; at least one of these arguments has to be given.
Parameters:
Returns: The generated spike train.
Return type: neo.core.SpikeTrain
Calculates the Cauchy-Schwarz distance between two spike trains given a smoothing filter.
Let $v_a(t)$ and $v_b(t)$ with $t \in T$ be the spike trains convolved with some smoothing filter and $V(a, b) = \int_T v_a(t) v_b(t)\, dt$. Then, the Cauchy-Schwarz distance of the spike trains is defined as $d_{CS}(a, b) = \arccos \frac{V(a, b)^2}{V(a, a)\, V(b, b)}$.
The Cauchy-Schwarz distance is closely related to the Schreiber et al. similarity measure $S_S$ by $d_{CS} = \arccos S_S^2$.
This function numerically convolves the spike trains with the smoothing filter which can be quite slow and inaccurate. If the analytical result of the autocorrelation of the smoothing filter is known, one can use schreiber_similarity() for a more efficient and precise calculation.
Further information can be found in Paiva, A. R. C., Park, I., & Principe, J. (2010). Inner products for representation and learning in the spike train domain. Statistical Signal Processing for Neuroscience and Neurotechnology, Academic Press, New York.
Parameters:
Returns: Matrix containing the Cauchy-Schwarz distance of all pairs of spike trains.
Return type: 2-D array
Calculates the event synchronization.
Let $c^\tau(a|b)$ be the count of spikes in $a$ which occur shortly before a spike in $b$ with a time difference of less than $\tau$. Moreover, let $n_a$ and $n_b$ be the total number of spikes in the spike trains $a$ and $b$. The event synchrony is then defined as $Q_T = \frac{c^\tau(a|b) + c^\tau(b|a)}{\sqrt{n_a n_b}}$.
The maximum time lag $\tau$ can be determined automatically for each pair of spikes $t_i^a$ and $t_j^b$ by the formula $\tau_{ij} = \frac{1}{2} \min\{t_{i+1}^a - t_i^a,\ t_i^a - t_{i-1}^a,\ t_{j+1}^b - t_j^b,\ t_j^b - t_{j-1}^b\}$.
Further and more detailed information can be found in Quiroga, R. Q., Kreuz, T., & Grassberger, P. (2002). Event synchronization: a simple and fast method to measure synchronicity and time delay patterns. Physical Review E, 66(4), 041904.
Parameters:
Returns: Matrix containing the event synchronization for all pairs of spike trains.
Return type: 2-D array
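For a fixed maximum lag, the measure reduces to counting near-coincident spike pairs in both directions. A minimal sketch in plain Python (the adaptive per-pair lag and the directional counting convention of the real implementation are simplified here, and coincident pairs contribute 1/2 as in Quiroga et al.):

```python
import math

# Hypothetical sketch: event synchronization with a fixed maximum lag.

def event_synchronization(train_a, train_b, tau):
    """c(x|y) counts spikes in x that closely follow a spike in y;
    exactly coincident pairs contribute 1/2 each."""
    def c(x, y):
        total = 0.0
        for tx in x:
            for ty in y:
                if tx == ty:
                    total += 0.5
                elif 0.0 < tx - ty < tau:
                    total += 1.0
        return total

    if not train_a or not train_b:
        return 0.0
    return (c(train_a, train_b) + c(train_b, train_a)) / math.sqrt(
        len(train_a) * len(train_b))
```

Identical trains whose inter-spike intervals exceed tau yield a synchrony of 1; trains with no spikes within tau of each other yield 0.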
Calculates the Hunter-Milton similarity measure.
If the kernel function is denoted as $K(t)$, a function $d(a \rightarrow b) = \frac{1}{n_a} \sum_{i} K(t_i^a - \tilde{t}_i^b)$ can be defined, with $\tilde{t}_i^b$ being the spike in spike train $b$ closest to the spike $t_i^a$ in spike train $a$. With this, the Hunter-Milton similarity measure is $S_H = \frac{1}{2}\left(d(a \rightarrow b) + d(b \rightarrow a)\right)$.
This implementation returns 0 if one of the spike trains is empty, but 1 if both are empty.
Further information can be found in Hunter, J. D., & Milton, J. G. (2003). Amplitude and frequency dependence of spike timing: implications for dynamic regulation. Journal of Neurophysiology.
Parameters:
Returns: Matrix containing the Hunter-Milton similarity for all pairs of spike trains.
Return type: 2-D array
Calculates the norm distance between spike trains given a smoothing filter.
Let $v_a(t)$ and $v_b(t)$ with $t \in T$ be the spike trains convolved with some smoothing filter. Then, the norm distance of the spike trains is defined as $d_{ND}(a, b) = \sqrt{\int_T \left(v_a(t) - v_b(t)\right)^2 dt}$.
Further information can be found in Paiva, A. R. C., Park, I., & Principe, J. (2010). Inner products for representation and learning in the spike train domain. Statistical Signal Processing for Neuroscience and Neurotechnology, Academic Press, New York.
Parameters:
Returns: Matrix containing the norm distance of all pairs of spike trains given the smoothing_filter.
Return type: Quantity 2D with units depending on the smoothing filter (usually temporal frequency units)
Calculates the Schreiber et al. similarity measure between spike trains given a kernel.
Let $v_a(t)$ and $v_b(t)$ with $t \in T$ be the spike trains convolved with some smoothing filter and $V(a, b) = \int_T v_a(t) v_b(t)\, dt$. The autocorrelation of the smoothing filter corresponds to the kernel used to analytically calculate the Schreiber et al. similarity measure. It is defined as $S_S(a, b) = \frac{V(a, b)}{\sqrt{V(a, a)\, V(b, b)}}$. It is closely related to the Cauchy-Schwarz distance by $d_{CS} = \arccos S_S^2$.
In contrast to cs_dist(), which numerically convolves the spike trains with a smoothing filter, this function directly uses the kernel resulting from the smoothing filter’s autocorrelation. This allows a more accurate and faster calculation.
Further information can be found in Schreiber, S., Fellous, J. M., Whitmer, D., Tiesinga, P., & Sejnowski, T. J. (2003). A new correlation-based measure of spike timing reliability. Neurocomputing.
Parameters:
Returns: Matrix containing the Schreiber et al. similarity measure of all pairs of spike trains.
Return type: 2-D array
Calculates the inner product of spike trains given a smoothing filter.
Let $v_a(t)$ and $v_b(t)$ with $t \in T$ be the spike trains convolved with some smoothing filter. Then, the inner product of the spike trains is defined as $\langle a, b \rangle = \int_T v_a(t) v_b(t)\, dt$.
Further information can be found in Paiva, A. R. C., Park, I., & Principe, J. (2010). Inner products for representation and learning in the spike train domain. Statistical Signal Processing for Neuroscience and Neurotechnology, Academic Press, New York.
Parameters:
Returns: Matrix containing the inner product for each pair of spike trains, with one spike train from a and the other one from b.
Return type: Quantity 2D with units depending on the smoothing filter (usually temporal frequency units)
Calculates the spike train norm given a smoothing filter.
Let $v_a(t)$ with $t \in T$ be a spike train convolved with some smoothing filter. Then, the norm of the spike train is defined as $\|a\| = \sqrt{\int_T v_a(t)^2\, dt}$.
Further information can be found in Paiva, A. R. C., Park, I., & Principe, J. (2010). Inner products for representation and learning in the spike train domain. Statistical Signal Processing for Neuroscience and Neurotechnology, Academic Press, New York.
Parameters:
Returns: The norm of the spike train given the smoothing_filter.
Return type: Quantity scalar with units depending on the smoothing filter (usually temporal frequency units)
Calculates the van Rossum distance.
It is defined as Euclidean distance of the spike trains convolved with a causal decaying exponential smoothing filter. A detailed description can be found in Rossum, M. C. W. (2001). A novel spike distance. Neural Computation, 13(4), 751-763. This implementation is normalized to yield a distance of 1.0 for the distance between an empty spike train and a spike train with a single spike. Divide the result by sqrt(2.0) to get the normalization used in the cited paper.
Given $N$ spike trains with $n$ spikes on average, the run-time complexity of this function is $O(N^2 n^2)$. An implementation in $O(N^2 n)$ would be possible but has a high constant factor, rendering it slower in practical cases.
Parameters:
Returns: Matrix containing the van Rossum distances for all pairs of spike trains.
Return type: 2-D array
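For the causal exponential filter, the inner products of the filtered trains have a closed form, so the distance can be computed without numerical convolution. A minimal sketch in plain Python, using the normalization described above (distance 1 between an empty train and a single-spike train):

```python
import math

# Hypothetical sketch: van Rossum distance via closed-form inner
# products of exponentially filtered spike trains.
# corr(x, y) = sum over spike pairs of exp(-|tx - ty| / tau).

def van_rossum_dist(train_a, train_b, tau):
    def corr(x, y):
        return sum(math.exp(-abs(tx - ty) / tau) for tx in x for ty in y)

    d_sq = (corr(train_a, train_a) + corr(train_b, train_b)
            - 2.0 * corr(train_a, train_b))
    # Guard against tiny negative values from floating point round-off.
    return math.sqrt(max(d_sq, 0.0))
```

This double sum is the $O(N^2 n^2)$ pairwise approach mentioned above; the markage trick of Houghton and Kreuz would reduce the per-pair cost.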
Calculates the van Rossum multi-unit distance.
The single-unit distance is defined as Euclidean distance of the spike trains convolved with a causal decaying exponential smoothing filter. A detailed description can be found in Rossum, M. C. W. (2001). A novel spike distance. Neural Computation, 13(4), 751-763. This implementation is normalized to yield a distance of 1.0 for the distance between an empty spike train and a spike train with a single spike. Divide the result by sqrt(2.0) to get the normalization used in the cited paper.
Given the $i$-th and $j$-th spike train of $a$ and respectively $b$, let $d_{ij}$ be the single-unit distance between these two spike trains. Then the multi-unit distance is $D(a, b) = \sqrt{\sum_i \left( d_{ii}^2 + w \sum_{j \neq i} d_{ij}^2 \right)}$ with $w$ being equal to weighting. The weighting parameter controls the interpolation between a labeled-line and a summed-population coding.
More information can be found in Houghton, C., & Kreuz, T. (2012). On the efficient calculation of van Rossum distances. Network: Computation in Neural Systems, 23(1-2), 48-58.
Given $N$ spike trains in total with $n$ spikes on average, the run-time complexity of this function is $O(N^2 n^2)$, and $O(N^2 + Nn)$ memory will be needed.
Parameters:
Returns: A 2-D array with the multi-unit distance for each pair of trials.
Return type: 2-D array
Calculates the Victor-Purpura (VP) distance. It is often denoted as $D^{\text{spike}}[q]$.
It is defined as the minimal cost of transforming spike train a into spike train b by using the following operations:
- Inserting or deleting a spike (cost 1.0).
- Shifting a spike from $t$ to $t'$ (cost $q \cdot |t - t'|$).
A detailed description can be found in Victor, J. D., & Purpura, K. P. (1996). Nature and precision of temporal coding in visual cortex: a metric-space analysis. Journal of Neurophysiology.
Given the average number of spikes $n$ in a spike train and $N$ spike trains, the run-time complexity of this function is $O(N^2 n^2)$, and $O(N^2 + n^2)$ memory will be needed.
Parameters:
Returns: Matrix containing the VP distance of all pairs of spike trains.
Return type: 2-D array
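The minimal transformation cost for one pair of trains is found with an edit-distance style dynamic program. A minimal sketch in plain Python (unit handling and the pairwise matrix loop of the real function are omitted):

```python
# Hypothetical sketch: Victor-Purpura distance between two sorted spike
# trains by dynamic programming. g[i][j] is the cost of transforming the
# first i spikes of a into the first j spikes of b using the operations
# insert/delete (cost 1) and shift (cost q * |dt|).

def victor_purpura_dist(train_a, train_b, q):
    na, nb = len(train_a), len(train_b)
    g = [[0.0] * (nb + 1) for _ in range(na + 1)]
    for i in range(1, na + 1):
        g[i][0] = float(i)          # delete all remaining spikes of a
    for j in range(1, nb + 1):
        g[0][j] = float(j)          # insert all remaining spikes of b
    for i in range(1, na + 1):
        for j in range(1, nb + 1):
            shift = q * abs(train_a[i - 1] - train_b[j - 1])
            g[i][j] = min(g[i - 1][j] + 1.0,      # delete a spike
                          g[i][j - 1] + 1.0,      # insert a spike
                          g[i - 1][j - 1] + shift)  # shift a spike
    return g[na][nb]
```

For $q = 0$ the distance reduces to the difference in spike counts; for very large $q$ it approaches the total number of non-coincident spikes.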
Calculates the Victor-Purpura (VP) multi-unit distance.
It is defined as the minimal cost of transforming the spike trains a into spike trains b by using the following operations:
- Inserting or deleting a spike (cost 1.0).
- Shifting a spike from $t$ to $t'$ (cost $q \cdot |t - t'|$).
- Moving a spike to another spike train (cost reassignment_cost).
A detailed description can be found in Aronov, D. (2003). Fast algorithm for the metric-space analysis of simultaneous responses of multiple single neurons. Journal of Neuroscience Methods.
Given the average number of spikes $n$ in a spike train and $L$ units with $N$ spike trains each, the run-time and memory requirements are polynomial in $n$ but grow exponentially with the number of units $L$.
For calculating the distance between only two units one should use victor_purpura_dist() which is more efficient.
Parameters:
Returns: A 2-D array with the multi-unit distance for each pair of trials.
Return type: 2-D array
Functions for estimating the quality of spike sorting results. These functions estimate false positive and false negative fractions.
Return a dict of tuples (False positive rate, false negative rate) indexed by unit.
Deprecated since version 0.2.1.
Use overlap_fp_fn() instead.
Details for the calculation can be found in (Hill et al. The Journal of Neuroscience. 2011). This function works on prewhitened data, which means it assumes that all clusters have a uniform normal distribution. Data can be prewhitened using the noise covariance matrix.
The calculation for total false positive and false negative rates does not follow (Hill et al. The Journal of Neuroscience. 2011), where a simple addition of pairwise probabilities is proposed. Instead, the total error probabilities are estimated using all clusters at once.
Parameters:
Returns: Two values: a dictionary of false positive rates and a dictionary of false negative rates, both indexed by unit.
Return type: dict, dict
Return the rate of false positives calculated from refractory period calculations for each unit. The equation used is described in (Hill et al. The Journal of Neuroscience. 2011).
Parameters:
Returns: A dictionary of false positive rates indexed by unit. Note that values above 0.5 cannot be directly interpreted as a false positive rate! These very high values can e.g. indicate that the generating processes are not independent.
Return the refractory period violations in the given spike trains for the specified refractory period.
Parameters:
Returns: Two values: the total number of violations and a dictionary (indexed by unit) of violation counts.
Return type: int, dict
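A refractory period violation is simply an inter-spike interval shorter than the refractory period. A minimal sketch in plain Python, assuming one spike train per unit key (the real function accepts lists of spike trains per unit):

```python
# Hypothetical sketch: count inter-spike intervals shorter than the
# refractory period, per unit and in total.

def refperiod_violations(trains, refractory_period):
    """trains maps a unit key to a list of spike times."""
    per_unit = {}
    for key, train in trains.items():
        spikes = sorted(train)
        per_unit[key] = sum(
            1 for a, b in zip(spikes, spikes[1:])
            if b - a < refractory_period)
    return sum(per_unit.values()), per_unit
```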
Return dicts of tuples (False positive rate, false negative rate) indexed by unit. This function needs sklearn if covariances is not set to 'white'.
This function estimates the pairwise and total false positive and false negative rates for a number of waveform clusters. The results can be interpreted as follows: False positives are the fraction of spikes in a cluster that are estimated to belong to a different cluster (a specific cluster for pairwise results or any other cluster for total results). False negatives are the number of spikes from other clusters that are estimated to belong to a given cluster (also expressed as a fraction; this number can be larger than 1 in extreme cases).
Details for the calculation can be found in (Hill et al. The Journal of Neuroscience. 2011). The calculation for total false positive and false negative rates does not follow Hill et al., who propose a simple addition of pairwise probabilities. Instead, the total error probabilities are estimated using all clusters at once.
Parameters:
Returns: Two values: a dictionary of pairwise error estimates and a dictionary of total error estimates, both indexed by unit.
Return type: dict, dict
Returns the fraction of variance in each channel that is explained by the means.
Values below 0 or above 1 for large data sizes indicate that some assumptions were incorrect (e.g. about channel noise) and the results should not be trusted.
Parameters:
Returns: A dictionary of arrays, indexed by unit. If noise is None, the dictionary contains the fraction of explained variance per channel without taking noise into account. If noise is given, it contains the fraction of variance per channel explained by the means and the given noise level together.
Return type: dict
Return a spike amplitude histogram.
The resulting histogram is useful to assess the drift in spike amplitude over a longer recording. It shows histograms (one for each trains entry, e.g. segment) of maximum and minimum spike amplitudes.
Parameters:
Returns: A tuple with three values.
Return type: (ndarray, list, list)
Applies a function to all spike trains in a dictionary of spike train sequences.
Parameters:
Returns: A new dictionary with the same keys as dictionary.
Return type: dict
Creates binned representations of spike trains.
Parameters:
Returns: A dictionary (with the same indices as trains) of lists of spike train counts and the bin borders.
Return type: dict, Quantity 1D with time units
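Binning a dictionary of spike train lists is a small loop over trains and bins. A minimal sketch in plain Python without quantity units (parameter names are assumptions, not the actual signature):

```python
# Hypothetical sketch: bin each spike train in a dictionary of spike
# train lists. Returns the binned counts (same keys as trains) and the
# bin borders.

def bin_spike_trains(trains, bin_size, t_start, t_stop):
    n_bins = int((t_stop - t_start) / bin_size)
    borders = [t_start + i * bin_size for i in range(n_bins + 1)]
    binned = {}
    for key, train_list in trains.items():
        binned[key] = []
        for train in train_list:
            counts = [0] * n_bins
            for t in train:
                if t_start <= t < t_stop:
                    counts[int((t - t_start) / bin_size)] += 1
            binned[key].append(counts)
    return binned, borders
```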
Concatenates spike trains.
Parameters: trains (sequence) – neo.core.SpikeTrain objects to concatenate.
Returns: A spike train consisting of the concatenated spike trains. The spikes will be in the order of the given spike trains, and t_start and t_stop will be set to the minimum and maximum value.
Return type: neo.core.SpikeTrain
Extract spikes with waveforms from analog signals using a spike train. Spikes that are too close to the beginning or end of the shortest signal to be fully extracted are ignored.
Parameters:
Returns: A list of neo.core.Spike objects, one for each time point in train. All returned spikes include their waveform property.
Return type: list
Computes the minimum starting time and maximum end time of all given spike trains. This yields an interval containing the spikes of all spike trains.
Parameters:
Returns: Minimum t_start time and maximum t_stop time as time scalars.
Return type: Quantity scalar, Quantity scalar
Computes the maximum starting time and minimum end time that all given spike trains share. This yields the shortest interval shared by all spike trains.
Parameters:
Returns: Maximum shared t_start time and minimum shared t_stop time as time scalars.
Return type: Quantity scalar, Quantity scalar
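The shared interval is just the maximum of all start times paired with the minimum of all stop times. A minimal sketch, modeling each spike train as a (t_start, t_stop) tuple instead of a neo object:

```python
# Hypothetical sketch: largest shared start and smallest shared stop
# time of all spike trains, each modeled as a (t_start, t_stop) tuple.

def minimum_spike_train_interval(trains):
    t_start = max(t[0] for t in trains)
    t_stop = min(t[1] for t in trains)
    return t_start, t_stop
```

If the returned start exceeds the returned stop, the trains share no common interval; the caller should check for that case.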
Removes a Neo object from the hierarchy it is embedded in. Mostly downward links are removed (except for possible links in neo.core.Spike or neo.core.SpikeTrain objects). For example, when obj is a neo.core.Segment, the link from its parent neo.core.Block will be severed. Also, all links to the segment from its spikes and spike trains will be severed.
Parameters: