wyrm package¶
Submodules¶
wyrm.io module¶
Various input/output methods.
This module provides methods for loading and saving data from and into various formats.
class wyrm.io.PyffComm(host='localhost', port=12345)¶
Bases: object
Pyff communication object.
This class allows for communication with a running Pyff [1] instance. It uses the JSON protocol, so you have to start Pyff with the --protocol=json parameter.
Receiving data from Pyff (i.e. the available feedbacks and variables) is not supported for now.
Examples
This is an example session, demonstrating how to load a feedback application, set a variable, start it, quit it and closing Pyff in the end.
>>> pyff = PyffComm()
>>> pyff.send_init('TrivialPong')
>>> pyff.set_variables({'FPS': 30})
>>> pyff.play()
>>> pyff.quit()
>>> pyff.quit_pyff()
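The messages exchanged with Pyff over this connection are JSON dictionaries. As an illustration, a hypothetical helper that builds such a message might look like the sketch below; the exact field names of the wire format are an assumption here, not Pyff's documented protocol.

```python
import json

def make_interaction_msg(cmd, data=None):
    # Hypothetical message layout -- consult the Pyff documentation
    # for the actual field names used by the JSON protocol.
    msg = {'type': 'interaction-signal', 'commands': [cmd], 'data': data}
    return json.dumps(msg)

# A message asking the loaded feedback to start:
payload = make_interaction_msg('play')
```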
References
[1] Bastian Venthur, Simon Scholler, John Williamson, Sven Dähne, Matthias S Treder, Maria T Kramarek, Klaus-Robert Müller and Benjamin Blankertz. Pyff—A Pythonic Framework for Feedback Applications and Stimulus Presentation in Neuroscience. Frontiers in Neuroscience. 2010. doi: 10.3389/fnins.2010.00179.
send_control_signal(variables)¶
Send a control signal to the running feedback.
This method is used to send events, like a new classifier output, to the feedback.
Parameters: variables (dict) – the keys are the variable names and the values are the variable values. Variables sent by the control signal are not set directly in the feedback. If you want that behaviour, use set_variables().
send_init(fb)¶
Load a Feedback.
This method sends Pyff the send_init(feedback) command, which loads a feedback.
Parameters: fb (string) – The name of the feedback.
send_interaction_signal(cmd, data=None)¶
Send an interaction signal to Pyff.
Warning
This method is used internally to send low-level JSON messages to Pyff. You should not use this method directly.
Parameters:
- cmd (str)
- data (dict)
wyrm.io.convert_mushu_data(data, markers, fs, channels)¶
Convert Mushu data into wyrm's Data format.
This convenience method creates a continuous Data object from the parameters given. The timeaxis always starts from zero and its values are calculated from the sampling frequency fs and the length of data. The names and units attributes are filled with default values.
Parameters:
- data (2d array) – a 2-dimensional numpy array with the axes: (time, channel)
- markers (list of tuples: (float, str)) – a list of markers. Each element is a tuple of timestamp and string. The timestamp is the time in ms relative to the onset of the block of data. Note that negative values are allowed as well as values bigger than the length of the block of data returned. That is to be interpreted as a marker from the last block and a marker for a future block respectively.
- fs (float) – the sampling frequency, this number is used to calculate the timeaxis for the data
- channels (list or 1d array of strings) – the channel names
Returns: cnt (continuous Data object)
Examples
Assuming that amp is an Amplifier instance from libmushu, already configured but not started yet:
>>> amp_fs = amp.get_sampling_frequency()
>>> amp_channels = amp.get_channels()
>>> amp.start()
>>> while True:
...     data, markers = amp.get_data()
...     cnt = convert_mushu_data(data, markers, amp_fs, amp_channels)
...     # some more code
>>> amp.stop()
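The timeaxis construction described above is straightforward to sketch in plain numpy (an illustration of the idea, not wyrm's actual code): the axis starts at zero and advances by 1000/fs milliseconds per sample.

```python
import numpy as np

def make_timeaxis(n_samples, fs):
    # One timestamp in milliseconds per sample, starting at zero.
    return np.arange(n_samples) * 1000.0 / fs

block = np.zeros((100, 3))          # (time, channel) block of data
t = make_timeaxis(block.shape[0], fs=100.0)
```

At 100 Hz the samples are 10 ms apart, so the axis reads 0, 10, 20, and so on.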
wyrm.io.load(filename)¶
Load a Data object from a file.
Parameters: filename (str) – the file to load the data from
Returns: dat (Data) – the data loaded from the file
Examples
>>> io.save(dat, 'foo.npy')
>>> dat2 = io.load('foo.npy')
wyrm.io.load_bcicomp3_ds1(dirname)¶
Load the BCI Competition III Data Set 1.
This method loads the data set and converts it into Wyrm's Data format. Before you use it, you have to download the training and test data in Matlab format and unpack it into a directory.
Note
If you need the true labels of the test sets, you'll have to download them separately from http://bbci.de/competition/iii/results/index.html#labels
Parameters: dirname (str) – the directory where Competition_train.mat and Competition_test.mat are located
Returns: epo_train, epo_test (epoched Data objects)
Examples
>>> epo_train, epo_test = load_bcicomp3_ds1('/home/foo/bcicomp3_dataset1/')
wyrm.io.load_bcicomp3_ds2(filename)¶
Load the BCI Competition III Data Set 2.
This method loads the data set and converts it into Wyrm's Data format. Before you use it, you have to download the data set in Matlab format and unpack it. The directory with the extracted files must contain the Subject_*.mat and eloc64.txt files.
Note
If you need the true labels of the test sets, you'll have to download them separately from http://bbci.de/competition/iii/results/index.html#labels
Parameters: filename (str) – The path to the matlab file to load
Returns: cnt (continuous Data object)
Examples
>>> dat = load_bcicomp3_ds2('/home/foo/data/Subject_A_Train.mat')
wyrm.io.load_brain_vision_data(vhdr)¶
Load Brain Vision data from a file.
This method loads the continuous EEG data and returns a Data object of continuous data [time, channel], along with the markers and the sampling frequency. The EEG data is returned in microvolts.
Parameters: vhdr (str) – Path to a VHDR file
Returns: dat (Data) – Continuous Data with the additional attributes .fs for the sampling frequency and .marker for a list of markers. Each marker is a tuple of (time in ms, marker).
Raises: AssertionError – If one of the consistency checks fails
Examples
>>> dat = load_brain_vision_data('path/to/vhdr')
>>> dat.fs
1000
>>> dat.data.shape
(54628, 61)
wyrm.io.load_mushu_data(meta)¶
Load saved EEG data in Mushu's format.
This method loads saved data in Mushu's format and returns a continuous Data object.
Parameters: meta (str) – Path to the .meta file. A Mushu recording consists of three different files: .eeg, .marker, and .meta.
Returns: dat (Data) – Continuous Data object
Examples
>>> dat = load_mushu_data('testrecording.meta')
wyrm.io.save(dat, filename)¶
Save a Data object into a NumPy .npy file.
Parameters:
- dat (Data) – Data object
- filename (str) – Filename of the file to save to. If the filename does not end with .npy, the .npy extension will be appended automatically.
Examples
>>> io.save(dat, 'foo.npy')
>>> dat2 = io.load('foo.npy')
wyrm.misc module¶
Miscellaneous Helper Methods.
class wyrm.misc.deprecated(since, alternative=None)¶
Bases: object
Mark a method as deprecated.
This class is used internally to mark methods as deprecated. Deprecated methods will print a warning when used, but will otherwise function as usual.
Parameters: - since (str) – The version number that introduced the deprecation of the method.
- alternative (str, optional) – The method that should be used instead.
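A class like this is typically implemented as a decorator that wraps the target function and emits a DeprecationWarning. The following is a minimal sketch of that pattern, not wyrm's actual implementation:

```python
import functools
import warnings

class deprecated(object):
    """Minimal sketch of a deprecation-marking decorator class."""

    def __init__(self, since, alternative=None):
        self.since = since
        self.alternative = alternative

    def __call__(self, func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            # Warn on every call, then delegate to the original function.
            msg = "%s is deprecated since version %s." % (func.__name__, self.since)
            if self.alternative:
                msg += " Use %s instead." % self.alternative
            warnings.warn(msg, DeprecationWarning, stacklevel=2)
            return func(*args, **kwargs)
        return wrapper

@deprecated('1.1', alternative='new_method')
def old_method(x):
    return x * 2
```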
wyrm.plot module¶
Plotting methods.
This module contains various plotting methods. There are two types of plotting methods: the Primitives and the Composites. The Primitives are the most basic and offer simple, single-plot representations. The Composites are composed of several primitives and offer more complex representations.
The primitive plots are those whose names begin with ax_ (e.g. ax_scalp).
In order to get more reasonable defaults for colors etc., you can call the module's beautify() method:
from wyrm import plot
plot.beautify()
Warning
This module needs heavy reworking! We have yet to find a consistent way to handle primitive and composite plots, deal with the fact that some plots just manipulate axes while others operate on figures, and decide on which layer of matplotlib we want to operate (i.e. pyplot, artist or even pylab).
The API of this module will change and you should not rely on any method here.
wyrm.plot.ax_colorbar(vmin, vmax, ax=None, label=None, ticks=None, colormap=None)¶
Draw a color bar.
Draws a color bar on an existing axes. The range of the colors is defined by vmin and vmax.
Note
Unlike the colorbar method from matplotlib, this method does not automatically create a new axis for the colorbar. It will paint in the currently active axis instead, overwriting any existing plots in that axis. Make sure to create a new axis for the colorbar.
Parameters:
- vmin, vmax (float) – The minimum and maximum values for the colorbar.
- ax (Axes, optional) – The axes to draw the colorbar on. If not provided, the currently active axes (i.e. gca()) will be taken.
- label (string, optional) – The label for the colorbar.
- ticks (list, optional) – The tick positions.
- colormap (matplotlib.colors.colormap, optional) – A colormap to define the color transitions.
Returns: ax (Axes) – the axes on which the plot was drawn
wyrm.plot.ax_scalp(v, channels, ax=None, annotate=False, vmin=None, vmax=None, colormap=None)¶
Draw a scalp plot.
Draws a scalp plot on an existing axes. The method takes an array of values and an array of the corresponding channel names. It matches the channel names with an internal list of known channels and their positions to project them correctly on the scalp.
Warning
The behaviour for unknown channels is undefined.
Parameters:
- v (1d array of floats) – The values for the channels.
- channels (1d array of strings) – The corresponding channel names for the values in v.
- ax (Axes, optional) – The axes to draw the scalp plot on. If not provided, the currently active axes (i.e. gca()) will be taken.
- annotate (Boolean, optional) – Draw the channel names next to the channel markers.
- vmin, vmax (float, optional) – The display limits for the values in v. If the data in v contains values between -3..3 and vmin and vmax are set to -1 and 1, all values smaller than -1 and bigger than 1 will appear the same as -1 and 1. If not set, the maximum absolute value in v is taken to calculate both values.
- colormap (matplotlib.colors.colormap, optional) – A colormap to define the color transitions.
Returns: ax (Axes) – the axes on which the plot was drawn
wyrm.plot.beautify()¶
Set reasonable defaults for matplotlib.
This method replaces matplotlib's default rgb/cmyk colors with colorized colors. It also:
- re-orders the default color cycle
- sets the default linewidth
- replaces the default 'RdBu' cmap
- sets the default cmap to 'RdBu'
Examples
You can safely call beautify right after you've imported the plot module.
>>> from wyrm import plot
>>> plot.beautify()
wyrm.plot.calc_centered_grid(cols_list, hpad=0.05, vpad=0.05)¶
Calculates a centered grid of Rectangles and their positions.
Parameters: - cols_list ([int]) – List of ints. Every entry represents a row with as many channels as the value.
- hpad (float, optional) – The amount of horizontal padding (default: 0.05).
- vpad (float, optional) – The amount of vertical padding (default: 0.05).
Returns: [[float, float, float, float]] – A list of all rectangle positions in the form of [x, y, width, height], sorted from top left to bottom right.
Examples
Calculates a centered grid with 3 rows of 4, 3 and 2 columns
>>> calc_centered_grid([4, 3, 2])
Calculates a centered grid with more padding
>>> calc_centered_grid([5, 4], hpad=.1, vpad=.75)
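The layout computed here can be sketched as follows. This is a simplified reimplementation to illustrate the idea; the cell sizing and exact padding rules are assumptions, not wyrm's algorithm.

```python
def centered_grid(cols_list, hpad=0.05, vpad=0.05):
    """Return [x, y, width, height] rectangles in the unit square,
    one row per entry in cols_list, each row horizontally centered."""
    nrows = len(cols_list)
    maxcols = max(cols_list)
    # Cell size chosen so the widest row fills the figure.
    cell_w = (1.0 - (maxcols + 1) * hpad) / maxcols
    cell_h = (1.0 - (nrows + 1) * vpad) / nrows
    rects = []
    for row, ncols in enumerate(cols_list):
        row_w = ncols * cell_w + (ncols - 1) * hpad
        x0 = (1.0 - row_w) / 2.0                  # center the row
        y = 1.0 - (row + 1) * (cell_h + vpad)     # rows run top to bottom
        for col in range(ncols):
            x = x0 + col * (cell_w + hpad)
            rects.append([x, y, cell_w, cell_h])
    return rects

grid = centered_grid([4, 3, 2])
```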
wyrm.plot.get_channelpos(channame)¶
Return the x/y position of a channel.
This method calculates the stereographic projection of a channel from CHANNEL_10_20, suitable for a scalp plot.
Parameters: channame (str) – Name of the channel; the search is case insensitive.
Returns: x, y (float or None) – The projected point on the plane if the point is known, otherwise None
Examples
>>> plot.get_channelpos('C2')
(0.1720792096741632, 0.0)
>>> # the channels are case insensitive
>>> plot.get_channelpos('c2')
(0.1720792096741632, 0.0)
>>> # lookup of an invalid channel
>>> plot.get_channelpos('foo')
None
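The stereographic projection maps a point on the unit sphere (an electrode position) onto a plane, which is what makes a flat scalp plot possible. A generic sketch of the projection follows; wyrm's exact scaling conventions may differ.

```python
import numpy as np

def stereographic(x, y, z):
    # Project a point on the unit sphere onto the z=0 plane,
    # projecting away from the "south pole" (0, 0, -1).
    return x / (1.0 + z), y / (1.0 + z)

# Hypothetical electrode at 45 degrees elevation, on the x axis:
theta = np.radians(45.0)
px, py = stereographic(np.cos(theta), 0.0, np.sin(theta))
```

The vertex of the head maps to the origin of the plot, and electrodes on the equator map to the unit circle.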
wyrm.plot.plot_channels(dat, ncols=8, chanaxis=-1, otheraxis=-2)¶
Plot all channels of a continuous or epoched Data object.
In case of an epoched Data object, the classwise average is calculated, and for each channel the respective classes are plotted.
Parameters:
- dat (Data) – continuous or epoched Data object
- ncols (int, optional) – the number of columns in the grid. The number of rows is calculated depending on ncols and the number of channels.
wyrm.plot.plot_scalp(v, channels, levels=25, colormap=None, norm=None, ticks=None, annotate=True, position=None)¶
Plots the values v for channels 'channels' on a scalp.
Calculates the interpolation of the values v for the corresponding channels 'channels' and plots it as a contour plot on a scalp. The degree of gradients as well as the appearance of the color bar can be adjusted.
Parameters: - v ([value]) – List containing the values of the channels.
- channels ([String]) – List containing the channel names.
- levels (int, optional) – The number of automatically created levels in the contour plot (default: 25).
- colormap (matplotlib.colors.colormap, optional) – A colormap to define the color transitions (default: a blue-white-red colormap).
- norm (matplotlib.colors.norm, optional) – A norm to define the min and max values (default: ‘None’, values from -10 to 10 are assumed).
- ticks (array([ints]), optional) – An array with values to define the ticks on the colorbar (default: ‘None’, 3 ticks at -10, 0 and 10 are displayed).
- annotate (Boolean, optional) – Flag to switch channel annotations on or off (default: True).
- position ([x, y, width, height], optional) – A Rectangle that limits the plot to its boundaries (default: None).
Returns: (Matplotlib.Axes, Matplotlib.Axes) – Returns a pair of Matplotlib.Axes. The first contains the plotted scalp, the second the corresponding colorbar.
Examples
Plots the values v for channels ‘channels’ on a scalp
>>> plot_scalp(v, channels)
This plot has finer gradients through increasing the levels to 50.
>>> plot_scalp(v, channels, levels=50)
This plot has a norm and ticks from 0 to 10
>>> n = matplotlib.colors.Normalize(vmin=0, vmax=10, clip=False)
>>> t = np.linspace(0.0, 10.0, 3, endpoint=True)
>>> plot_scalp(v, channels, norm=n, ticks=t)
wyrm.plot.plot_scalp_ti(v, channels, data, interval, scale_ti=0.1, levels=25, colormap=None, norm=None, ticks=None, annotate=True, position=None)¶
Plots a scalp with channels on top.
Plots the values v for channels 'channels' on a scalp as a contour plot. Additionally plots the channels in channels_ti as a timeinterval on top of the scalp plot. The individual channels are placed over their position on the scalp.
Parameters: - v ([value]) – List containing the values of the channels.
- channels ([String]) – List containing the channel names.
- data (wyrm.types.Data) – Data object containing the continuous data for the overlaying timeinterval plots.
- interval ([begin, end)) – Tuple of ints to specify the range of the overlaying timeinterval plots.
- scale_ti (float, optional) – The percentage to scale the overlaying timeinterval plots (default: 0.1).
- levels (int, optional) – The number of automatically created levels in the contour plot (default: 25).
- colormap (matplotlib.colors.colormap, optional) – A colormap to define the color transitions (default: a blue-white-red colormap).
- norm (matplotlib.colors.norm, optional) – A norm to define the min and max values. If ‘None’, values from -10 to 10 are assumed (default: None).
- ticks (array([ints]), optional) – An array with values to define the ticks on the colorbar (default: None, 3 ticks at -10, 0 and 10 are displayed).
- annotate (Boolean, optional) – Flag to switch channel annotations on or off (default: True).
- position ([x, y, width, height], optional) – A Rectangle that limits the plot to its boundaries (default: None).
Returns: ((Matplotlib.Axes, Matplotlib.Axes), [Matplotlib.Axes]) – Returns a tuple of first a tuple with the plotted scalp and its colorbar, then a list of all on top plotted timeintervals.
wyrm.plot.plot_spatio_temporal_r2_values(dat)¶
Calculate the signed r**2 values and plot them in a heatmap.
Parameters: dat (Data) – epoched data
wyrm.plot.plot_tenten(data, highlights=None, hcolors=None, legend=False, scale=True, reg_chans=None)¶
Plots channels on a grid system.
Iterates over every channel in the data structure. If the channel name matches a channel in the tenten-system, it will be plotted in a grid of rectangles. The grid is structured like the tenten-system itself, but in a simplified manner. The rows in which channels appear are predetermined; the channels are ordered automatically within their respective row. Areas to highlight can be specified; those areas will be marked with colors in every timeinterval plot.
Parameters: - data (wyrm.types.Data) – Data object containing the data to plot.
- highlights ([[int, int)]) – List of tuples containing the start point (included) and end point (excluded) of each area to be highlighted (default: None).
- hcolors ([colors], optional) – A list of colors to use for the highlight areas (default: None).
- legend (Boolean, optional) – Flag to switch plotting of the legend on or off (default: False).
- scale (Boolean, optional) – Flag to switch plotting of a scale in the top right corner of the grid (default: True)
- reg_chans ([regular expressions]) – A list of regular expressions. The plot will be limited to those channels matching the regular expressions.
Returns: [Matplotlib.Axes], Matplotlib.Axes – Returns the plotted timeinterval axes as a list of Matplotlib.Axes and the plotted scale as a single Matplotlib.Axes.
Examples
Plotting of all channels within a Data object
>>> plot_tenten(data)
Plotting of all channels with a highlighted area
>>> plot_tenten(data, highlights=[[200, 400]])
Plotting of all channels beginning with ‘A’
>>> plot_tenten(data, reg_chans=['A.*'])
wyrm.plot.plot_timeinterval(data, r_square=None, highlights=None, hcolors=None, legend=True, reg_chans=None, position=None)¶
Plots a simple time interval.
Plots all channels of either continuous data or the mean of epoched data into a single timeinterval plot.
Parameters: - data (wyrm.types.Data) – Data object containing the data to plot.
- r_square ([values], optional) – List containing r_squared values to be plotted beneath the main plot (default: None).
- highlights ([[int, int)]) – List of tuples containing the start point (included) and end point (excluded) of each area to be highlighted (default: None).
- hcolors ([colors], optional) – A list of colors to use for the highlights areas (default: None).
- legend (Boolean, optional) – Flag to switch plotting of the legend on or off (default: True).
- reg_chans ([regular expression], optional) – A list of regular expressions. The plot will be limited to those channels matching the regular expressions. (default: None).
- position ([x, y, width, height], optional) – A Rectangle that limits the plot to its boundaries (default: None).
Returns: Matplotlib.Axes or (Matplotlib.Axes, Matplotlib.Axes) – The Matplotlib.Axes corresponding to the plotted timeinterval and, if provided, the Axes corresponding to r_squared values.
Examples
Plots all channels contained in data with a legend.
>>> plot_timeinterval(data)
Same as above, but without the legend.
>>> plot_timeinterval(data, legend=False)
Adds r-square values to the plot.
>>> plot_timeinterval(data, r_square=[values])
Adds a highlighted area to the plot.
>>> plot_timeinterval(data, highlights=[[200, 400]])
To specify the colors of the highlighted areas use ‘hcolors’.
>>> plot_timeinterval(data, highlights=[[200, 400]], hcolors=['red'])
wyrm.plot.set_highlights(highlights, hcolors=None, set_axes=None)¶
Sets highlights in the form of vertical boxes on axes.
Parameters:
- highlights ([(start, end)]) – List of tuples containing the start point (included) and end point (excluded) of each area to be highlighted.
- hcolors ([colors], optional) – A list of colors to use for the highlight areas (e.g. 'b', '#eeefff' or [R, G, B] for R, G, B = [0..1]). If left as None, the colors blue, green, red, cyan, magenta and yellow are used.
- set_axes ([matplotlib.axes.Axes], optional) – List of axes to highlight (default: None, all axes of the current figure will be highlighted).
Examples
To create two highlighted areas in all axes of the currently active figure, the first area from 200ms - 300ms in blue and the second area from 500ms - 600ms in green:
>>> set_highlights([[200, 300], [500, 600]])
wyrm.processing module¶
Processing toolbox methods.
This module contains the processing methods.
wyrm.processing.append(dat, dat2, axis=0, extra=None)¶
Append dat2 to dat.
This method creates a copy of dat (with all attributes), and concatenates dat.data and dat2.data along axis, as well as dat.axes[axis] and dat2.axes[axis]. If present, it will concatenate the attributes in extra as well and return the result.
It also checks if the dimensions and lengths of data and axes match, and tests if units and names are equal.
Since append cannot know how to deal with the various attributes dat and dat2 might have, it only copies the attributes of dat and deals with the attributes it knows about, namely: data, axes, names, and units.
Warning
This method is really low level and stupid. It does not know about markers or timeaxes, etc., it just appends two data objects. If you want to append continuous or epoched data, consider using append_cnt() and append_epo().
Parameters:
- dat, dat2 (Data)
- axis (int, optional) – the axis along which to concatenate. The default axis (0) does the right thing for continuous and epoched data, as it concatenates along the time- or the class-axis respectively.
- extra (list of strings, optional) – a list of attributes in dat and dat2 to concatenate as well. Currently the attributes must have the types list or ndarray.
Returns: dat (Data) – a copy of dat with dat2 appended
Raises:
AssertionError – if one of the following is true:
- the dimensions of .data do not match
- names are not equal
- units are not equal
- data.shape[i] are not equal for all i except i == axis
- axes[i] are not equal for all i except i == axis
TypeError – if one of the following is true:
- one of the attributes in extra does not have the same type in dat and dat2
- one of the attributes in extra has an unsupported type
Examples
>>> # concatenate two continuous data objects, and their markers;
>>> # please note how the resulting marker is not correct, just
>>> # appended
>>> cnt.markers
[[0, 'a'], [10, 'b']]
>>> cnt2.markers
[[20, 'c'], [30, 'd']]
>>> cnt = append(cnt, cnt2, extra=['markers'])
>>> cnt.markers
[[0, 'a'], [10, 'b'], [20, 'c'], [30, 'd']]
wyrm.processing.append_cnt(dat, dat2, timeaxis=-2, extra=None)¶
Append two continuous data objects.
This method uses append() to append two continuous data objects. It also takes care that the resulting continuous data will have a correct .axes[timeaxis]. For that it uses the .fs attribute and the length of the data to recalculate the timeaxis.
If both dat and dat2 have the markers attribute, the markers will be treated properly (i.e. by moving the markers of dat2 to the right by the duration of dat in milliseconds).
Parameters:
- dat, dat2 (Data)
- timeaxis (int, optional)
- extra (list of strings, optional)
Returns: dat (Data) – the resulting combination of dat and dat2
Raises: AssertionError – if at least one of the Data parameters does not have the .fs attribute or if the .fs attributes are not equal.
Examples
>>> cnt.axis[0]
[0, 1, 2]
>>> cnt2.axis[0]
[0, 1, 2]
>>> cnt.fs
1000
>>> cnt = append_cnt(cnt, cnt2)
>>> cnt.axis[0]
[0, 1, 2, 3, 4, 5]
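The bookkeeping this function performs, concatenating the blocks, recomputing the timeaxis from .fs, and shifting the second object's markers, can be sketched in plain numpy (illustration only, not wyrm's implementation):

```python
import numpy as np

def append_cnt_sketch(data1, markers1, data2, markers2, fs):
    # Concatenate along time, rebuild the timeaxis from fs, and shift
    # the second block's markers by the duration of the first block.
    data = np.concatenate([data1, data2], axis=0)
    timeaxis = np.arange(len(data)) * 1000.0 / fs
    offset = len(data1) * 1000.0 / fs
    markers = markers1 + [(t + offset, m) for t, m in markers2]
    return data, timeaxis, markers

d1, d2 = np.zeros((3, 2)), np.ones((3, 2))
data, t, markers = append_cnt_sketch(d1, [(0.0, 'a')], d2, [(0.0, 'b')],
                                     fs=1000.0)
```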
wyrm.processing.append_epo(dat, dat2, classaxis=0, extra=None)¶
Append two epoched data objects.
This method just calls append(). In addition to the errors append() might throw, it will raise an error if the class_names attributes are present in both objects and are not equal.
Parameters:
- dat, dat2 (Data)
- classaxis (int, optional)
- extra (list of strings, optional)
Returns: dat (Data)
Raises: ValueError – if both objects have a class_names attribute, they must be equal
Examples
>>> epo = append_epo(epo, epo2)
wyrm.processing.apply_csp(*args, **kwargs)¶
Apply the CSP filter.
Apply the spatial CSP filter to the epoched data.
Parameters:
- epo (epoched Data object) – this method relies on the epo to have three dimensions in the following order: class, time, channel
- filt (2d array) – the CSP filter (i.e. the v return value from calculate_csp())
- columns (array of ints, optional) – the columns of the filter to use. The default is the first and the last one.
Returns: epo (epoched Data object) – The channels from the original have been replaced with the new virtual CSP channels.
Examples
>>> w, a, d = calculate_csp(epo)
>>> epo = apply_csp(epo, w)
wyrm.processing.apply_spatial_filter(dat, w, prefix=None, postfix=None, chanaxis=-1)¶
Apply spatial filter to Data object.
This method applies the spatial filter w to a continuous or epoched Data object.
Depending on the filter w, the original channel names may become meaningless. For that you can either set a prefix (e.g. CSP) and the resulting channels will be renamed to prefix + channel number (e.g. CSP0, CSP1, etc.), or you can set a postfix (e.g. Laplace) and the resulting channels will be renamed to original channel name + postfix (e.g. Cz Laplace, etc.).
If neither prefix nor postfix is set, the channel names will be kept.
Parameters:
- dat (Data) – Data object
- w (2d array) – Spatial filter matrix
- prefix (str, optional) – the channel prefix
- postfix (str, optional) – the channel postfix
- chanaxis (int, optional) – the index of the channel axis
Returns: dat (Data) – The resulting spatially filtered data
Examples
>>> w, _, _ = calculate_csp(epo)
>>> epo_filtered = apply_spatial_filter(epo, w, prefix='CSP ')
>>> epo_filtered.names[-1] = 'CSP Channel'
Raises:
- ValueError – If prefix and postfix are both not None
- TypeError – If prefix or postfix are not None and not str
wyrm.processing.calculate_cca(dat_x, dat_y, timeaxis=-2)¶
Calculate the Canonical Correlation Analysis (CCA).
This method calculates the canonical correlation coefficient and corresponding weights which maximize a correlation coefficient between linear combinations of the two specified multivariable signals.
Parameters:
- dat_x, dat_y (continuous Data object) – these data should have the same length on the time axis.
- timeaxis (int, optional) – the index of the time axis in dat_x and dat_y.
Returns:
- rho (float) – the canonical correlation coefficient.
- w_x, w_y (1d array) – the weights for mapping from the specified multivariable signals to canonical variables.
Raises: AssertionError – If:
- dat_x or dat_y is not a continuous Data object
- the length of dat_x and dat_y differs on the timeaxis
Examples
Calculate the CCA of the specified multivariable signals.
>>> rho, w_x, w_y = calculate_cca(dat_x, dat_y)
>>> # Calculate canonical variables via obtained weights
>>> cv_x = apply_spatial_filter(dat_x, w_x, prefix="CCA ")
>>> cv_y = apply_spatial_filter(dat_y, w_y, prefix="CCA ")
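For intuition, the first canonical correlation can be computed directly from the covariance matrices. The sketch below (plain numpy, not wyrm's implementation) builds two signals that share a hidden source and recovers a correlation close to 1:

```python
import numpy as np

def cca_first(X, Y):
    # X, Y: (time, channels). Returns the first canonical correlation
    # and the corresponding weight vectors.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Cxx, Cyy = np.cov(X.T), np.cov(Y.T)
    Cxy = X.T.dot(Y) / (len(X) - 1)
    # Eigenvectors of Cxx^-1 Cxy Cyy^-1 Cyx give the x-side weights.
    M = np.linalg.solve(Cxx, Cxy).dot(np.linalg.solve(Cyy, Cxy.T))
    evals, evecs = np.linalg.eig(M)
    w_x = evecs[:, np.argmax(evals.real)].real
    w_y = np.linalg.solve(Cyy, Cxy.T).dot(w_x)
    rho = abs(np.corrcoef(X.dot(w_x), Y.dot(w_y))[0, 1])
    return rho, w_x, w_y

rng = np.random.RandomState(0)
s = rng.randn(500)                                   # shared hidden source
X = np.c_[s + 0.1 * rng.randn(500), rng.randn(500)]
Y = np.c_[rng.randn(500), s + 0.1 * rng.randn(500)]
rho, w_x, w_y = cca_first(X, Y)
```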
wyrm.processing.calculate_classwise_average(dat, classaxis=0)¶
Calculate the classwise average.
This method calculates the average continuous data per class for all classes defined in dat. In other words, if you have two different classes, with many continuous data per class, this method will calculate the average time course for each class and channel.
Parameters:
- dat (Data) – an epoched Data object with a .class_names attribute.
- classaxis (int, optional) – the axis along which to calculate the average
Returns: dat (Data) – copy of dat with the classaxis dimension reduced to the number of different classes.
Raises: AssertionError – if dat has no .class_names attribute.
Examples
Split existing continuous data into two classes and calculate the average for each class.
>>> mrk_def = {'std': ['S %2i' % i for i in range(2, 7)],
...            'dev': ['S %2i' % i for i in range(12, 17)]
...            }
>>> epo = misc.segment_dat(cnt, mrk_def, [0, 660])
>>> avg_epo = calculate_classwise_average(epo)
>>> plot(avg_epo.data[0])
>>> plot(avg_epo.data[1])
wyrm.processing.calculate_csp(epo, classes=None)¶
Calculate the Common Spatial Pattern (CSP) for two classes.
This method calculates the CSP and the corresponding filters. Use the columns of the patterns and filters.
Examples
Calculate the CSP for the first two classes:
>>> w, a, d = calculate_csp(epo)
>>> # Apply the first two and the last two columns of the sorted
>>> # filter to the data
>>> filtered = apply_spatial_filter(epo, w[:, [0, 1, -2, -1]])
>>> # You'll probably want to get the log-variance along the time
>>> # axis; this should result in four numbers (one for each
>>> # channel)
>>> filtered = np.log(np.var(filtered, 0))
Select two classes manually:
>>> w, a, d = calculate_csp(epo, [2, 5])
Parameters:
- epo (epoched Data object) – this method relies on the epo to have three dimensions in the following order: class, time, channel
- classes (list of two ints, optional) – if None, the first two different class indices found in epo.axes[0] are chosen automatically; otherwise the class indices can be chosen manually by setting classes
Returns:
- v (2d array) – the sorted spatial filters
- a (2d array) – the sorted spatial patterns. Column i of a represents the pattern of the filter in column i of v.
- d (1d array) – the variances of the components
Raises: AssertionError – If:
- classes is not None and has fewer than two elements
- classes is not None and the first two elements are not found in the epo
- classes is None but there are fewer than two different classes in the epo
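The core computation of CSP can be sketched as a generalized eigendecomposition of the two class covariance matrices. This is a simplified numpy illustration, not wyrm's implementation:

```python
import numpy as np

def csp_sketch(X1, X2):
    # X1, X2: (trials, time, channels) epochs of the two classes.
    c1 = np.mean([np.cov(trial.T) for trial in X1], axis=0)
    c2 = np.mean([np.cov(trial.T) for trial in X2], axis=0)
    # Generalized eigenproblem: c1 w = d (c1 + c2) w.
    d, v = np.linalg.eig(np.linalg.solve(c1 + c2, c1))
    order = np.argsort(d.real)[::-1]        # largest eigenvalue first
    d, v = d.real[order], v.real[:, order]
    a = np.linalg.pinv(v).T                 # patterns from filters
    return v, a, d

rng = np.random.RandomState(42)
# Channel 0 carries high variance in class 1, channel 1 in class 2.
X1 = rng.randn(20, 100, 2) * np.array([3.0, 1.0])
X2 = rng.randn(20, 100, 2) * np.array([1.0, 3.0])
v, a, d = csp_sketch(X1, X2)
```

The first column of v then emphasizes the channel whose variance dominates in class 1, and the last column the channel dominating in class 2.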
wyrm.processing.calculate_signed_r_square(dat, classaxis=0)¶
Calculate the signed r**2 values.
This method calculates the signed r**2 values over the epochs of dat.
Parameters:
- dat (Data) – epoched data
- classaxis (int, optional) – the axis to be treated as the classaxis
Returns: signed_r_square (ndarray) – the signed r**2 values; signed_r_square has one axis less than the dat parameter, as the classaxis has been removed
Examples
>>> dat.data.shape
(400, 100, 64)
>>> r = calculate_signed_r_square(dat)
>>> r.shape
(100, 64)
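For two classes, the signed r**2 is the squared point-biserial correlation between class membership and the signal, with the sign of the mean difference retained. A numpy sketch for the two-class case (an illustration; wyrm's implementation generalizes over axes):

```python
import numpy as np

def signed_r_square(a, b):
    # a, b: (epochs, features) arrays for the two classes.
    n_a, n_b = len(a), len(b)
    pooled = np.concatenate([a, b], axis=0)
    # Point-biserial correlation, then square while keeping the sign.
    r = (np.sqrt(n_a * n_b) / (n_a + n_b)
         * (a.mean(axis=0) - b.mean(axis=0)) / pooled.std(axis=0))
    return np.sign(r) * r ** 2

rng = np.random.RandomState(0)
a = rng.randn(200, 4) + np.array([1.0, 0.0, -1.0, 0.0])  # per-feature offsets
b = rng.randn(200, 4)
r2 = signed_r_square(a, b)
```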
-
wyrm.processing.
calculate_spoc
(epo)[source]¶ Compute source power co-modulation analysis (SPoC)
Computes spatial filters that optimize the co-modulation (here covariance) between the epoch-wise variance (as a proxy for spectral power) and a given target signal.
This SPoc function returns a full set of components (i.e. filters and patterns) of which the first component maximizes the co-modulation (i.e. positive covariance) and the last component minimizes it (i.e. maximizes negative covariance).
Note
Since the covariance is optimized, it may be affected by outliers in the data (i.e. trials/epochs with very large variance that is due to artifacts). Please remove be sure to remove these epochs if possible before calling this function!
Parameters: epo (epoched Data oject) – this method relies on the epo
to have three dimensions in the following order: class, time, channel. The data in epo should be band-pass filtered for the frequency band of interest. The values of the target variable (i.e.epo.axes[0]
) must be present in epo.Returns: - v (2d array) –
The spatial filters optimized by the
SPoC_lambda
algorithm. Each column in the matrix is a filter. - a (2d array) –
The spatial activation patterns that correspond to the filters
in
v
. Each column is a spatial pattern. when visualizing the SPoC components as scalp maps, plot the spatial patterns and not the filters. See also [haufe2014]. - d (1d array) –
The lambda values that correspond to the filters/patterns in
v
anda
, sorted from largest (positive covariance) to smallest (negative covariance).
Examples
Split data in training and test set
Calculate SPoC:
>>> w, a, d = calculate_spoc(epo)
Identify the components with strongest co-modulation by checking the covariance values stored in
d
. If there is positive covariance with the target variable it will be the first, otherwise the last:>>> w = w[:, 0]
Apply the filter(s) to the test data:
>>> filtered = np.dot(data, w)
Notes
SPoC assumes that there is a linear relationship between a measured target signal and the dynamics of the spectral power of an oscillatory source that is hidden in the data. The target signal may be a stimulus property (e.g. intensity, frequency, color, ...), a behavioral measure (e.g. reaction times, ratings, ...), or any other uni-variate signal of interest. The time-course of spectral power of the oscillatory source signal is approximated by variance across small time segments (epochs). Thus, if the power of a specific frequency band is investigated, the input signals must be band-pass filtered before they are segmented into epochs and given to this function. This method implements
SPoC_lambda
, presented in [daehne2014]. Thus, source activity is extracted from the input data via spatial filtering. The spatial filters are optimized such that the epoch-wise variance maximally covaries with the given target signal z
.See also
References
[daehne2014] S. Dähne, F. C. Meinecke, S. Haufe, J. Höhne, M. Tangermann, K. R. Müller, V. V. Nikulin “SPoC: a novel framework for relating the amplitude of neuronal oscillations to behaviorally relevant parameters”, NeuroImage, 86(0):111-122, 2014 [haufe2014] S. Haufe, F. Meinecke, K. Görgen, S. Dähne, J. Haynes, B. Blankertz, F. Biessmann, “On the interpretation of weight vectors of linear models in multivariate neuroimaging”, NeuroImage, 87:96-110, 2014 - v (2d array) –
-
wyrm.processing.
calculate_whitening_matrix
(dat)[source]¶ Calculate whitening matrix from continuous data.
When applying the whitening matrix to continuous data, the covariance matrix of the whitened data should have all 1s on the diagonal and 0s everywhere else.
Parameters: dat (Data) – continuous Data object Returns: A (2d array) – the shape of the array is (channel, channel)
Examples
>>> a = calculate_whitening_matrix(cnt) >>> cnt_whitened = apply_spatial_filter(cnt, a) >>> # rename channels accordingly
See also
-
wyrm.processing.
clear_markers
(dat, timeaxis=-2)[source]¶ Remove markers that are outside of the
dat
time interval.This method removes the markers that are out of the time interval described in the
dat
object.If the
dat
object has no markers
attribute or the markers are empty, simply a copy of dat
is returned. If
dat.data
is empty, but has markers, all markers are removed.Parameters: - dat (Data)
- timeaxis (int, optional)
Returns: dat (Data) – a copy of the Data object, with the respective markers removed
Raises: AssertionError
– if the given dat
has no fs
attribute. Examples
>>> dat.axes[0] array([-5., -4., -3., -2., -1., 0., 1., 2., 3., 4.]) >>> dat.fs 1000 >>> dat.markers [[-6, 'a'], [-5, 'b'], [0, 'c'], [4.9999, 'd'], [5, 'e']] >>> dat = clear_markers(dat) >>> dat.markers [[-5, 'b'], [0, 'c'], [4.9999, 'd']]
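The interval logic of the example above can be sketched in plain Python. This is a simplified stand-in for illustration, not the actual implementation; it assumes markers are [time_ms, label] pairs and keeps those inside the half-open interval [start, stop):

```python
def clear_markers_sketch(markers, start, stop):
    """Keep only markers whose timestamp lies inside [start, stop)."""
    return [[t, label] for t, label in markers if start <= t < stop]

markers = [[-6, 'a'], [-5, 'b'], [0, 'c'], [4.9999, 'd'], [5, 'e']]
# the time axis in the example starts at -5 ms and covers 10 samples
# at 1000 Hz, so the valid interval is [-5, 5)
print(clear_markers_sketch(markers, -5, 5))
```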
-
wyrm.processing.
correct_for_baseline
(dat, ival, timeaxis=-2)[source]¶ Subtract the baseline.
For each epoch and channel in the given dat, this method calculates the average value for the given interval and subtracts this value from the channel data within this epoch and channel.
This method generalizes to dats with more than 3 dimensions.
Parameters: - dat (Dat)
- ival (array of two floats) –
the start and stop borders in milliseconds. The left border is
included, the right border is not:
[start, stop)
.ival[0]
must fit intodat.axes[timeaxis]
andival[0] <= ival[1]
. - timeaxis (int, optional) – the axis along which to correct for the baseline
Returns: dat (Dat) – a copy of
dat
with the averages of the intervals subtracted.Examples
Remove the baselines for the interval
[100, 0)
>>> dat = correct_for_baseline(dat, [-100, 0])
Notes
The algorithm calculates the average(s) along the
timeaxis
within the given interval. The resulting array has one dimension less than the original one (the elements on timeaxis
were reduced). The resulting average array is then subtracted from the original data. To match the shape, a new axis is created on
timeaxis
of avgarray. And the shapes are then matched via numpy’s broadcasting.Raises: AssertionError
– If the left border ofival
is outside ofdat.axes[timeaxis]
or ifival[1] < ival[0]
.See also
numpy.average()
,numpy.expand_dims()
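For a single channel, the averaging-and-subtracting described in the Notes boils down to the following plain-Python sketch (the real implementation is vectorized over all epochs and channels via numpy broadcasting):

```python
def baseline_correct_sketch(samples, times, ival):
    """Subtract the mean over [ival[0], ival[1]) from one channel's samples."""
    sel = [x for t, x in zip(times, samples) if ival[0] <= t < ival[1]]
    avg = sum(sel) / len(sel)
    return [x - avg for x in samples]

times = [-100, -50, 0, 50]         # milliseconds
samples = [1.0, 3.0, 10.0, 12.0]   # one channel of one epoch
print(baseline_correct_sketch(samples, times, [-100, 0]))
```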
-
wyrm.processing.
create_feature_vectors
(dat, classaxis=0)[source]¶ Create feature vectors from epoched data.
This method flattens a
Data
object down to 2 dimensions: the first one for the classes and the second for the feature vectors. All surplus dimensions of the dat
argument are collapsed into the feature vector of the appropriate class. Parameters: - dat (Data)
- classaxis (int, optional) – the index of the class axis
Returns: dat (Data) – a copy of
dat
with reshaped to 2 dimensions and with the classaxis moved to dimension 0Examples
>>> dat.shape (300, 2, 64) >>> dat = create_feature_vectors(dat) >>> dat.shape (300, 128)
-
wyrm.processing.
filtfilt
(dat, b, a, timeaxis=-2)[source]¶ A forward-backward filter.
Filter data twice, once forward and once backwards, using the filter defined by the filter coefficients.
This method mainly delegates the call to
scipy.signal.filtfilt()
.Parameters: - dat (Data) – the data to be filtered
- b (1-d array) – the numerator coefficient vector
- a (1-d array) – the denominator coefficient vector
- timeaxis (int, optional) –
the axis in
data
to filter along
Returns: dat (Data) – the filtered output
See also
Examples
Generate and use a Butterworth bandpass filter for complete (off-line data):
>>> # the sampling frequency of our data in Hz >>> dat.fs 100 >>> # calculate the nyquist frequency >>> fn = dat.fs / 2 >>> # the desired low and high frequencies in Hz >>> f_low, f_high = 2, 13 >>> # the order of the filter >>> butter_ord = 4 >>> # calculate the filter coefficients >>> b, a = signal.butter(butter_ord, [f_low / fn, f_high / fn], btype='band') >>> filtered = filtfilt(dat, b, a)
-
wyrm.processing.
jumping_means
(dat, ivals, timeaxis=-2)[source]¶ Calculate the jumping means.
Parameters: - dat (Data)
- ivals (array of [float, float]) –
the intervals for which to calculate the means. Start is
included, end is not (like
[start, end)
). - timeaxis (int, optional) – the axis along which to calculate the jumping means
Returns: dat (Data) – copy of
dat
with the jumping means along thetimeaxis
.dat.name[timeaxis]
anddat.axes[timeaxis]
are modified too, to reflect the intervals used for the data points.
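For a single channel the computation can be sketched in plain Python (a simplified stand-in, not the actual implementation; times in milliseconds, intervals half-open as described above):

```python
def jumping_means_sketch(samples, times, ivals):
    """One mean per interval [start, stop); start included, stop excluded."""
    out = []
    for start, stop in ivals:
        sel = [x for t, x in zip(times, samples) if start <= t < stop]
        out.append(sum(sel) / len(sel))
    return out

times = [0, 10, 20, 30]            # ms
samples = [1.0, 3.0, 5.0, 7.0]     # one channel
print(jumping_means_sketch(samples, times, [[0, 20], [20, 40]]))
```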
-
wyrm.processing.
lda_apply
(fv, clf)[source]¶ Apply feature vector to LDA classifier.
Parameters: - fv (
Data
object) – the feature vector must have a 2 dimensional data, the first dimension being the class axis. - clf ((1d array, float))
Returns: out (1d array) – The projection of the data on the hyperplane.
Examples
>>> clf = lda_train(fv_train) >>> out = lda_apply(fv_test, clf)
See also
- fv (
-
wyrm.processing.
lda_train
(fv, shrink=False)[source]¶ Train the LDA classifier.
Parameters: - fv (
Data
object) – the feature vector must have 2 dimensional data, the first dimension being the class axis. The unique class labels must be 0 and 1 otherwise aValueError
will be raised. - shrink (Boolean, optional) – use shrinkage
Returns: - w (1d array)
- b (float)
Raises: ValueError
– if the class labels are not exactly 0s and 1s. Examples
>>> clf = lda_train(fv_train) >>> out = lda_apply(fv_test, clf)
See also
- fv (
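Applying the trained classifier amounts to a dot product plus bias per feature vector. A sketch with hypothetical weights (the real lda_apply operates on a whole feature-vector Data object at once):

```python
def lda_project_sketch(w, b, x):
    """Project one feature vector onto the LDA hyperplane: w . x + b."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

w, b = [0.5, -0.25], 0.1           # hypothetical trained weights and bias
x = [2.0, 4.0]                     # one feature vector
out = lda_project_sketch(w, b, x)
label = 1 if out > 0 else 0        # the sign of the projection decides the class
print(out, label)
```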
-
wyrm.processing.
lfilter
(dat, b, a, zi=None, timeaxis=-2)[source]¶ Filter data using the filter defined by the filter coefficients.
This method mainly delegates the call to
scipy.signal.lfilter()
.Parameters: - dat (Data) – the data to be filtered
- b (1-d array) – the numerator coefficient vector
- a (1-d array) – the denominator coefficient vector
- zi (nd array, optional) –
the initial conditions for the filter delay. If zi is
None
or not given, initial rest is assumed. - timeaxis (int, optional) –
the axis in
data
to filter along
Returns: dat (Data) – the filtered output
See also
lfilter_zi()
,filtfilt()
,scipy.signal.lfilter()
,scipy.signal.butter()
,scipy.signal.buttord()
Examples
Generate and use a Butterworth bandpass filter for complete (off-line data):
>>> # the sampling frequency of our data in Hz >>> dat.fs 100 >>> # calculate the nyquist frequency >>> fn = dat.fs / 2 >>> # the desired low and high frequencies in Hz >>> f_low, f_high = 2, 13 >>> # the order of the filter >>> butter_ord = 4 >>> # calculate the filter coefficients >>> b, a = signal.butter(butter_ord, [f_low / fn, f_high / fn], btype='band') >>> filtered = lfilter(dat, b, a)
Similar to the above this time in an on-line setting:
>>> # pre-calculate the filter coefficients and the initial filter >>> # state >>> b, a = signal.butter(butter_ord, [f_low / fn, f_high / fn], btype='band') >>> zi = proc.lfilter_zi(b, a, len(CHANNELS)) >>> while 1: ... data, markers = amp.get_data() ... # convert incoming data into ``Data`` object ... cnt = Data(data, ...) ... # filter the data, note how filter now also returns the ... # filter state which we feed back into the next call of ... # ``filter`` ... cnt, zi = lfilter(cnt, b, a, zi=zi) ... ...
-
wyrm.processing.
lfilter_zi
(b, a, n=1)[source]¶ Compute an initial state
zi
for thelfilter()
function.When
n == 1
(default), this method mainly delegates the call toscipy.signal.lfilter_zi()
and returns the resultzi
. Ifn > 1
,zi
is repeatedn
times. This is useful if you want to filter n-dimensional data like multi channel EEG.Parameters: - b, a (1-d array) – The IIR filter coefficients
- n (int, optional) –
The desired width of the output vector. If
n == 1
the output is simply the 1d zi vector. Forn > 1
, the zi vector is repeatedn
times.
Returns: zi (n-d array) – The initial state of the filter.
See also
lfilter()
,scipy.signal.lfilter_zi()
Examples
>>> # pre-calculate the filter coefficients and the initial filter >>> # state >>> b, a = signal.butter(butter_ord, [f_low / fn, f_high / fn], btype='band') >>> zi = proc.lfilter_zi(b, a, len(CHANNELS)) >>> while 1: ... data, markers = amp.get_data() ... # convert incoming data into ``Data`` object ... cnt = Data(data, ...) ... # filter the data, note how filter now also returns the ... # filter state which we feed back into the next call of ... # ``filter`` ... cnt, zi = lfilter(cnt, b, a, zi=zi) ... ...
-
wyrm.processing.
logarithm
(dat)[source]¶ Computes the element-wise natural logarithm of
dat.data
.Calling this method is equivalent to calling
>>> dat.copy(data=np.log(dat.data))
Parameters: dat (Data) – a Data object Returns: dat (Data) – a copy of dat
with the element wise natural logarithms of the values in.data
See also
-
wyrm.processing.
rectify_channels
(dat)[source]¶ Calculate the absolute values in
dat.data
.Parameters: dat (Data) Returns: dat (Data) – a copy of dat
with all values absolute in.data
Examples
>>> print np.average(dat.data) 0.391987338917 >>> dat = rectify_channels(dat) >>> print np.average(dat.data) 22.40234266
-
wyrm.processing.
remove_channels
(*args, **kwargs)[source]¶ Remove channels from data.
This method just calls
select_channels()
with the same parameters and the invert
parameter set to True
.Returns: dat (Data) – A copy of the dat with the channels removed. See also
select_channels()
- Select Channels
-
wyrm.processing.
remove_classes
(*args, **kwargs)[source]¶ Remove classes from an epoched Data object.
This method just calls
select_epochs()
with the invert
parameter set to True
.Returns: dat (Data) – copy of Data object with the classes removed See also
-
wyrm.processing.
remove_epochs
(*args, **kwargs)[source]¶ Remove epochs from an epoched Data object.
This method just calls
select_epochs()
with the invert
parameter set to True
.Returns: dat (Data) – epoched Data object with the epochs removed See also
-
wyrm.processing.
rereference
(dat, chan, chanaxis=-1)[source]¶ Rereference all channels against a single channel
This method rereferences all channels against a single channel by subtracting the sample values of a reference channel. The channel that is used as reference will consequently have all values set to zero.
Parameters: - dat (Data) – continuous or epoched Data
- chan (str) – the name of the channel to use as reference. The name is case insensitive
- chanaxis (int, optional) – the axis that contains the channels
Returns: dat (Dat) – a copy of
dat
with the channels rereferencedExamples
Rereference the data against
chan0
>>> dat = rereference(dat, 'chan0')
Raises: ValueError
– if chan
is not in dat
-
wyrm.processing.
segment_dat
(dat, marker_def, ival, newsamples=None, timeaxis=-2)[source]¶ Convert a continuous data object to an epoched one.
Given a continuous data object, a definition of classes, and an interval, this method looks for markers as defined in
marker_def
and slices the dat according to the time interval given withival
along thetimeaxis
. The returneddat
object stores those slices and the class each slice belongs to. Epochs that are too close to the borders and thus too short are ignored.
If the segmentation does not result in any epochs (i.e. the markers in
marker_def
could not be found in dat
), the resulting dat.data will be an empty array. This method is also suitable for online processing; please read the documentation for the
newsamples
parameter and have a look at the Examples below. Parameters: dat (Data) – the data object to be segmented
marker_def (dict) – The keys are class names, the values are lists of markers
ival ([int, int]) – The interval in milliseconds to cut around the markers. I.e. to get the interval starting with the marker plus the remaining 100ms define the interval like [0, 100]. The start point is included, the endpoint is not (like:
[start, end)
). To get 200ms before the marker until 100ms after the marker do:[-200, 100]
Only negative or positive values are possible (i.e.[-500, -100]
)newsamples (int, optional) – consider the last
newsamples
samples as new data and only return epochs which are possible with the old and the new data (i.e. don’t include epochs which were possible without the new data). If this parameter is
None
(default)segment_dat
will always process the wholedat
, this is what you want for offline experiments where you process the whole data from a file at once. In online experiments, however, one usually gets the data incrementally and stores it in a ringbuffer to get the last n milliseconds. Consequently segment_dat
gets overlapping data in each iteration (the amount of overlap is exactly the old data minus the new samples). To make sure each epoch appears only once within all iterations, segment_dat
needs to know the number of new samples.timeaxis (int, optional) – the axis along which the segmentation will take place
Returns: dat (Data) – a copy of the resulting epoched data.
Raises: AssertionError
–- if
dat
has not.fs
or.markers
attribute or ifival[0] > ival[1]
. - if
newsamples
is notNone
or positive
Examples
Offline Experiment
>>> # Define the markers belonging to class 1 and 2 >>> md = {'class 1': ['S1', 'S2'], ... 'class 2': ['S3', 'S4'] ... } >>> # Epoch the data -500ms and +700ms around the markers defined in >>> # md >>> epo = segment_dat(cnt, md, [-500, 700])
Online Experiment
>>> # Define the markers belonging to class 1 and 2 >>> md = {'class 1': ['S1', 'S2'], ... 'class 2': ['S3', 'S4'] ... } >>> # define the interval to epoch around a marker >>> ival = [0, 300] >>> while 1: ... dat, mrk = amp.get_data() ... newsamples = len(dat) ... # the ringbuffer shall keep the last 2000 milliseconds, ... # which is way bigger than our ival... ... ringbuffer.append(dat, mrk) ... cnt, mrk = ringbuffer.get() ... # cnt contains now data up to 2000 millisecons, to make sure ... # we don't see old markers again and again until they where ... # pushed out of the ringbuffer, we need to tell segment_dat ... # how many samples of cnt are actually new ... epo = segment_dat(cnt, md, ival, newsamples=newsamples)
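The offline slicing logic can be sketched as follows. This is a simplified, hypothetical stand-in working on sample indices instead of Data objects, without the newsamples bookkeeping; epochs whose window does not fit into the data are skipped, as described above:

```python
def segment_sketch(n_samples, fs, markers, marker_def, ival):
    """Return (class_index, start_sample, stop_sample) per usable marker."""
    classes = sorted(marker_def)
    epochs = []
    for t_ms, label in markers:
        for i, cls in enumerate(classes):
            if label in marker_def[cls]:
                # convert milliseconds relative to the marker into samples
                start = int((t_ms + ival[0]) * fs / 1000)
                stop = int((t_ms + ival[1]) * fs / 1000)
                if 0 <= start and stop <= n_samples:
                    epochs.append((i, start, stop))
    return epochs

md = {'class 1': ['S1'], 'class 2': ['S3']}
markers = [[100, 'S1'], [400, 'S3'], [995, 'S1']]   # times in ms
# 1000 samples at 1000 Hz; the last marker is too close to the border
print(segment_sketch(1000, 1000, markers, md, [-50, 100]))
```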
-
wyrm.processing.
select_channels
(dat, regexp_list, invert=False, chanaxis=-1)[source]¶ Select channels from data.
The matching is case-insensitive and locale-aware (as in
re.IGNORECASE
and re.LOCALE
). The regular expression always has to match the whole channel name string. Parameters: dat (Data)
regexp_list (list of regular expressions) – The regular expressions provided are used directly by Python’s
re
module, so all regular expressions which are understood by this module are allowed.Internally the
re.match()
method is used; since match() also matches substrings, it is additionally checked whether the whole string matched the pattern. invert (Boolean, optional) – If True the selection is inverted. Instead of selecting specific channels, you are removing the channels. (default: False)
chanaxis (int, optional) – the index of the channel axis in
dat
(default: -1)
Returns: dat (Data) – A copy of
dat
with the channels, matched by the list of regular expressions.Examples
Select all channels matching ‘af.*’ or ‘fc.*’
>>> dat_new = select_channels(dat, ['af.*', 'fc.*'])
Remove all channels matching ‘emg.*’ or ‘eog.*’
>>> dat_new = select_channels(dat, ['emg.*', 'eog.*'], invert=True)
Even if you only provide one regular expression, it has to be in a list:
>>> dat_new = select_channels(dat, ['af.*'])
See also
remove_channels()
- Remove Channels
re()
- Python’s Regular Expression module for more information about regular expressions.
-
wyrm.processing.
select_classes
(dat, indices, invert=False, classaxis=0)[source]¶ Select classes from an epoched data object.
This method selects the classes with the specified indices.
Parameters: - dat (Data) – epoched Data object
- indices (array of ints) – The indices of the classes to select.
- invert (Boolean, optional) –
if true keep all classes except the ones defined by
indices
. - classaxis (int, optional) – the axis along which the classes are selected
Returns: dat (Data) – a copy of the epoched data with only the selected classes included.
Raises: AssertionError
– ifdat
has no.class_names
attribute.See also
Examples
Get the classes 1 and 2.
>>> dat.axes[0] [0, 0, 1, 2, 2] >>> dat = select_classes(dat, [1, 2]) >>> dat.axes[0] [1, 2, 2]
Remove class 2
>>> dat.axes[0] [0, 0, 1, 2, 2] >>> dat = select_classes(dat, [2], invert=True) >>> dat.axes[0] [0, 0, 1]
-
wyrm.processing.
select_epochs
(dat, indices, invert=False, classaxis=0)[source]¶ Select epochs from an epoched data object.
This method selects the epochs with the specified indices.
Parameters: - dat (Data) –
epoched Data object with an
.class_names
attribute - indices (array of ints) – The indices of the elements to select.
- invert (Boolean, optional) –
if true keep all elements except the ones defined by
indices
. - classaxis (int, optional) – the axis along which the epochs are selected
Returns: dat (Data) – a copy of the epoched data with only the selected epochs included.
Raises: AssertionError
– ifdat
has no.class_names
attribute.See also
Examples
Get the first three epochs.
>>> dat.axes[0] [0, 0, 1, 2, 2] >>> dat = select_epochs(dat, [0, 1, 2]) >>> dat.axes[0] [0, 0, 1]
Remove the fourth epoch
>>> dat.axes[0] [0, 0, 1, 2, 2] >>> dat = select_epochs(dat, [3], invert=True) >>> dat.axes[0] [0, 0, 1, 2]
-
wyrm.processing.
select_ival
(dat, ival, timeaxis=-2)[source]¶ Select interval from data.
This method selects the time segment(s) defined by
ival
. It will also automatically remove markers outside of the desired interval in the returned Data object.Parameters: - dat (Data)
- ival (list of two floats) –
Start and end in milliseconds. Start is included, end is excluded
(like
[start, end)
] - timeaxis (int, optional) – the axis along which the intervals are selected
Returns: dat (Data) – a copy of
dat
with the selected time intervals.Raises: AssertionError
– if the given interval does not fit intodat.axes[timeaxis]
orival[0] > ival[1]
.Examples
Select the first 200ms of the epoched data:
>>> dat.fs 100. >>> dat2 = select_ival(dat, [0, 200]) >>> print dat2.t[0], dat2.t[-1] 0. 199.
-
wyrm.processing.
sort_channels
(dat, chanaxis=-1)[source]¶ Sort channels.
This method sorts the channels in the
dat
according to the 10-20 system, from frontal to occipital and within the rows from left to right. The method uses theCHANNEL_10_20
list and relies on the elements in that list to be sorted correctly. This method will put unknown channel names to the back of the resulting list.The channel matching is case agnostic.
Parameters: - dat (
Data
object) - chanaxis (int, optional) –
the index of the channel axis in
dat
Returns: dat (
Data
object) – a copy of thedat
parameter with the channels and data sorted.Examples
>>> dat.axes[-1] array(['PPO4' 'CP4' 'PCP1' 'F5' 'C3' 'C4' 'O1' 'PPO2' 'FFC2' 'FAF5' 'PO1' 'TP10' 'FAF1' 'FFC6' 'FFC1' 'PO10' 'O10' 'C1' 'Cz' 'F2' 'CFC1' 'CCP2' 'F4' 'PO9' 'CFC6' 'TP7' 'FC6' 'AF8' 'Fz' 'AF4' 'PCP9' 'F6' 'FT10' 'FAF6' 'PO5' 'O2' 'OPO2' 'AF5' 'C2' 'P4' 'TP9' 'PCP7' 'FT8' 'A2' 'PO6' 'FC3' 'PPO1' 'CCP8' 'OPO1' 'AFp2' 'OI2' 'OI1' 'FCz' 'CCP6' 'CCP1' 'CPz' 'POz' 'FFC3' 'FFC7' 'FC2' 'F1' 'FT9' 'P2' 'P10' 'T9' 'FC1' 'C5' 'T7' 'CFC4' 'P6' 'F8' 'TP8' 'CFC5' 'PCP8' 'CFC9' 'AF7' 'FC5' 'I1' 'CFC8' 'FFC8' 'Oz' 'Pz' 'PCP4' 'FAF2' 'PCP5' 'CP1' 'PCP3' 'P1' 'Iz' 'CCP5' 'PO2' 'PCP2' 'PO4' 'Fpz' 'F7' 'PO8' 'AFz' 'F10' 'FFC10' 'CCP3' 'PPO8' 'T10' 'AF6' 'F9' 'PPO5' 'CP6' 'I2' 'PPO7' 'FC4' 'CCP4' 'PO7' 'A1' 'CP2' 'CFC3' 'T8' 'PPO3' 'Fp2' 'PCP6' 'AFp1' 'C6' 'FFC9' 'FT7' 'AF3' 'Fp1' 'CFC10' 'CCP7' 'CFC7' 'PO3' 'P7' 'P9' 'FFC4' 'P5' 'CFC2' 'F3' 'CP3' 'PPO6' 'P3' 'O9' 'PCP10' 'P8' 'CP5' 'FFC5'], dtype='|S5') >>> dat = sort_channels(dat) >>> dat.axes[-1] array(['Fpz', 'Fp1', 'AFp1', 'AFp2', 'Fp2', 'AF7', 'AF5', 'AF3', 'AFz', 'AF4', 'AF6', 'AF8', 'FAF5', 'FAF1', 'FAF2', 'FAF6', 'F9', 'F7', 'F5', 'F3', 'F1', 'Fz', 'F2', 'F4', 'F6', 'F8', 'F10', 'FFC9', 'FFC7', 'FFC5', 'FFC3', 'FFC1', 'FFC2', 'FFC4', 'FFC6', 'FFC8', 'FFC10', 'FT9', 'FT7', 'FC5', 'FC3', 'FC1', 'FCz', 'FC2', 'FC4', 'FC6', 'FT8', 'FT10', 'CFC9', 'CFC7', 'CFC5', 'CFC3', 'CFC1', 'CFC2', 'CFC4', 'CFC6', 'CFC8', 'CFC10', 'T9', 'T7', 'C5', 'C3', 'C1', 'Cz', 'C2', 'C4', 'C6', 'T8', 'T10', 'A1', 'CCP7', 'CCP5', 'CCP3', 'CCP1', 'CCP2', 'CCP4', 'CCP6', 'CCP8', 'A2', 'TP9', 'TP7', 'CP5', 'CP3', 'CP1', 'CPz', 'CP2', 'CP4', 'CP6', 'TP8', 'TP10', 'PCP9', 'PCP7', 'PCP5', 'PCP3', 'PCP1', 'PCP2', 'PCP4', 'PCP6', 'PCP8', 'PCP10', 'P9', 'P7', 'P5', 'P3', 'P1', 'Pz', 'P2', 'P4', 'P6', 'P8', 'P10', 'PPO7', 'PPO5', 'PPO3', 'PPO1', 'PPO2', 'PPO4', 'PPO6', 'PPO8', 'PO9', 'PO7', 'PO5', 'PO3', 'PO1', 'POz', 'PO2', 'PO4', 'PO6', 'PO8', 'PO10', 'OPO1', 'OPO2', 'O9', 'O1', 'O2', 'O10', 'Oz', 'OI1', 'OI2', 'I1', 'Iz', 'I2'], dtype='|S5')
-
wyrm.processing.
spectrogram
(cnt)[source]¶ Calculate the spectrogram of a continuous data object.
See also
-
wyrm.processing.
spectrum
(dat, timeaxis=-2)[source]¶ Calculate the spectrum of a data object.
This method performs a fast Fourier transform on the data along the timeaxis and returns a new Data object which is transformed into the frequency domain. The values are the amplitudes of the respective frequencies.
Parameters: - dat (Data) – Data object with .fs attribute
- timeaxis (int, optional) – axis to perform the fft along
Returns: dat (Data) – Data object with the timeaxis transformed into the frequency domain. The values of the spectrum are the amplitudes of the respective frequencies.
Examples
>>> # dat can be continuous or epoched >>> dat.axes ['time', 'channel'] >>> spm = spectrum(dat) >>> spm.axes ['frequency', 'channel']
Raises: AssertionError
– if thedat
parameter has no.fs
attributeSee also
-
wyrm.processing.
square
(dat)[source]¶ Computes the element-wise square of
dat.data
.Calling this method is equivalent to calling
>>> dat.copy(data=np.square(dat.data))
Parameters: dat (Data) – a Data object Returns: dat (Data) – a copy of dat
with the element wise squares of the values in.data
See also
-
wyrm.processing.
stft
(x, width)[source]¶ Short-time Fourier transform of a real sequence.
This method performs a discrete short-time Fourier transform. It uses a sliding window to perform discrete Fourier transforms on the data in the window. The results are returned in an array.
This method uses a Hanning window on the data in the window before calculating the Fourier transform.
The sliding windows are overlapping by
width / 2
.Parameters: - x (ndarray)
- width (int) – the width of the sliding window in samples
Returns: fourier (2d complex array) – the dimensions are time, frequency; the frequencies are evenly binned from 0 to f_nyquist
See also
spectrum()
,spectrogram()
,scipy.hanning()
,scipy.fftpack.rfft()
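The windowing described above can be sketched without the FFT step (plain Python for illustration; the real stft would additionally run an FFT such as scipy.fftpack.rfft on each Hann-weighted frame):

```python
import math

def hann(n):
    """Hann window of length n."""
    return [0.5 - 0.5 * math.cos(2 * math.pi * i / (n - 1)) for i in range(n)]

def frames_sketch(x, width):
    """Cut x into Hann-weighted frames overlapping by width // 2."""
    w = hann(width)
    step = width // 2
    out = []
    for start in range(0, len(x) - width + 1, step):
        frame = x[start:start + width]
        out.append([wi * xi for wi, xi in zip(w, frame)])
    return out

frames = frames_sketch(list(range(16)), width=8)
print(len(frames))   # each frame would then be Fourier-transformed
```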
-
wyrm.processing.
subsample
(dat, freq, timeaxis=-2)[source]¶ Subsample the data to
freq
Hz.This method subsamples data along
timeaxis
by taking everyn
th element starting with the first one andn
beingdat.fs / freq
. Please note thatfreq
must be a whole number divisor ofdat.fs
.Note
Note that this method does not low-pass filter the data before sub-sampling.
Note
If you use this method in an on-line setting (i.e. where you process the data in chunks and not as a whole), you should make sure that
subsample
does not drop “half samples” by ensuring the source data’s length is in multiples of the target data’s sample length.Let’s assume your source data is sampled in 1kHz and you want to subsample down to 100Hz. One sample of the source data is 1ms long, while the target samples will be 10ms long. In order to ensure that
subsample
does not eat fractions of samples at the end of your data, you have to make sure that your source data is multiples of 10ms (i.e. 1010, 1020, etc) long. You might want to usewyrm.types.BlockBuffer
for this (see Examples below).Parameters: - dat (Data) –
Data object with
.fs
attribute - freq (float) – the target frequency in Hz
- timeaxis (int, optional) – the axis along which to subsample
Returns: dat (Data) – copy of
dat
with subsampled frequencySee also
Examples
Load some EEG data with 1kHz, bandpass filter it and downsample it to 100Hz.
>>> dat = load_brain_vision_data('some/path') >>> dat.fs 1000.0 >>> fn = dat.fs / 2 # nyquist frequency >>> b, a = butter(4, [8 / fn, 40 / fn], btype='band') >>> dat = lfilter(dat, b, a) >>> dat = subsample(dat, 100) >>> dat.fs 100.0
Online Experiment
>>> bbuffer = BlockBuffer(10) # 10 ms is the target block size >>> while 1: ... cnt = ... # get 1kHz continuous data from your amp ... # put the data into the block buffer ... # bbuffer.get will only return the data in multiples of 10ms or ... # nothing ... bbuffer.append(cnt) ... cnt = bbuffer.get() ... if not cnt: ... continue ... # filter, etc ... subsample(cnt, 100)
Raises: AssertionError
– * if freq
is not a whole number divisor of dat.fs
* if dat
has no .fs
attribute * if dat.data.shape[timeaxis] != len(dat.axes[timeaxis])
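The decimation itself is plain slicing; a sketch of the core idea (not the actual implementation, which also adjusts .fs, .axes and the markers):

```python
def subsample_sketch(samples, fs, freq):
    """Keep every n-th sample, n = fs / freq; freq must evenly divide fs."""
    n = fs / freq
    assert n == int(n), "freq must be a whole number divisor of fs"
    return samples[::int(n)]

x = list(range(8))                      # 8 samples recorded at 1000 Hz
print(subsample_sketch(x, 1000, 250))   # keep every 4th sample
```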
-
wyrm.processing.
swapaxes
(dat, ax1, ax2)[source]¶ Swap axes of a Data object.
This method swaps two axes of a Data object by swapping the appropriate
.data
,.names
,.units
, and.axes
.Parameters: - dat (Data)
- ax1, ax2 (int) – the indices of the axes to swap
Returns: dat (Data) – a copy of
dat
with the appropriate axes swapped.Examples
>>> dat.names ['time', 'channels'] >>> dat = swapaxes(dat, 0, 1) >>> dat.names ['channels', 'time']
See also
numpy.swapaxes()
-
wyrm.processing.
variance
(dat, timeaxis=-2)[source]¶ Compute the variance along the
timeaxis
ofdat
.This method reduces the dimensions of dat.data by one.
Parameters: dat (Data) Returns: dat (Data) – copy of dat
with the variance along the
removed andtimeaxis
removed.Examples
>>> epo.names ['class', 'time', 'channel'] >>> var = variance(epo) >>> var.names ['class', 'channel']
wyrm.types module¶
Data type definitions.
This module provides the basic data types for Wyrm, like the
Data
and RingBuffer
classes.
-
class
wyrm.types.
BlockBuffer
(samples=50)[source]¶ Bases:
object
A buffer that returns data chunks in multiples of a block length.
This buffer is a first-in-first-out (FIFO) buffer that returns data in multiples of a desired block length. The block length is defined in samples.
Parameters: samples (int, optional) – the desired block length in samples Examples
>>> bbuffer = BlockBuffer(10) >>> ... >>> while 1: ... cnt = some_acquisition_method() ... # How to use the BlockBuffer ... bbuffer.append(cnt) ... cnt = bbuffer.get() ... if not cnt: ... continue ... # after here cnt is guaranteed to be in multiples of 10 samples
-
class
wyrm.types.
Data
(data, axes, names, units)[source]¶ Bases:
object
Generic, self-describing data container.
This data structure is very generic on purpose. The goal here was to provide something which can fit the various different known and yet unknown requirements for BCI algorithms.
At the core of
Data
is its n-dimensional.data
attribute which holds the actual data. Along with the data, there is meta information about each axis of the data, contained in.axes
,.names
, and.units
.Most toolbox methods rely on a convention how specific data should be structured (i.e. they assume that the channels are always in the last dimension). You don’t have to follow this convention (or sometimes it might not even be possible when trying out new things), and all methods, provide an optional parameter to tell them on which axis they should work on.
- Continuous Data:
- Continuous Data is usually EEG data and consists of a 2d array
[time, channel]
. Whenever you have continuous data, time and channel should be the last two dimensions. - Epoched Data:
- Epoched data can be seen as an array of (non-epoched) data. The
epoch should always be the first dimension. Most commonly used is
epoched continuous EEG data which looks like this:
[class, time, channel]
. - Feature Vector:
- Similar to Epoched Data, with classes in the first dimension.
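The axis/metadata convention can be illustrated with plain lists. The real Data class wraps a numpy array and the channel names here are hypothetical; this only shows how the per-axis metadata lines up with the data dimensions:

```python
# A hypothetical continuous recording: 4 time steps x 2 channels.
data = [[0.1, 0.2],
        [0.3, 0.4],
        [0.5, 0.6],
        [0.7, 0.8]]
axes = [[0, 10, 20, 30], ['C3', 'C4']]   # time stamps in ms, channel names
names = ['time', 'channel']
units = ['ms', '#']

# one metadata entry per data dimension, and each axis description
# has as many entries as its dimension has elements
assert len(axes) == len(names) == len(units)
assert len(axes[0]) == len(data)
assert len(axes[1]) == len(data[0])
```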
Data.__eq__()
andData.__ne__()
functions are provided to test for equality of two Data objects (via==
and!=
). This method only checks the known attributes and does not guarantee correct results if the Data object contains custom attributes. It is mainly used in unittests. Parameters: - data (ndarray)
- axes (nlist of 1darrays)
- names (nlist of strings)
- units (nlist of strings)
-
data
¶ ndarray
n-dimensional data array if the array is empty (i.e.
data.size == 0
), theData
object is assumed to be empty
-
axes
¶ nlist of 1-darrays
each element of corresponds to a dimension of
.data
(i.e. the first one in.axes
to the first dimension in.data
and so on). The 1-dimensional arrays contain the description of the data along the appropriate axis in.data
. For example if.data
contains Continuous Data, then.axes[0]
should be an array of timesteps and.axes[1]
an array of channel names
-
names
¶ nlist of strings
the human readable description of each axis, like ‘time’, or ‘channel’
-
units
¶ nlist of strings
the human readable description of the unit used for the data in
.axes
-
__bool__
()[source]¶ Return truth value of the object instance.
This method returns False if the __nonzero__ value is 0 else True.
Returns: truth (Bool) – False
if__nonzero__()
was0
, elseTrue
.See also
-
__eq__
(other)[source]¶ Test for equality.
Don’t trust this method: it only checks for known attributes and assumes equality if those are equal. This method is heavily used in unittests.
Parameters: other (Data) Returns: equal (Boolean) – True if self
andother
are equal, False if not.
-
__ne__
(other)[source]¶ Test for inequality.
If
__eq__()
is implemented and__ne__()
is not, strange comparisons evaluate to True like:>>> d1 == d2 and d1 != d2
This method just returns the negation of
__eq__()
. So the same restrictions of__eq__()
about its reliability apply.Parameters: other (Data) Returns: equal (Boolean) – True if self
andother
are not equal, False otherwise.
-
__nonzero__
()[source]¶ Return the truth value for the object instance.
Similar to Python’s built in types we return
False
if the data instance is empty and True
otherwise. Please note that we only check the size of .data
and ignore other attributes like .markers
, which might not be empty.
Examples
Easy checking if a data object contains data or not:
>>> if not cnt:
...     continue
is equivalent to:
>>> if cnt.data.size == 0:
...     continue
Returns: nonzero (int) – self.data.size
-
__str__
()[source]¶ Human-readable representation of a data object.
Returns: str (str) – a human-readable representation of the data object
-
copy
(**kwargs)[source]¶ Return a memory-efficient deep copy of
self
. It first creates a shallow copy of
self
, sets the attributes in kwargs
if necessary, and returns a deep copy of the resulting object.
Parameters: kwargs (dict, optional) – if provided, copy
will overwrite the given name/value pairs after the shallow copy and before the deep copy. If no kwargs
are provided, it will just return the deep copy.
Returns: dat (Data) – a deep copy of self
.Examples
>>> # perform an ordinary deep copy of dat
>>> dat2 = dat.copy()
>>> # perform a deep copy but overwrite .axes first
>>> dat.axes
['time', 'channels']
>>> dat3 = dat.copy(axes=['foo', 'bar'])
>>> dat3.axes
['foo', 'bar']
>>> dat.axes
['time', 'channels']
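The shallow-then-deep copy strategy described above can be sketched in plain Python (an illustrative standalone helper, not wyrm's actual implementation; the function name is made up):

```python
import copy

def copy_with_overrides(obj, **kwargs):
    # Shallow copy first, so attributes replaced via kwargs
    # never get deep-copied (this is the memory saving).
    shallow = copy.copy(obj)
    for name, value in kwargs.items():
        setattr(shallow, name, value)
    # Deep copy the result so the caller gets fully independent data.
    return copy.deepcopy(shallow)
```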
-
class
wyrm.types.
RingBuffer
(length_ms)[source]¶ Bases:
object
Circular Buffer implementation.
This implementation has a guaranteed upper bound on read and write operations as well as constant memory usage, equal to the maximum length of the buffer in memory.
Reading and writing will take at most the time it takes to copy a contiguous chunk of length
MAXLEN
in memory. E.g. for the extreme case of storing the last 60 seconds of 64-bit data, sampled at 1 kHz with 128 channels (~60 MB), reading a full buffer will take ~25 ms, as will writing when storing more than 60 seconds at once. Writing will usually be much faster, as one typically stores only a few milliseconds of data per run; in that case writing takes a fraction of a millisecond.
Parameters: length_ms (int) – the length of the ring buffer in milliseconds -
length_ms
¶ int
the length of the ring buffer in milliseconds
-
length
¶ int
the length of the ring buffer in samples
-
data
¶ ndarray
the contents of the ring buffer; you should not read or write this attribute directly but only via the
RingBuffer.get()
andRingBuffer.append()
methods
-
markers
¶ array of [int, str]
the markers belonging to the data currently in the ring buffer
-
full
¶ boolean
indicates whether the buffer has at least
length
elements stored
-
idx
¶ int
the starting position of the oldest data in the ring buffer
Examples
>>> rb = RingBuffer(length_ms)
>>> while True:
...     rb.append(amp.get_data())
...     buffered = rb.get()
...     # do something with buffered
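To make the wrap-around mechanics concrete, here is a toy circular buffer in plain numpy (a sketch of the general technique only, not wyrm's implementation: it measures capacity in samples instead of milliseconds, ignores markers, and writes sample-by-sample rather than copying whole chunks):

```python
import numpy as np

class MiniRingBuffer:
    def __init__(self, length, nchannels):
        self.length = length                       # capacity in samples
        self.data = np.zeros((length, nchannels))  # constant memory usage
        self.idx = 0                               # position of the oldest sample
        self.full = False

    def append(self, chunk):
        # Write samples at idx, wrapping around at the end of the array.
        for row in np.atleast_2d(chunk):
            self.data[self.idx] = row
            self.idx = (self.idx + 1) % self.length
            if self.idx == 0:
                self.full = True

    def get(self):
        # Return the buffered samples in chronological order.
        if not self.full:
            return self.data[:self.idx].copy()
        return np.concatenate((self.data[self.idx:], self.data[:self.idx]))
```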
-