From 0.1.x to 0.2.x (v2)¶
BlueETL 0.2 introduces some breaking changes to support the analysis of multiple reports.
This section describes how to adapt existing configurations and user code.
Note that there are no breaking changes in the core functionality.
Configuration¶
To migrate the configuration manually, follow these steps:
1. The specification "version: 2" should be added at the top level of the file.
2. The section "extraction" should be moved to "analysis.spikes.extraction".
3. The section "analysis.features" should be moved to "analysis.spikes.features".
4. Any custom key should be moved into an optional dict: "custom" if the parameters are global, or "analysis.spikes.custom" if the parameters are specific to the spikes analysis.
5. The following sub-section should be added to "analysis.spikes.extraction":

       report:
           type: spikes
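The steps above can be sketched as a small helper that converts an old-format configuration dict into the new layout. This is an illustration of the rules, not BlueETL's actual migration code; the list of keys that stay at the top level is an assumption for the example.

```python
def migrate_config(old: dict) -> dict:
    """Sketch of the manual migration steps for a v1 config dict."""
    spikes = {
        # step 2: the top-level "extraction" section moves under analysis.spikes
        "extraction": dict(old.get("extraction", {})),
        # step 3: analysis.features moves under analysis.spikes
        "features": old.get("analysis", {}).get("features", []),
    }
    # step 5: declare the report type inside the extraction section
    spikes["extraction"]["report"] = {"type": "spikes"}
    # step 1: the version specification is added at the top level
    new = {"version": 2, "analysis": {"spikes": spikes}}
    # keys assumed to keep their place at the top level (illustrative list)
    for key in ("simulation_campaign", "output", "simulations_filter"):
        if key in old:
            new[key] = old[key]
    # step 4: any remaining custom key goes into the optional "custom" dict
    known = {"extraction"} | set(new)
    custom = {k: v for k, v in old.items() if k not in known}
    if custom:
        new["custom"] = custom
    return new
```

For example, a global custom key such as a random seed would end up under "custom", while everything extraction-related moves under "analysis.spikes".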
You can see an example of configuration in the old and new format here:
Output directory¶
If the output directory defined in the configuration is a relative path, it’s now relative to the configuration file itself.
Previously, using a relative path was not recommended, since it was relative to the working directory.
This means that you can decide to always set "output: output" and have a directory layout like the following:
    analyses
    ├── analysis_config_01
    │   ├── config.yaml
    │   ├── output
    │   │   └── spikes
    │   │       ├── config
    │   │       │   ├── analysis_config.cached.yaml
    │   │       │   ├── checksums.cached.yaml
    │   │       │   └── simulations_config.cached.yaml
    │   │       ├── features
    │   │       │   ├── baseline_PSTH.parquet
    │   │       │   ├── decay.parquet
    │   │       │   └── latency.parquet
    │   │       └── repo
    │   │           ├── neuron_classes.parquet
    │   │           ├── neurons.parquet
    │   │           ├── report.parquet
    │   │           ├── simulations.parquet
    │   │           ├── trial_steps.parquet
    │   │           └── windows.parquet
    ├── analysis_config_02
    │   ├── config.yaml
    │   ├── output
    │   │   └── spikes
    ...
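The new resolution rule can be illustrated with a short sketch. This mimics the behavior described above (relative paths resolved against the directory of the configuration file, absolute paths left unchanged); it is not BlueETL's actual code.

```python
from pathlib import Path


def resolve_output(config_file: str, output: str) -> Path:
    """Resolve the configured output directory per the v2 rule."""
    output_path = Path(output)
    if output_path.is_absolute():
        # absolute paths are used as-is
        return output_path
    # relative paths are resolved against the config file's directory,
    # not against the current working directory
    return Path(config_file).resolve().parent / output_path
```

With "output: output" in analyses/analysis_config_01/config.yaml, the results would land in analyses/analysis_config_01/output regardless of the working directory.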
Analysis¶
Initialization¶
Instead of code like this:

    import logging

    import numpy as np

    from blueetl.analysis import Analyzer
    from blueetl.utils import load_yaml

    logging.basicConfig(level=logging.INFO)
    np.random.seed(0)

    config = load_yaml("analysis_config.yaml")
    a = Analyzer(config)
    a.extract_repo()
    a.calculate_features()
you can use this:

    from blueetl.analysis import run_from_file

    ma = run_from_file("analysis_config.yaml", loglevel="INFO")
    a = ma.spikes

where "ma" is an instance of MultiAnalyzer and "a" is an instance of SingleAnalyzer.
If you need to work with multiple analyses, using the MultiAnalyzer instance may be more convenient.
Deprecation of spikes¶
Instead of accessing the "spikes" DataFrame with:

    a.repo.spikes.df

you should use the generic "report" attribute, valid for any type of report:

    a.repo.report.df

The old name "spikes" is kept for backward compatibility, but it is deprecated and will be removed in a future release.
Accessing the custom config¶
If you stored any custom configuration, you can get the values from the dictionaries:

    ma.global_config.custom
    ma.spikes.analysis_config.custom
Using call_by_simulation¶
The function "call_by_simulation" has been moved from "blueetl.features" to "blueetl.parallel".