
Binaural Analysis

This section provides an overview of the binaural analysis tools available in Soundscapy. It includes a brief description of each tool, as well as information on how to access and use them.

Binaural

Bases: Signal

Class for analysis of binaural signals: a two-channel (2D) array of samples together with a sampling frequency (fs).

Subclasses the Signal class from python acoustics and adds an attribute for the recording name, along with methods for binaural analysis using the python acoustics, scikit-maad and MoSQITo libraries. Optimised for batch processing, with analysis settings predefined in a yaml file and passed to the class via the AnalysisSettings class.

See Also

acoustics.Signal : Base class for binaural signal
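
Example

A minimal usage sketch, assuming a stereo file named "recording.wav" and that Binaural is exported from the top-level soundscapy package (adjust the import to soundscapy.analysis if your version exposes it there):

from pathlib import Path

from soundscapy import Binaural  # assumed top-level export

# Load a stereo wav and calibrate both channels to 65 dB Leq
b = Binaural.from_wav(Path("recording.wav"), calibrate_to=65)

# Run a single level metric; returns a ("Recording", "Channel") indexed DataFrame
laeq = b.pyacoustics_metric("LAeq", as_df=True)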

calibrate_to

calibrate_to(decibel, inplace=False)

Calibrate a two-channel signal to predefined Leq/dB levels.

PARAMETER DESCRIPTION
decibel

Value(s) to calibrate to in dB (Leq). Can also handle np.ndarray and pd.Series of length 2. If only one value is passed, both channels will be calibrated to the same value.

TYPE: float, list or tuple of float

inplace

Whether to perform the calibration in place, by default False

TYPE: bool DEFAULT: False

RETURNS DESCRIPTION
Binaural

Calibrated Binaural signal

RAISES DESCRIPTION
ValueError

If decibel is not a (float, int) or a list or tuple of length 2.

See Also

acoustics.Signal.calibrate_to : Base method for calibration. Cannot handle 2ch calibration

Source code in soundscapy/analysis/_Binaural.py
def calibrate_to(self, decibel: Union[float, list, tuple], inplace: bool = False):
    """Calibrate two channel signal to predefined Leq/dB levels.

    Parameters
    ----------
    decibel : float, list or tuple of float
        Value(s) to calibrate to in dB (Leq)
        Can also handle np.ndarray and pd.Series of length 2.
        If only one value is passed, will calibrate both channels to the same value.
    inplace : bool, optional
        Whether to perform inplace or not, by default False

    Returns
    -------
    Binaural
        Calibrated Binaural signal

    Raises
    ------
    ValueError
        If decibel is not a (float, int) or a list or tuple of length 2.

    See Also
    --------
    acoustics.Signal.calibrate_to : Base method for calibration. Cannot handle 2ch calibration
    """
    if isinstance(decibel, (np.ndarray, pd.Series)):  # Force into tuple
        decibel = tuple(decibel)
    if isinstance(decibel, (list, tuple)):
        if len(decibel) == 2:  # Per-channel calibration (recommended)
            decibel = np.array(decibel)
            decibel = decibel[..., None]
            return super().calibrate_to(decibel, inplace)
        elif (
            len(decibel) == 1
        ):  # if one value given in tuple, assume same for both channels
            decibel = decibel[0]
        else:
            raise ValueError(
                "decibel must either be a single value or a 2 value tuple"
            )
    if isinstance(decibel, (int, float)):  # Calibrate both channels to same value
        return super().calibrate_to(decibel, inplace)
    else:
        raise ValueError("decibel must be a single value or a 2 value tuple")
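
Example

A short calibration sketch, assuming b is a Binaural signal loaded as in the example above (the dB values are placeholders):

# Calibrate both channels to the same level
b_cal = b.calibrate_to(65)

# Calibrate each channel separately (Left, Right), modifying b in place
b.calibrate_to((65.2, 63.7), inplace=True)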

from_wav classmethod

from_wav(filename, calibrate_to=None, normalize=False)

Load a wav file and return a Binaural object

Overrides the Signal.from_wav method to return a Binaural object instead of a Signal object.

PARAMETER DESCRIPTION
filename

Filename of wav file to load

TYPE: Path or str

calibrate_to

Value(s) to calibrate to in dB (Leq). Can also handle np.ndarray and pd.Series of length 2. If only one value is passed, both channels will be calibrated to the same value.

TYPE: float, list or tuple of float DEFAULT: None

normalize

Whether to normalize the signal, by default False

TYPE: bool DEFAULT: False

RETURNS DESCRIPTION
Binaural

Binaural signal object of wav recording

See Also

acoustics.Signal.from_wav : Base method for loading wav files

Source code in soundscapy/analysis/_Binaural.py
@classmethod
def from_wav(
    cls,
    filename: Union[Path, str],
    calibrate_to: Union[float, list, tuple] = None,
    normalize: bool = False,
):
    """Load a wav file and return a Binaural object

    Overrides the Signal.from_wav method to return a
    Binaural object instead of a Signal object.

    Parameters
    ----------
    filename : Path, str
        Filename of wav file to load
    calibrate_to : float, list or tuple of float, optional
        Value(s) to calibrate to in dB (Leq)
        Can also handle np.ndarray and pd.Series of length 2.
        If only one value is passed, will calibrate both channels to the same value.
    normalize : bool, optional
        Whether to normalize the signal, by default False

    Returns
    -------
    Binaural
        Binaural signal object of wav recording

    See Also
    --------
    acoustics.Signal.from_wav : Base method for loading wav files
    """
    s = super().from_wav(filename, normalize)
    if calibrate_to is not None:
        s.calibrate_to(calibrate_to, inplace=True)
    return cls(s, s.fs, recording=filename.stem)
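
Example

A loading sketch with a hypothetical file "CT101.wav"; a Path object is used because the recording name is taken from filename.stem:

from pathlib import Path

from soundscapy import Binaural  # assumed top-level export

# Load and calibrate each channel on the way in
b = Binaural.from_wav(Path("CT101.wav"), calibrate_to=(65.0, 64.2))
print(b.recording, b.fs, b.channels)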

maad_metric

maad_metric(metric, channel=('Left', 'Right'), as_df=True, verbose=False, analysis_settings=None, func_args={})

Run a metric from the scikit-maad library

Currently only supports running all of the alpha indices at once.

PARAMETER DESCRIPTION
metric

The metric to run

TYPE: {"all_temporal_alpha_indices", "all_spectral_alpha_indices"}

channel

Which channels to process, by default ("Left", "Right")

TYPE: (tuple, list or str) DEFAULT: ('Left', 'Right')

as_df

Whether to return a dataframe or not, by default True. If True, returns a MultiIndex DataFrame with ("Recording", "Channel") as the index.

TYPE: bool DEFAULT: True

verbose

Whether to print status updates, by default False

TYPE: bool DEFAULT: False

analysis_settings

Settings for analysis, by default None

Any settings given here will override those in the other options. Can pass any *args or **kwargs to the underlying scikit-maad function.

TYPE: AnalysisSettings DEFAULT: None

RETURNS DESCRIPTION
dict or DataFrame

Dictionary of results if as_df is False, otherwise a pandas DataFrame

RAISES DESCRIPTION
ValueError

If metric name is not recognised.

Source code in soundscapy/analysis/_Binaural.py
def maad_metric(
    self,
    metric: str,
    channel: Union[int, tuple, list, str] = ("Left", "Right"),
    as_df: bool = True,
    verbose: bool = False,
    analysis_settings: AnalysisSettings = None,
    func_args={},
):
    """Run a metric from the scikit-maad library

    Currently only supports running all of the alpha indices at once.

    Parameters
    ----------
    metric : {"all_temporal_alpha_indices", "all_spectral_alpha_indices"}
        The metric to run
    channel : tuple, list or str, optional
        Which channels to process, by default None
    as_df: bool, optional
        Whether to return a dataframe or not, by default True
        If True, returns a MultiIndex Dataframe with ("Recording", "Channel") as the index.
    verbose : bool, optional
        Whether to print status updates, by default False
    analysis_settings : AnalysisSettings, optional
        Settings for analysis, by default None

        Any settings given here will override those in the other options.
        Can pass any *args or **kwargs to the underlying python acoustics method.
    Returns
    -------
    dict or pd.DataFrame
        Dictionary of results if as_df is False, otherwise a pandas DataFrame


    Raises
    ------
    ValueError
        If metric name is not recognised.
    """
    if analysis_settings:
        if metric in {"all_temporal_alpha_indices", "all_spectral_alpha_indices"}:
            run, channel = analysis_settings.parse_maad_all_alpha_indices(metric)
        else:
            raise ValueError(f"Metric {metric} not recognised")
        if run is False:
            return None
    channel = ("Left", "Right") if channel is None else channel
    s = self._get_channel(channel)
    if s.channels == 1:
        return maad_metric_1ch(s, metric, as_df, verbose, func_args)
    else:
        return maad_metric_2ch(s, metric, channel, as_df, verbose, func_args)
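
Example

A sketch assuming b is a calibrated Binaural signal as in the examples above:

# All temporal alpha indices for both channels, as a MultiIndex DataFrame
temporal = b.maad_metric("all_temporal_alpha_indices", as_df=True)

# Spectral alpha indices for the left channel only, returned as a dict
spectral = b.maad_metric("all_spectral_alpha_indices", channel="Left", as_df=False)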

mosqito_metric

mosqito_metric(metric, statistics=(5, 10, 50, 90, 95, 'avg', 'max', 'min', 'kurt', 'skew'), label=None, channel=('Left', 'Right'), as_df=True, return_time_series=False, parallel=True, verbose=False, analysis_settings=None, func_args={})

Run a metric from the mosqito library

PARAMETER DESCRIPTION
metric

TYPE: {"loudness_zwtv", "sharpness_din_from_loudness", "sharpness_din_perseg",

statistics

List of level statistics to calculate (e.g. L_5, L_90, etc.), by default (5, 10, 50, 90, 95, "avg", "max", "min", "kurt", "skew")

TYPE: tuple or list DEFAULT: (5, 10, 50, 90, 95, 'avg', 'max', 'min', 'kurt', 'skew')

label

Label to use for the metric, by default None. If None, will pull from the default label for that metric given in sq_metrics.DEFAULT_LABELS.

TYPE: str DEFAULT: None

channel

Which channels to process, by default ("Left", "Right")

TYPE: tuple or list of str or str DEFAULT: ('Left', 'Right')

as_df

Whether to return a dataframe or not, by default True. If True, returns a MultiIndex DataFrame with ("Recording", "Channel") as the index.

TYPE: bool DEFAULT: True

return_time_series

Whether to return the time series of the metric, by default False. Cannot return the time series if as_df is True.

TYPE: bool DEFAULT: False

parallel

Whether to run the channels in parallel, by default True. If False, will run each channel sequentially. If being run as part of a larger parallel analysis (e.g. processing many recordings at once), this will automatically be set to False.

TYPE: bool DEFAULT: True

verbose

Whether to print status updates, by default False

TYPE: bool DEFAULT: False

analysis_settings

Settings for analysis, by default None

Any settings given here will override those in the other options. Can pass any *args or **kwargs to the underlying MoSQITo function.

TYPE: AnalysisSettings DEFAULT: None

RETURNS DESCRIPTION
dict or DataFrame

Dictionary of results if as_df is False, otherwise a pandas DataFrame

See Also

binaural.mosqito_metric_2ch : Method for running metrics on 2 channels
binaural.mosqito_metric_1ch : Method for running metrics on 1 channel

Source code in soundscapy/analysis/_Binaural.py
def mosqito_metric(
    self,
    metric: str,
    statistics: Union[tuple, list] = (
        5,
        10,
        50,
        90,
        95,
        "avg",
        "max",
        "min",
        "kurt",
        "skew",
    ),
    label: str = None,
    channel: Union[int, tuple, list, str] = ("Left", "Right"),
    as_df: bool = True,
    return_time_series: bool = False,
    parallel: bool = True,
    verbose: bool = False,
    analysis_settings: AnalysisSettings = None,
    func_args={},
):
    """Run a metric from the mosqito library

    Parameters
    ----------
    metric : {"loudness_zwtv", "sharpness_din_from_loudness", "sharpness_din_perseg",
    "sharpness_tv", "roughness_dw"}
        Metric to run from mosqito library.

        In the case of "sharpness_din_from_loudness", the "loudness_zwtv" metric
        will be calculated first and then the sharpness will be calculated from that.
        This is because the sharpness_from loudness metric requires the loudness metric to be
        calculated. Loudness will be returned as well
    statistics : tuple or list, optional
        List of level statistics to calculate (e.g. L_5, L_90, etc.),
            by default (5, 10, 50, 90, 95, "avg", "max", "min", "kurt", "skew")
    label : str, optional
        Label to use for the metric, by default None
        If None, will pull from default label for that metric given in sq_metrics.DEFAULT_LABELS
    channel : tuple or list of str or str, optional
        Which channels to process, by default ("Left", "Right")
    as_df: bool, optional
        Whether to return a dataframe or not, by default True
        If True, returns a MultiIndex Dataframe with ("Recording", "Channel") as the index.
    return_time_series: bool, optional
        Whether to return the time series of the metric, by default False
        Cannot return time series if as_df is True
    parallel : bool, optional
        Whether to run the channels in parallel, by default True
        If False, will run each channel sequentially.
        If being run as part of a larger parallel analysis (e.g. processing many recordings at once), this will
        automatically be set to False.
    verbose : bool, optional
        Whether to print status updates, by default False
    analysis_settings : AnalysisSettings, optional
        Settings for analysis, by default None

        Any settings given here will override those in the other options.
        Can pass any *args or **kwargs to the underlying python acoustics method.
    Returns
    -------
    dict or pd.DataFrame
        Dictionary of results if as_df is False, otherwise a pandas DataFrame

    See Also
    --------
    binaural.mosqito_metric_2ch : Method for running metrics on 2 channels
    binaural.mosqito_metric_1ch : Method for running metrics on 1 channel
    """
    if analysis_settings:
        (
            run,
            channel,
            statistics,
            label,
            parallel,
            func_args,
        ) = analysis_settings.parse_mosqito(metric)
        if run is False:
            return None

    channel = ("Left", "Right") if channel is None else channel
    s = self._get_channel(channel)

    if s.channels == 1:
        return mosqito_metric_1ch(
            s, metric, statistics, label, as_df, return_time_series, func_args
        )
    else:
        return mosqito_metric_2ch(
            s,
            metric,
            statistics,
            label,
            channel,
            as_df,
            return_time_series,
            parallel,
            verbose,
            func_args,
        )
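
Example

A sketch assuming b is a calibrated Binaural signal as above; metric names follow the list in the docstring:

# Zwicker time-varying loudness, summarised with the default statistics
loudness = b.mosqito_metric("loudness_zwtv", as_df=True)

# Sharpness derived from loudness, run sequentially rather than in parallel
sharpness = b.mosqito_metric(
    "sharpness_din_from_loudness", statistics=(5, 50, 95, "avg"), parallel=False
)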

process_all_metrics

process_all_metrics(analysis_settings, parallel=True, verbose=False)

Run all metrics specified in the AnalysisSettings object

PARAMETER DESCRIPTION
analysis_settings

Analysis settings object

TYPE: AnalysisSettings

parallel

Whether to run the channels in parallel for binaural.mosqito_metric_2ch, by default True. If False, will run each channel sequentially. If being run as part of a larger parallel analysis (e.g. processing many recordings at once), this will automatically be set to False. Applies only to binaural.mosqito_metric_2ch; the other metrics are fast enough that parallel processing is unnecessary.

TYPE: bool DEFAULT: True

verbose

Whether to print status updates, by default False

TYPE: bool DEFAULT: False

RETURNS DESCRIPTION
DataFrame

MultiIndex Dataframe of results. Index includes "Recording" and "Channel" with a column for each metric.

Source code in soundscapy/analysis/_Binaural.py
def process_all_metrics(
    self,
    analysis_settings: AnalysisSettings,
    parallel: bool = True,
    verbose: bool = False,
):
    """Run all metrics specified in the AnalysisSettings object

    Parameters
    ----------
    analysis_settings : AnalysisSettings
        Analysis settings object
    parallel : bool, optional
        Whether to run the channels in parallel for `binaural.mosqito_metric_2ch`, by default True
        If False, will run each channel sequentially.
        If being run as part of a larger parallel analysis (e.g. processing many recordings at once), this will
        automatically be set to False.
        Applies only to `binaural.mosqito_metric_2ch`. The other metrics are considered fast enough not to bother.
    verbose : bool, optional
        Whether to print status updates, by default False

    Returns
    -------
    pd.DataFrame
        MultiIndex Dataframe of results.
        Index includes "Recording" and "Channel" with a column for each metric.
    """
    return process_all_metrics(self, analysis_settings, parallel, verbose)
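
Example

A sketch of running a full settings file against one recording, assuming b is a Binaural signal and AnalysisSettings is importable from the top-level soundscapy package:

from soundscapy import AnalysisSettings  # assumed top-level export

settings = AnalysisSettings.default()
results = b.process_all_metrics(settings, verbose=True)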

pyacoustics_metric

pyacoustics_metric(metric, statistics=(5, 10, 50, 90, 95, 'avg', 'max', 'min', 'kurt', 'skew'), label=None, channel=('Left', 'Right'), as_df=True, return_time_series=False, verbose=False, analysis_settings=None, func_args={})

Run a metric from the python acoustics library

PARAMETER DESCRIPTION
metric

The metric to run.

TYPE: {"LZeq", "Leq", "LAeq", "LCeq", "SEL"}

statistics

List of level statistics to calculate (e.g. L_5, L_90, etc.), by default (5, 10, 50, 90, 95, "avg", "max", "min", "kurt", "skew")

TYPE: tuple or list DEFAULT: (5, 10, 50, 90, 95, 'avg', 'max', 'min', 'kurt', 'skew')

label

Label to use for the metric, by default None. If None, will pull from the default label for that metric given in sq_metrics.DEFAULT_LABELS.

TYPE: str DEFAULT: None

channel

Which channels to process, by default ("Left", "Right"). If None, both channels are processed.

TYPE: tuple, list, or str DEFAULT: ('Left', 'Right')

as_df

Whether to return a dataframe or not, by default True. If True, returns a MultiIndex DataFrame with ("Recording", "Channel") as the index.

TYPE: bool DEFAULT: True

return_time_series

Whether to return the time series of the metric, by default False. Cannot return the time series if as_df is True.

TYPE: bool DEFAULT: False

verbose

Whether to print status updates, by default False

TYPE: bool DEFAULT: False

analysis_settings

Settings for analysis, by default None

Any settings given here will override those in the other options. Can pass any *args or **kwargs to the underlying python acoustics method.

TYPE: AnalysisSettings DEFAULT: None

RETURNS DESCRIPTION
dict or DataFrame

Dictionary of results if as_df is False, otherwise a pandas DataFrame

See Also

metrics.pyacoustics_metric
acoustics.standards_iso_tr_25417_2007.equivalent_sound_pressure_level : Base method for Leq calculation
acoustics.standards.iec_61672_1_2013.sound_exposure_level : Base method for SEL calculation
acoustics.standards.iec_61672_1_2013.time_weighted_sound_level : Base method for Leq level time series calculation

Source code in soundscapy/analysis/_Binaural.py
def pyacoustics_metric(
    self,
    metric: str,
    statistics: Union[tuple, list] = (
        5,
        10,
        50,
        90,
        95,
        "avg",
        "max",
        "min",
        "kurt",
        "skew",
    ),
    label: str = None,
    channel: Union[str, int, list, tuple] = ("Left", "Right"),
    as_df: bool = True,
    return_time_series: bool = False,
    verbose: bool = False,
    analysis_settings: AnalysisSettings = None,
    func_args={},
):
    """Run a metric from the python acoustics library

    Parameters
    ----------
    metric : {"LZeq", "Leq", "LAeq", "LCeq", "SEL"}
        The metric to run.
    statistics : tuple or list, optional
        List of level statistics to calculate (e.g. L_5, L_90, etc.),
            by default ( 5, 10, 50, 90, 95, "avg", "max", "min", "kurt", "skew", )
    label : str, optional
        Label to use for the metric, by default None
        If None, will pull from default label for that metric given in sq_metrics.DEFAULT_LABELS
    channel : tuple, list, or str, optional
        Which channels to process, by default None
        If None, will process both channels
    as_df: bool, optional
        Whether to return a dataframe or not, by default True
        If True, returns a MultiIndex Dataframe with ("Recording", "Channel") as the index.
    return_time_series: bool, optional
        Whether to return the time series of the metric, by default False
        Cannot return time series if as_df is True
    verbose : bool, optional
        Whether to print status updates, by default False
    analysis_settings : AnalysisSettings, optional
        Settings for analysis, by default None

        Any settings given here will override those in the other options.
        Can pass any *args or **kwargs to the underlying python acoustics method.
    Returns
    -------
    dict or pd.DataFrame
        Dictionary of results if as_df is False, otherwise a pandas DataFrame

    See Also
    --------
    metrics.pyacoustics_metric
    acoustics.standards_iso_tr_25417_2007.equivalent_sound_pressure_level : Base method for Leq calculation
    acoustics.standards.iec_61672_1_2013.sound_exposure_level : Base method for SEL calculation
    acoustics.standards.iec_61672_1_2013.time_weighted_sound_level : Base method for Leq level time series calculation
    """
    if analysis_settings:
        (
            run,
            channel,
            statistics,
            label,
            func_args,
        ) = analysis_settings.parse_pyacoustics(metric)
        if run is False:
            return None

    channel = ("Left", "Right") if channel is None else channel
    s = self._get_channel(channel)

    if s.channels == 1:
        return pyacoustics_metric_1ch(
            s,
            metric,
            statistics,
            label,
            as_df,
            return_time_series,
            verbose,
            func_args,
        )

    else:
        return pyacoustics_metric_2ch(
            s,
            metric,
            statistics,
            label,
            channel,
            as_df,
            return_time_series,
            verbose,
            func_args,
        )
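
Example

A sketch assuming b is a calibrated Binaural signal as above:

# LAeq for both channels with a reduced set of statistics
laeq = b.pyacoustics_metric("LAeq", statistics=(5, 50, 95, "avg"), as_df=True)

# Sound exposure level for the left channel only, as a plain dict
sel = b.pyacoustics_metric("SEL", channel="Left", as_df=False)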

AnalysisSettings

AnalysisSettings(data, run_stats=True, force_run_all=False, filepath=None)

Bases: dict

Dict of settings for analysis methods. Each library has a dict of metrics, each of which has a dict of settings.

Source code in soundscapy/analysis/_AnalysisSettings.py
def __init__(
    self,
    data,
    run_stats=True,
    force_run_all=False,
    filepath: Union[str, Path] = None,
):
    super().__init__(data)
    self.run_stats = run_stats
    self.force_run_all = force_run_all
    self.filepath = filepath
    runtime = strftime("%Y-%m-%d %H:%M:%S", localtime())
    super().__setitem__("runtime", runtime)
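
Example

A sketch of the dict-like behaviour, assuming AnalysisSettings is importable from the top-level soundscapy package:

from soundscapy import AnalysisSettings  # assumed top-level export

settings = AnalysisSettings.default()

# The object behaves like a nested dict of {library: {metric: {options}}}
for library, metrics in settings.items():
    if isinstance(metrics, dict):
        print(library, list(metrics))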

default classmethod

default(run_stats=True, force_run_all=False)

Generate a default settings object.

PARAMETER DESCRIPTION
run_stats

whether to include all stats listed or just return the main metric, by default True

This can simplify the results dataframe if you only want the main metric. For example, rather than including L_5, L_50, etc., it will only include Leq.

TYPE: bool DEFAULT: True

force_run_all

whether to force all metrics to run regardless of what is set in their options, by default False

Use Cautiously. This can be useful if you want to run all metrics, but don't want to change the yaml file. Warning: If both mosqito:loudness_zwtv and mosqito:sharpness_din_from_loudness are present in the settings file, this will result in the loudness calc being run twice.

TYPE: bool DEFAULT: False

RETURNS DESCRIPTION
AnalysisSettings

AnalysisSettings object

Source code in soundscapy/analysis/_AnalysisSettings.py
@classmethod
def default(cls, run_stats=True, force_run_all=False):
    """Generate a default settings object.

    Parameters
    ----------
    run_stats : bool, optional
        whether to include all stats listed or just return the main metric, by default True

        This can simplify the results dataframe if you only want the main metric.
        For example, rather than including L_5, L_50, etc. will only include LEq
    force_run_all : bool, optional
        whether to force all metrics to run regardless of what is set in their options, by default False

        Use Cautiously. This can be useful if you want to run all metrics, but don't want to change the yaml file.
        Warning: If both mosqito:loudness_zwtv and mosqito:sharpness_din_from_loudness are present in the settings
        file, this will result in the loudness calc being run twice.

    Returns
    -------
    AnalysisSettings
        AnalysisSettings object
    """
    import soundscapy

    root = Path(soundscapy.__path__[0])
    return cls(
        AnalysisSettings.from_yaml(
            Path(root, "analysis", "default_settings.yaml"),
            run_stats,
            force_run_all,
        )
    )
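
Example

A minimal sketch, continuing from the import above; force_run_all overrides the per-metric run flags in the packaged settings file:

settings = AnalysisSettings.default()
run_everything = AnalysisSettings.default(force_run_all=True)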

from_yaml classmethod

from_yaml(filename, run_stats=True, force_run_all=False)

Generate a settings object from a yaml file.

PARAMETER DESCRIPTION
filename

filename of the yaml file

TYPE: Path object or str

run_stats

whether to include all stats listed or just return the main metric, by default True

This can simplify the results dataframe if you only want the main metric. For example, rather than including L_5, L_50, etc., it will only include Leq.

TYPE: bool DEFAULT: True

force_run_all

whether to force all metrics to run regardless of what is set in their options, by default False

Use Cautiously. This can be useful if you want to run all metrics, but don't want to change the yaml file. Warning: If both mosqito:loudness_zwtv and mosqito:sharpness_din_from_loudness are present in the settings file, this will result in the loudness calc being run twice.

TYPE: bool DEFAULT: False

RETURNS DESCRIPTION
AnalysisSettings

AnalysisSettings object

Source code in soundscapy/analysis/_AnalysisSettings.py
@classmethod
def from_yaml(cls, filename: Union[Path, str], run_stats=True, force_run_all=False):
    """Generate a settings object from a yaml file.

    Parameters
    ----------
    filename : Path object or str
        filename of the yaml file
    run_stats : bool, optional
        whether to include all stats listed or just return the main metric, by default True

        This can simplify the results dataframe if you only want the main metric.
        For example, rather than including L_5, L_50, etc. will only include LEq
    force_run_all : bool, optional
        whether to force all metrics to run regardless of what is set in their options, by default False

        Use Cautiously. This can be useful if you want to run all metrics, but don't want to change the yaml file.
        Warning: If both mosqito:loudness_zwtv and mosqito:sharpness_din_from_loudness are present in the settings file, this will result in the loudness calc being run twice.
    Returns
    -------
    AnalysisSettings
        AnalysisSettings object
    """
    with open(filename, "r") as f:
        return cls(
            yaml.load(f, Loader=yaml.Loader), run_stats, force_run_all, filename
        )
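
Example

A sketch assuming a hypothetical local settings file "custom_settings.yaml":

from pathlib import Path

from soundscapy import AnalysisSettings  # assumed top-level export

settings = AnalysisSettings.from_yaml(Path("custom_settings.yaml"), run_stats=False)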

parse_maad_all_alpha_indices

parse_maad_all_alpha_indices(metric)

Generate relevant settings for the maad all_alpha_indices methods.

PARAMETER DESCRIPTION
metric

metric to prepare for

TYPE: str

RETURNS DESCRIPTION
run

Whether to run the metric

TYPE: bool

channel

channel(s) to run the metric on

TYPE: tuple or list of str, or str

Source code in soundscapy/analysis/_AnalysisSettings.py
def parse_maad_all_alpha_indices(self, metric: str):
    """Generate relevant settings for the maad all_alpha_indices methods.

    Parameters
    ----------
    metric : str
        metric to prepare for

    Returns
    -------
    run: bool
        Whether to run the metric
    channel: tuple or list of str, or str
        channel(s) to run the metric on
    """
    assert metric in [
        "all_temporal_alpha_indices",
        "all_spectral_alpha_indices",
    ], "metric must be all_temporal_alpha_indices or all_spectral_alpha_indices."

    lib_settings = self["scikit-maad"].copy()
    run = lib_settings[metric]["run"] or self.force_run_all
    channel = lib_settings[metric]["channel"].copy()
    return run, channel

parse_mosqito

parse_mosqito(metric)

Generate relevant settings for a mosqito metric.

PARAMETER DESCRIPTION
metric

metric to prepare for

TYPE: str

RETURNS DESCRIPTION
run

Whether to run the metric

TYPE: bool

channel

channel(s) to run the metric on

TYPE: tuple or list of str, or str

statistics

statistics to run the metric on. If run_stats is False, will only return the main statistic

TYPE: tuple or list of str, or str

label

label to use for the metric

TYPE: str

func_args

arguments to pass to the underlying metric function from MoSQITo

TYPE: dict

Source code in soundscapy/analysis/_AnalysisSettings.py
def parse_mosqito(self, metric: str):
    """Generate relevant settings for a mosqito metric.

    Parameters
    ----------
    metric : str
        metric to prepare for

    Returns
    -------
    run: bool
        Whether to run the metric
    channel: tuple or list of str, or str
        channel(s) to run the metric on
    statistics: tuple or list of str, or str
        statistics to run the metric on.
        If run_stats is False, will only return the main statistic
    label: str
        label to use for the metric
    func_args: dict
        arguments to pass to the underlying metric function from MoSQITo
    """
    assert metric in [
        "loudness_zwtv",
        "sharpness_din_from_loudness",
        "sharpness_din_perseg",
        "sharpness_din_tv",
        "roughness_dw",
    ], f"Metric {metric} not found."
    run, channel, statistics, label, func_args = self._parse_method(
        "MoSQITo", metric
    )
    try:
        parallel = self["MoSQITo"][metric]["parallel"]
    except KeyError:
        parallel = False
    # Check for sub metric
    # if sub metric is present, don't run this metric
    if (
        metric == "loudness_zwtv"
        and "sharpness_din_from_loudness" in self["MoSQITo"].keys()
        and self["MoSQITo"]["sharpness_din_from_loudness"]["run"]
        and self.force_run_all is False
    ):
        run = False
    return run, channel, statistics, label, parallel, func_args

parse_pyacoustics

parse_pyacoustics(metric)

Generate relevant settings for a pyacoustics metric.

PARAMETER DESCRIPTION
metric

metric to prepare for

TYPE: str

RETURNS DESCRIPTION
run

Whether to run the metric

TYPE: bool

channel

channel(s) to run the metric on

TYPE: tuple or list of str, or str

statistics

statistics to run the metric on. If run_stats is False, will only return the main statistic

TYPE: tuple or list of str, or str

label

label to use for the metric

TYPE: str

func_args

arguments to pass to the underlying metric function from python acoustics

TYPE: dict

Source code in soundscapy/analysis/_AnalysisSettings.py
def parse_pyacoustics(self, metric: str):
    """Generate relevant settings for a pyacoustics metric.

    Parameters
    ----------
    metric : str
        metric to prepare for

    Returns
    -------
    run: bool
        Whether to run the metric
    channel: tuple or list of str, or str
        channel(s) to run the metric on
    statistics: tuple or list of str, or str
        statistics to run the metric on.
        If run_stats is False, will only return the main statistic
    label: str
        label to use for the metric
    func_args: dict
        arguments to pass to the underlying metric function from python acoustics
    """
    return self._parse_method("PythonAcoustics", metric)

reload

reload()

Reload the settings from the yaml file.

Source code in soundscapy/analysis/_AnalysisSettings.py
def reload(self):
    """Reload the settings from the yaml file."""
    return self.from_yaml(self.filepath, self.run_stats, self.force_run_all)

to_yaml

to_yaml(filename)

Save settings to a yaml file.

PARAMETER DESCRIPTION
filename

filename of the yaml file

TYPE: Path object or str

Source code in soundscapy/analysis/_AnalysisSettings.py
def to_yaml(self, filename: Union[Path, str]):
    """Save settings to a yaml file.

    Parameters
    ----------
    filename : Path object or str
        filename of the yaml file
    """
    with open(filename, "w") as f:
        yaml.dump(self, f)

get_default_yaml

get_default_yaml(save_as='default_settings.yaml')

Retrieves the default settings for analysis from the GitHub repository and saves them to a file.

PARAMETER DESCRIPTION
save_as

The name of the file to save the default settings to. Defaults to "default_settings.yaml".

TYPE: str DEFAULT: 'default_settings.yaml'

Source code in soundscapy/analysis/_AnalysisSettings.py
def get_default_yaml(save_as="default_settings.yaml"):
    """
    Retrieves the default settings for analysis from the GitHub repository
    and saves them to a file.

    Parameters
    ----------
    save_as : str, optional
        The name of the file to save the default settings to. Defaults to
        "default_settings.yaml".
    """
    print("Downloading default settings from GitHub...")
    urllib.request.urlretrieve(
        "https://raw.githubusercontent.com/MitchellAcoustics/Soundscapy/main/soundscapy/analysis/default_settings.yaml",
        save_as,
    )
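
Example

A sketch for editing a local copy of the defaults, assuming get_default_yaml is importable from soundscapy.analysis (adjust the import path to wherever your version exposes it):

from soundscapy import AnalysisSettings  # assumed top-level export
from soundscapy.analysis import get_default_yaml  # assumed import path

# Download the packaged defaults, edit the yaml by hand, then load it
get_default_yaml(save_as="my_settings.yaml")
settings = AnalysisSettings.from_yaml("my_settings.yaml")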

Binaural Metrics

add_results

add_results(results_df, metric_results)

Add results to MultiIndex dataframe

PARAMETER DESCRIPTION
results_df

MultiIndex dataframe to add results to

TYPE: DataFrame

metric_results

MultiIndex dataframe of results to add

TYPE: DataFrame

RETURNS DESCRIPTION
DataFrame

Index includes "Recording" and "Channel" with a column for each index.

Source code in soundscapy/analysis/binaural.py
def add_results(results_df: pd.DataFrame, metric_results: pd.DataFrame):
    """Add results to MultiIndex dataframe

    Parameters
    ----------
    results_df : pd.DataFrame
        MultiIndex dataframe to add results to
    metric_results : pd.DataFrame
        MultiIndex dataframe of results to add

    Returns
    -------
    pd.DataFrame
        Index includes "Recording" and "Channel" with a column for each index.
    """
    # TODO: Add check for whether all of the recordings have rows in the dataframe
    # If not, add new rows first

    if not set(metric_results.columns).issubset(set(results_df.columns)):
        # Check if results_df already has the columns in results
        results_df = results_df.join(metric_results)
    else:
        results_df.update(metric_results, errors="ignore")
    return results_df
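
Example

A merging sketch, assuming results_df was prepared with prep_multiindex_df (documented below), b is a calibrated Binaural recording, and add_results is importable from soundscapy.analysis.binaural:

from soundscapy.analysis.binaural import add_results  # assumed import path

laeq = b.pyacoustics_metric("LAeq", as_df=True)
results_df = add_results(results_df, laeq)

indices = b.maad_metric("all_temporal_alpha_indices", as_df=True)
results_df = add_results(results_df, indices)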

maad_metric_2ch

maad_metric_2ch(b, metric, channel_names=('Left', 'Right'), as_df=False, verbose=False, func_args={})

Run a metric from the scikit-maad library (or suite of indices) on a binaural signal.

Currently only supports running all the alpha indices at once.

PARAMETER DESCRIPTION
b

Binaural signal to calculate the alpha indices for

TYPE: Binaural

metric

Metric to calculate

TYPE: {"all_temporal_alpha_indices", "all_spectral_alpha_indices"}

channel_names

Custom names for the channels, by default ("Left", "Right").

TYPE: tuple or list DEFAULT: ('Left', 'Right')

as_df

Whether to return a pandas DataFrame, by default False. If True, returns a MultiIndex DataFrame with ("Recording", "Channel") as the index.

TYPE: bool DEFAULT: False

verbose

Whether to print status updates, by default False

TYPE: bool DEFAULT: False

func_args

Additional arguments to pass to the metric function, by default {}

TYPE: dict DEFAULT: {}

RETURNS DESCRIPTION
dict or DataFrame

Dictionary of results if as_df is False, otherwise a pandas DataFrame

See Also

scikit-maad library
sq_metrics.maad_metric_1ch

Source code in soundscapy/analysis/binaural.py
def maad_metric_2ch(
    b,
    metric: str,
    channel_names: Union[tuple, list] = ("Left", "Right"),
    as_df: bool = False,
    verbose: bool = False,
    func_args={},
):
    """Run a metric from the scikit-maad library (or suite of indices) on a binaural signal.

    Currently only supports running all the alpha indices at once.

    Parameters
    ----------
    b : Binaural
        Binaural signal to calculate the alpha indices for
    metric : {"all_temporal_alpha_indices", "all_spectral_alpha_indices"}
        Metric to calculate
    channel_names : tuple or list, optional
        Custom names for the channels, by default ("Left", "Right").
    as_df : bool, optional
        Whether to return a pandas DataFrame, by default False
        If True, returns a MultiIndex Dataframe with ("Recording", "Channel") as the index.
    verbose : bool, optional
        Whether to print status updates, by default False
    func_args : dict, optional
        Additional arguments to pass to the metric function, by default {}

    Returns
    -------
    dict or pd.DataFrame
        Dictionary of results if as_df is False, otherwise a pandas DataFrame

    See Also
    --------
    scikit-maad library
    `sq_metrics.maad_metric_1ch`

    """
    if b.channels != 2:
        raise ValueError("Must be 2 channel signal. Use `maad_metric_1ch` instead.")
    if verbose:
        print(f" - Calculating scikit-maad {metric}")
    res_l = maad_metric_1ch(b[0], metric, as_df=False)
    res_r = maad_metric_1ch(b[1], metric, as_df=False)
    res = {channel_names[0]: res_l, channel_names[1]: res_r}
    if not as_df:
        return res
    try:
        rec = b.recording
    except AttributeError:
        rec = 0
    df = pd.DataFrame.from_dict(res, orient="index")
    df["Recording"] = rec
    df["Channel"] = df.index
    df.set_index(["Recording", "Channel"], inplace=True)
    return df

mosqito_metric_2ch

mosqito_metric_2ch(b, metric, statistics=(5, 10, 50, 90, 95, 'avg', 'max', 'min', 'kurt', 'skew'), label=None, channel_names=('Left', 'Right'), as_df=False, return_time_series=False, parallel=True, verbose=False, func_args={})

Function for calculating binaural sound quality metrics from MoSQITo.

PARAMETER DESCRIPTION
b

Binaural signal to calculate the sound quality indices for

TYPE: Binaural

metric

TYPE: {"loudness_zwtv", "sharpness_din_from_loudness", "sharpness_din_perseg",

statistics

List of level statistics to calculate (e.g. L_5, L_90, etc.), by default (5, 10, 50, 90, 95, "avg", "max", "min", "kurt", "skew")

TYPE: tuple or list DEFAULT: (5, 10, 50, 90, 95, 'avg', 'max', 'min', 'kurt', 'skew')

label

Label to use for the metric in the results dictionary, by default None. If None, will pull from the default label for that metric given in DEFAULT_LABELS.

TYPE: str DEFAULT: None

channel_names

Custom names for the channels, by default ("Left", "Right")

TYPE: tuple or list DEFAULT: ('Left', 'Right')

as_df

Whether to return a pandas DataFrame, by default False. If True, returns a MultiIndex DataFrame with ("Recording", "Channel") as the index.

TYPE: bool DEFAULT: False

return_time_series

Whether to return the time series of the metric, by default False. Only works for metrics that return a time series array. The time series cannot be returned in a dataframe; a warning is raised if both as_df and return_time_series are True, and only the DataFrame with the other stats is returned.

TYPE: bool DEFAULT: False

parallel

Whether to run the channels in parallel, by default True. If False, will run each channel sequentially. If being run as part of a larger parallel analysis (e.g. processing many recordings at once), this will automatically be set to False.

TYPE: bool DEFAULT: True

verbose

Whether to print status updates, by default False

TYPE: bool DEFAULT: False

func_args

Additional arguments to pass to the metric function, by default {}

TYPE: dict DEFAULT: {}

RETURNS DESCRIPTION
dict or DataFrame

Dictionary of results if as_df is False, otherwise a pandas DataFrame

Source code in soundscapy/analysis/binaural.py
def mosqito_metric_2ch(
    b,
    metric: str,
    statistics: Union[tuple, list] = (
        5,
        10,
        50,
        90,
        95,
        "avg",
        "max",
        "min",
        "kurt",
        "skew",
    ),
    label: str = None,
    channel_names: Union[tuple, list] = ("Left", "Right"),
    as_df: bool = False,
    return_time_series: bool = False,
    parallel: bool = True,
    verbose: bool = False,
    func_args={},
):
    """function for calculating metrics from Mosqito.

    Parameters
    ----------
    b : Binaural
        Binaural signal to calculate the sound quality indices for
    metric : {"loudness_zwtv", "sharpness_din_from_loudness", "sharpness_din_perseg",
    "sharpness_din_tv", "roughness_dw"}
        Metric to calculate
    statistics : tuple or list, optional
        List of level statistics to calculate (e.g. L_5, L_90, etc.),
            by default (5, 10, 50, 90, 95, "avg", "max", "min", "kurt", "skew")
    label : str, optional
        Label to use for the metric in the results dictionary, by default None
        If None, will pull from default label for that metric given in DEFAULT_LABELS
    channel_names : tuple or list, optional
        Custom names for the channels, by default ("Left", "Right")
    as_df : bool, optional
        Whether to return a pandas DataFrame, by default False
        If True, returns a MultiIndex Dataframe with ("Recording", "Channel") as the index.
    return_time_series : bool, optional
        Whether to return the time series of the metric, by default False
        Only works for metrics that return a time series array.
        Cannot be returned in a dataframe. Will raise a warning if both `as_df`
        and `return_time_series` are True and will only return the DataFrame with the other stats.
    parallel : bool, optional
        Whether to run the channels in parallel, by default True
        If False, will run each channel sequentially.
        If being run as part of a larger parallel analysis (e.g. processing many recordings at once), this will
        automatically be set to False.
    verbose : bool, optional
        Whether to print status updates, by default False
    func_args : dict, optional
        Additional arguments to pass to the metric function, by default {}

    Returns
    -------
    dict or pd.DataFrame
        Dictionary of results if as_df is False, otherwise a pandas DataFrame
    """
    if b.channels != 2:
        raise ValueError("Must be 2 channel signal. Use `mosqito_metric_1ch` instead.")
    if verbose:
        if metric == "sharpness_din_from_loudness":
            print(
                " - Calculating MoSQITo metrics: `sharpness_din` from `loudness_zwtv`"
            )
        else:
            print(f" - Calculating MoSQITo metric: {metric}")

    # Make sure we're not already running in a parallel process
    # (e.g. if called from `parallel_process`)
    if mp.current_process().daemon:
        parallel = False
    if parallel:
        res = _parallel_mosqito_metric_2ch(
            b, metric, statistics, label, channel_names, return_time_series
        )

    else:
        res_l = mosqito_metric_1ch(
            b[0],
            metric,
            statistics,
            label,
            as_df=False,
            return_time_series=return_time_series,
            func_args=func_args,
        )

        res_r = mosqito_metric_1ch(
            b[1],
            metric,
            statistics,
            label,
            as_df=False,
            return_time_series=return_time_series,
            func_args=func_args,
        )

        res = {channel_names[0]: res_l, channel_names[1]: res_r}
    if not as_df:
        return res
    try:
        rec = b.recording
    except AttributeError:
        rec = 0
    df = pd.DataFrame.from_dict(res, orient="index")
    df["Recording"] = rec
    df["Channel"] = df.index
    df.set_index(["Recording", "Channel"], inplace=True)
    return df
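
Example

A sketch assuming b is a calibrated Binaural signal and mosqito_metric_2ch is importable from soundscapy.analysis.binaural:

from soundscapy.analysis.binaural import mosqito_metric_2ch  # assumed import path

# Roughness on both channels, run sequentially, with custom channel names
res = mosqito_metric_2ch(
    b, "roughness_dw", channel_names=("L", "R"), as_df=True, parallel=False
)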

prep_multiindex_df

prep_multiindex_df(dictionary, label='Leq', incl_metric=True)

Helper to prepare a MultiIndex dataframe from a dictionary of results

PARAMETER DESCRIPTION
dictionary

Dict of results with recording name as key, channels {"Left", "Right"} as second key, and Leq metric as value

TYPE: dict

label

Name of metric included, by default "Leq"

TYPE: str DEFAULT: 'Leq'

incl_metric

Whether to include the metric value in the resulting dataframe, by default True. If False, will only set up the DataFrame with the proper MultiIndex.

TYPE: bool DEFAULT: True

RETURNS DESCRIPTION
DataFrame

Index includes "Recording" and "Channel" with a column for each index if incl_metric.

Source code in soundscapy/analysis/binaural.py
def prep_multiindex_df(dictionary: dict, label: str = "Leq", incl_metric: bool = True):
    """df help to prepare a MultiIndex dataframe from a dictionary of results

    Parameters
    ----------
    dictionary : dict
        Dict of results with recording name as key, channels {"Left", "Right"} as second key, and Leq metric as value
    label : str, optional
        Name of metric included, by default "Leq"
    incl_metric : bool, optional
        Whether to include the metric value in the resulting dataframe, by default True
        If False, will only set up the DataFrame with the proper MultiIndex
    Returns
    -------
    pd.DataFrame
        Index includes "Recording" and "Channel" with a column for each index if `incl_metric`.

    """
    new_dict = {}
    for outerKey, innerDict in dictionary.items():
        for innerKey, values in innerDict.items():
            new_dict[(outerKey, innerKey)] = values
    idx = pd.MultiIndex.from_tuples(new_dict.keys())
    df = pd.DataFrame(new_dict.values(), index=idx, columns=[label])
    df.index.names = ["Recording", "Channel"]
    if not incl_metric:
        df = df.drop(columns=[label])
    return df
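
Example

A sketch with hypothetical recording names and calibration levels:

from soundscapy.analysis.binaural import prep_multiindex_df  # assumed import path

# Calibration levels per recording and channel, e.g. as measured on site
levels = {
    "REC01": {"Left": 65.2, "Right": 64.8},
    "REC02": {"Left": 70.1, "Right": 69.5},
}
df = prep_multiindex_df(levels, label="Leq", incl_metric=True)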

process_all_metrics

process_all_metrics(b, analysis_settings, parallel=True, verbose=False)

Loop through all metrics included in analysis_settings and add results to results_df

PARAMETER DESCRIPTION
b

Binaural signal to process

TYPE: Binaural

analysis_settings

Settings for analysis, including run tag for whether to run a metric

TYPE: AnalysisSettings

parallel

Whether to run the channels in parallel for binaural.mosqito_metric_2ch, by default True. If False, will run each channel sequentially. If being run as part of a larger parallel analysis (e.g. processing many recordings at once), this will automatically be set to False. Applies only to binaural.mosqito_metric_2ch; the other metrics are fast enough that parallel processing is unnecessary.

TYPE: bool DEFAULT: True

verbose

Whether to print status updates, by default False

TYPE: bool DEFAULT: False

RETURNS DESCRIPTION
DataFrame

MultiIndex DataFrame with results from all metrics for one Binaural recording

Source code in soundscapy/analysis/binaural.py
def process_all_metrics(
    b,
    analysis_settings,
    parallel: bool = True,
    verbose: bool = False,
):
    """Loop through all metrics included in `analysis_settings` and add results to `results_df`

    Parameters
    ----------
    b : Binaural
        Binaural signal to process
    analysis_settings : AnalysisSettings
        Settings for analysis, including `run` tag for whether to run a metric
    parallel : bool, optional
        Whether to run the channels in parallel for `binaural.mosqito_metric_2ch`, by default True
        If False, will run each channel sequentially.
        If being run as part of a larger parallel analysis (e.g. processing many recordings at once), this will
        automatically be set to False.
        Applies only to `binaural.mosqito_metric_2ch`. The other metrics are considered fast enough not to bother.
    verbose : bool, optional
        Whether to print status updates, by default False

    Returns
    -------
    pd.DataFrame
        MultiIndex DataFrame with results from all metrics for one Binaural recording
    """
    if verbose:
        print(f"Processing {b.recording}")

    idx = pd.MultiIndex.from_tuples(((b.recording, "Left"), (b.recording, "Right")))
    results_df = pd.DataFrame(index=idx)
    results_df.index.names = ["Recording", "Channel"]

    # Count number of metrics to run
    metric_count = 0
    for library in analysis_settings.keys():
        if library not in ["PythonAcoustics", "scikit-maad", "MoSQITo"]:
            pass
        else:
            for metric in analysis_settings[library].keys():
                if analysis_settings[library][metric]["run"]:
                    metric_count += 1

    # Loop through options in analysis_settings
    for library in analysis_settings.keys():
        # Python Acoustics metrics
        if library == "PythonAcoustics":
            for metric in analysis_settings[library].keys():
                results_df = pd.concat(
                    (
                        results_df,
                        b.pyacoustics_metric(
                            metric, verbose=verbose, analysis_settings=analysis_settings
                        ),
                    ),
                    axis=1,
                )
        # MoSQITO metrics
        elif library == "MoSQITo":
            for metric in analysis_settings[library].keys():
                results_df = pd.concat(
                    (
                        results_df,
                        b.mosqito_metric(
                            metric,
                            parallel=parallel,
                            verbose=verbose,
                            analysis_settings=analysis_settings,
                        ),
                    ),
                    axis=1,
                )
        # scikit-maad metrics
        elif library == "scikit-maad":
            for metric in analysis_settings[library].keys():
                results_df = pd.concat(
                    (
                        results_df,
                        b.maad_metric(
                            metric, verbose=verbose, analysis_settings=analysis_settings
                        ),
                    ),
                    axis=1,
                )

    return results_df
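
Example

A batch-processing sketch over a hypothetical folder of wav files, assuming Binaural, AnalysisSettings and process_all_metrics are importable as shown:

from pathlib import Path

import pandas as pd

from soundscapy import AnalysisSettings, Binaural  # assumed top-level exports
from soundscapy.analysis.binaural import process_all_metrics  # assumed import path

settings = AnalysisSettings.default()
frames = []
for wav in sorted(Path("recordings").glob("*.wav")):
    b = Binaural.from_wav(wav, calibrate_to=65)
    frames.append(process_all_metrics(b, settings))

all_results = pd.concat(frames)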

pyacoustics_metric_2ch

pyacoustics_metric_2ch(b, metric, statistics=(5, 10, 50, 90, 95, 'avg', 'max', 'min', 'kurt', 'skew'), label=None, channel_names=('Left', 'Right'), as_df=False, return_time_series=False, verbose=False, func_args={})

Run a metric from the python acoustics library on a Binaural object.

PARAMETER DESCRIPTION
b

Binaural signal to calculate the metric for

TYPE: Binaural

metric

The metric to run

TYPE: {"LZeq", "Leq", "LAeq", "LCeq", "SEL"}

statistics

List of level statistics to calculate (e.g. L_5, L_90, etc), by default (5, 10, 50, 90, 95, "avg", "max", "min", "kurt", "skew")

TYPE: tuple or list DEFAULT: (5, 10, 50, 90, 95, 'avg', 'max', 'min', 'kurt', 'skew')

label

Label to use for the metric in the results dictionary, by default None. If None, will pull from the default label for that metric given in DEFAULT_LABELS.

TYPE: str DEFAULT: None

channel_names

Custom names for the channels, by default ("Left", "Right")

TYPE: tuple or list DEFAULT: ('Left', 'Right')

as_df

Whether to return a pandas DataFrame, by default False. If True, returns a MultiIndex DataFrame with ("Recording", "Channel") as the index.

TYPE: bool DEFAULT: False

return_time_series

Whether to return the time series of the metric, by default False. Cannot return the time series if as_df is True.

TYPE: bool DEFAULT: False

verbose

Whether to print status updates, by default False

TYPE: bool DEFAULT: False

func_args

Arguments to pass to the metric function, by default {}

TYPE: dict DEFAULT: {}

RETURNS DESCRIPTION
dict or DataFrame

Dictionary of results if as_df is False, otherwise a pandas DataFrame

See Also

sq_metrics.pyacoustics_metric_1ch

Source code in soundscapy/analysis/binaural.py
def pyacoustics_metric_2ch(
    b,
    metric: str,
    statistics: Union[tuple, list] = (
        5,
        10,
        50,
        90,
        95,
        "avg",
        "max",
        "min",
        "kurt",
        "skew",
    ),
    label: str = None,
    channel_names: Union[tuple, list] = ("Left", "Right"),
    as_df: bool = False,
    return_time_series: bool = False,
    verbose: bool = False,
    func_args={},
):
    """Run a metric from the python acoustics library on a Binaural object.

    Parameters
    ----------
    b : Binaural
        Binaural signal to calculate the metric for
    metric : {"LZeq", "Leq", "LAeq", "LCeq", "SEL"}
        The metric to run
    statistics : tuple or list, optional
        List of level statistics to calculate (e.g. L_5, L_90, etc),
            by default (5, 10, 50, 90, 95, "avg", "max", "min", "kurt", "skew")
    label : str, optional
        Label to use for the metric in the results dictionary, by default None
        If None, will pull from default label for that metric given in DEFAULT_LABELS
    channel_names : tuple or list, optional
        Custom names for the channels, by default ("Left", "Right")
    as_df : bool, optional
        Whether to return a pandas DataFrame, by default False
        If True, returns a MultiIndex Dataframe with ("Recording", "Channel") as the index.
    return_time_series : bool, optional
        Whether to return the time series of the metric, by default False
        Cannot return time series if as_df is True
    verbose : bool, optional
        Whether to print status updates, by default False
    func_args : dict, optional
        Arguments to pass to the metric function, by default {}

    Returns
    -------
    dict or pd.DataFrame
        Dictionary of results if as_df is False, otherwise a pandas DataFrame

    See Also
    --------
    sq_metrics.pyacoustics_metric_1ch
    """
    if b.channels != 2:
        raise ValueError(
            "Must be 2 channel signal. Use `pyacoustics_metric_1ch instead`."
        )

    if verbose:
        print(f" - Calculating Python Acoustics metrics: {metric}")
    res_l = pyacoustics_metric_1ch(
        b[0],
        metric,
        statistics,
        label,
        as_df=False,
        return_time_series=return_time_series,
        func_args=func_args,
    )

    res_r = pyacoustics_metric_1ch(
        b[1],
        metric,
        statistics,
        label,
        as_df=False,
        return_time_series=return_time_series,
        func_args=func_args,
    )

    res = {channel_names[0]: res_l, channel_names[1]: res_r}
    if not as_df:
        return res
    try:
        rec = b.recording
    except AttributeError:
        rec = 0
    df = pd.DataFrame.from_dict(res, orient="index")
    df["Recording"] = rec
    df["Channel"] = df.index
    df.set_index(["Recording", "Channel"], inplace=True)
    return df
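
Example

A minimal usage sketch, not part of the library source: the file name, calibration levels and the top-level Binaural import are placeholder assumptions, and the result keys shown assume DEFAULT_LABELS maps "LAeq" to "LAeq".

from soundscapy import Binaural  # top-level import assumed
from soundscapy.analysis.binaural import pyacoustics_metric_2ch

# Load a stereo recording and calibrate each channel (hypothetical file and levels)
b = Binaural.from_wav("demo_recording.wav")
b.calibrate_to((63.0, 62.5), inplace=True)

# Dictionary output: one sub-dictionary per channel, keyed by channel name
res = pyacoustics_metric_2ch(b, "LAeq", statistics=(5, 95, "avg"))
print(res["Left"]["LAeq_95"], res["Right"]["LAeq_95"])

# DataFrame output: MultiIndex ("Recording", "Channel")
df = pyacoustics_metric_2ch(b, "LAeq", as_df=True)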

maad_metric_1ch

maad_metric_1ch(s, metric, as_df=False, verbose=False, func_args={})

Run a metric from the scikit-maad library (or suite of indices) on a single channel signal.

Currently only supports running all of the alpha indices at once.

PARAMETER DESCRIPTION
s

Single channel signal to calculate the alpha indices for

TYPE: Signal or Binaural (single channel)

metric

Metric to calculate

TYPE: {"all_temporal_alpha_indices", "all_spectral_alpha_indices"}

as_df

Whether to return a pandas DataFrame, by default False. If True, returns a DataFrame indexed by "Recording".

TYPE: bool DEFAULT: False

verbose

Whether to print status updates, by default False

TYPE: bool DEFAULT: False

**func_args

Additional keyword arguments to pass to the metric function, by default {}

TYPE: dict DEFAULT: {}

RETURNS DESCRIPTION
dict or DataFrame

Dictionary of results if as_df is False, otherwise a pandas DataFrame

See Also

maad.features.all_spectral_alpha_indices
maad.features.all_temporal_alpha_indices

Source code in soundscapy/analysis/metrics.py
def maad_metric_1ch(
    s, metric: str, as_df: bool = False, verbose: bool = False, func_args={}
):
    """Run a metric from the scikit-maad library (or suite of indices) on a single channel signal.

    Currently only supports running all of the alpha indices at once.

    Parameters
    ----------
    s : Signal or Binaural (single channel)
        Single channel signal to calculate the alpha indices for
    metric : {"all_temporal_alpha_indices", "all_spectral_alpha_indices"}
        Metric to calculate
    as_df : bool, optional
        Whether to return a pandas DataFrame, by default False.
        If True, returns a DataFrame indexed by "Recording".
    verbose : bool, optional
        Whether to print status updates, by default False
    **func_args : dict, optional
        Additional keyword arguments to pass to the metric function, by default {}

    Returns
    -------
    dict or pd.DataFrame
        Dictionary of results if as_df is False, otherwise a pandas DataFrame

    See Also
    --------
    maad.features.all_spectral_alpha_indices
    maad.features.all_temporal_alpha_indices
    """
    # Checks and status
    if s.channels != 1:
        raise ValueError("Signal must be single channel")
    if verbose:
        print(f" - Calculating scikit-maad {metric}")
    # Start the calc
    if metric == "all_spectral_alpha_indices":
        Sxx, tn, fn, ext = spectrogram(
            s, s.fs, **func_args
        )  # spectral requires the spectrogram
        res = all_spectral_alpha_indices(Sxx, tn, fn, extent=ext, **func_args)[0]

    elif metric == "all_temporal_alpha_indices":
        res = all_temporal_alpha_indices(s, s.fs, **func_args)
    else:
        raise ValueError(f"Metric {metric} not recognized.")
    if not as_df:
        return res.to_dict("records")[0]
    try:
        res["Recording"] = s.recording
        res.set_index(["Recording"], inplace=True)
        return res
    except AttributeError:
        return res
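
Example

A minimal usage sketch, not part of the library source: the file name and the top-level Binaural import are placeholder assumptions.

from soundscapy import Binaural  # top-level import assumed
from soundscapy.analysis.metrics import maad_metric_1ch

b = Binaural.from_wav("demo_recording.wav")  # hypothetical file

# Pass a single-channel slice of the Binaural signal
temporal = maad_metric_1ch(b[0], "all_temporal_alpha_indices", as_df=True)
spectral = maad_metric_1ch(b[0], "all_spectral_alpha_indices", verbose=True)

# spectral is a dict of the alpha indices; extra options for the underlying
# scikit-maad functions can be forwarded through func_args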

mosqito_metric_1ch

mosqito_metric_1ch(s, metric, statistics=(5, 10, 50, 90, 95, 'avg', 'max', 'min', 'kurt', 'skew'), label=None, as_df=False, return_time_series=False, func_args={})

Calculating a metric and accompanying statistics from Mosqito.

PARAMETER DESCRIPTION
s

Single channel signal to calculate the sound quality indices for

TYPE: Signal or Binaural (single channel)

metric

TYPE: {"loudness_zwtv", "sharpness_din_from_loudness", "sharpness_din_perseg", "sharpness_din_tv",

statistics

List of level statistics to calculate (e.g. L_5, L_90, etc.), by default (5, 10, 50, 90, 95, "avg", "max", "min", "kurt", "skew")

TYPE: tuple or list DEFAULT: (5, 10, 50, 90, 95, 'avg', 'max', 'min', 'kurt', 'skew')

label

Label to use for the metric in the results dictionary, by default None. If None, will pull from the default label for that metric given in DEFAULT_LABELS.

TYPE: str DEFAULT: None

as_df

Return the results as a dataframe, by default False

TYPE: bool DEFAULT: False

return_time_series

Return the time series array of the metric, by default False. Only works for metrics that return a time series array. Cannot be returned in a DataFrame. If both as_df and return_time_series are True, a warning is raised and only the DataFrame with the other statistics is returned.

TYPE: bool DEFAULT: False

**func_args

Additional keyword arguments to pass to the metric function, by default {}

TYPE: dict DEFAULT: {}

RETURNS DESCRIPTION
dict

Dictionary of the calculated statistics. Each key is the metric label plus the statistic (e.g. N_5, N_90 for loudness); each value is the calculated statistic.

RAISES DESCRIPTION
ValueError

Signal must be single channel. Can be a slice of a multichannel signal.

ValueError

Metric is not recognized. Must be one of {"loudness_zwtv", "sharpness_din_from_loudness", "sharpness_din_perseg", "sharpness_din_tv", "roughness_dw"}

Warning

If both as_df and return_time_series are True, a warning is issued and only the DataFrame of statistics is returned.

See Also

mosqito.sq_metrics.loudness_zwtv : MoSQito Loudness calculation
mosqito.sq_metrics.roughness_dw : MoSQito Roughness calculation
mosqito.sq_metrics.sharpness_din_from_loudness : MoSQito Sharpness calculation

Source code in soundscapy/analysis/metrics.py
def mosqito_metric_1ch(
    s,
    metric: str,
    statistics: Union[tuple, list] = (
        5,
        10,
        50,
        90,
        95,
        "avg",
        "max",
        "min",
        "kurt",
        "skew",
    ),
    label=None,
    as_df: bool = False,
    return_time_series: bool = False,
    func_args={},
):
    """Calculating a metric and accompanying statistics from Mosqito.

    Parameters
    ----------
    s : Signal or Binaural (single channel)
        Single channel signal to calculate the sound quality indices for
    metric : {"loudness_zwtv", "sharpness_din_from_loudness", "sharpness_din_perseg", "sharpness_din_tv",
    "roughness_zwtv"}
        Metric to calculate
    statistics : tuple or list
        List of level statistics to calculate (e.g. L_5, L_90, etc.),
            by default (5, 10, 50, 90, 95, "avg", "max", "min", "kurt", "skew")
    label : str, optional
        Label to use for the metric in the results dictionary, by default None
        If None, will pull from default label for that metric given in DEFAULT_LABELS
    as_df : bool, optional
        Return the results as a dataframe, by default False
    return_time_series : bool, optional
        Return the time series array of the metric, by default False
        Only works for metrics that return a time series array.
        Cannot be returned in a dataframe. Will raise a warning if both `as_df`
        and `return_time_series` are True and will only return the DataFrame with the other stats.
    **func_args : dict, optional
        Additional keyword arguments to pass to the metric function, by default {}

    Returns
    -------
    dict
        Dictionary of the calculated statistics.
        Each key is the metric label plus the statistic (e.g. N_5, N_90 for loudness);
        each value is the calculated statistic.

    Raises
    ------
    ValueError
        Signal must be single channel. Can be a slice of a multichannel signal.
    ValueError
        Metric is not recognized. Must be one of {"loudness_zwtv", "sharpness_din_from_loudness",
        "sharpness_din_perseg", "roughness_dw"}
    Warning
        If both `as_df` and `return_time_series` are True, a warning is issued and only the DataFrame of statistics is returned.

    See Also
    --------
    mosqito.sq_metrics.loudness_zwtv : MoSQito Loudness calculation
    mosqito.sq_metrics.roughness_dw : MoSQito Roughness calculation
    mosqito.sq_metrics.sharpness_din_from_loudness : MoSQito Sharpness calculation
    """
    # Checks and warnings
    if s.channels != 1:
        raise ValueError("Signal must be single channel")
    try:
        label = label or DEFAULT_LABELS[metric]
    except KeyError as e:
        raise ValueError(f"Metric {metric} not recognized.") from e
    if as_df and return_time_series:
        warnings.warn(
            "Cannot return both a dataframe and time series. Returning dataframe only."
        )
        return_time_series = False

    # Start the calc
    res = {}
    if metric == "loudness_zwtv":
        N, N_spec, bark_axis, time_axis = loudness_zwtv(s, s.fs, **func_args)
        res = _stat_calcs(label, N, res, statistics)
        if return_time_series:
            res[f"{label}_ts"] = (time_axis, N)
    elif metric == "roughness_dw":
        R, R_spec, bark_axis, time_axis = roughness_dw(s, s.fs, **func_args)
        res = _stat_calcs(label, R, res, statistics)
        if return_time_series:
            res[f"{label}_ts"] = (time_axis, R)
    elif metric == "sharpness_din_from_loudness":
        # The `sharpness_din_from_loudness` metric requires the loudness to be calculated first.
        field_type = func_args.get("field_type", "free")
        N, N_spec, bark_axis, time_axis = loudness_zwtv(s, s.fs, field_type=field_type)
        res = _stat_calcs("N", N, res, statistics)
        if return_time_series:
            res["N_ts"] = time_axis, N

        # Calculate the sharpness_din_from_loudness metric
        func_args.pop("field_type", None)
        S = sharpness_din_from_loudness(N, N_spec, **func_args)
        res = _stat_calcs(label, S, res, statistics)
        if return_time_series:
            res[f"{label}_ts"] = (time_axis, S)
    elif metric == "sharpness_din_perseg":
        S, time_axis = sharpness_din_perseg(s, s.fs, **func_args)
        res = _stat_calcs(label, S, res, statistics)
        if return_time_series:
            res[f"{label}_ts"] = (time_axis, S)
    elif metric == "sharpness_din_tv":
        S, time_axis = sharpness_din_tv(s, s.fs, **func_args)
        res = _stat_calcs(label, S, res, statistics)
        if return_time_series:
            res[f"{label}_ts"] = (time_axis, S)
    else:
        raise ValueError(f"Metric {metric} not recognized.")

    # Return the results in the requested format
    if not as_df:
        return res
    try:
        rec = s.recording
        return pd.DataFrame(res, index=[rec])
    except AttributeError:
        return pd.DataFrame(res, index=[0])
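
Example

A minimal usage sketch, not part of the library source: the file name and the top-level Binaural import are placeholder assumptions, and the "N_*" keys assume DEFAULT_LABELS maps "loudness_zwtv" to "N".

from soundscapy import Binaural  # top-level import assumed
from soundscapy.analysis.metrics import mosqito_metric_1ch

b = Binaural.from_wav("demo_recording.wav")  # hypothetical file

# Zwicker time-varying loudness statistics for the left channel
res = mosqito_metric_1ch(b[0], "loudness_zwtv", statistics=(5, 50, 95, "avg"))
print(res["N_5"], res["N_95"])

# Request the time series alongside the statistics (tuple of (time_axis, values))
res_ts = mosqito_metric_1ch(b[0], "loudness_zwtv", return_time_series=True)
time_axis, loudness = res_ts["N_ts"]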

pyacoustics_metric_1ch

pyacoustics_metric_1ch(s, metric, statistics=(5, 10, 50, 90, 95, 'avg', 'max', 'min', 'kurt', 'skew'), label=None, as_df=False, return_time_series=False, verbose=False, func_args={})

Run a metric from the pyacoustics library on a single channel object.

PARAMETER DESCRIPTION
s

Single channel signal to calculate the metric for

TYPE: Signal or Binaural (single channel slice)

metric

The metric to run

TYPE: {"LZeq", "Leq", "LAeq", "LCeq", "SEL"}

statistics

List of level statistics to calculate (e.g. L_5, L_90, etc), by default (5, 10, 50, 90, 95, "avg", "max", "min", "kurt", "skew")

TYPE: tuple or list DEFAULT: (5, 10, 50, 90, 95, 'avg', 'max', 'min', 'kurt', 'skew')

label

Label to use for the metric in the results dictionary, by default None. If None, will pull from the default label for that metric given in DEFAULT_LABELS.

TYPE: str DEFAULT: None

as_df

Whether to return a pandas DataFrame, by default False. If True, returns a DataFrame indexed by "Recording".

TYPE: bool DEFAULT: False

return_time_series

Whether to return the time series of the metric, by default False. Cannot return the time series if as_df is True.

TYPE: bool DEFAULT: False

verbose

Whether to print status updates, by default False

TYPE: bool DEFAULT: False

**func_args

Additional keyword arguments to pass to the metric function, by default {}

TYPE: dict DEFAULT: {}

RETURNS DESCRIPTION
dict

Dictionary of the calculated statistics. Each key is the metric label plus the statistic (e.g. LZeq_5, LZeq_90); each value is the calculated statistic.

RAISES DESCRIPTION
ValueError

Metric must be one of {"LZeq", "Leq", "LAeq", "LCeq", "SEL"}

See Also

acoustics

Source code in soundscapy/analysis/metrics.py
def pyacoustics_metric_1ch(
    s,
    metric: str,
    statistics: Union[list, tuple] = (
        5,
        10,
        50,
        90,
        95,
        "avg",
        "max",
        "min",
        "kurt",
        "skew",
    ),
    label: str = None,
    as_df: bool = False,
    return_time_series: bool = False,
    verbose: bool = False,
    func_args={},
):
    """Run a metric from the pyacoustics library on a single channel object.

    Parameters
    ----------
    s : Signal or Binaural (single channel slice)
        Single channel signal to calculate the metric for
    metric : {"LZeq", "Leq", "LAeq", "LCeq", "SEL"}
        The metric to run
    statistics : tuple or list, optional
        List of level statistics to calculate (e.g. L_5, L_90, etc),
            by default (5, 10, 50, 90, 95, "avg", "max", "min", "kurt", "skew")
    label : str, optional
        Label to use for the metric in the results dictionary, by default None
        If None, will pull from default label for that metric given in DEFAULT_LABELS
    as_df : bool, optional
        Whether to return a pandas DataFrame, by default False.
        If True, returns a DataFrame indexed by "Recording".
    return_time_series : bool, optional
        Whether to return the time series of the metric, by default False
        Cannot return time series if as_df is True
    verbose : bool, optional
        Whether to print status updates, by default False
    **func_args : dict, optional
        Additional keyword arguments to pass to the metric function, by default {}

    Returns
    -------
    dict
        Dictionary of the calculated statistics.
        Each key is the metric label plus the statistic (e.g. LZeq_5, LZeq_90);
        each value is the calculated statistic.

    Raises
    ------
    ValueError
        Metric must be one of {"LZeq", "Leq", "LAeq", "LCeq", "SEL"}

    See Also
    --------
    acoustics
    """
    if s.channels != 1:
        raise ValueError("Signal must be single channel")
    try:
        label = label or DEFAULT_LABELS[metric]
    except KeyError as e:
        raise ValueError(f"Metric {metric} not recognized.") from e
    if as_df and return_time_series:
        warnings.warn(
            "Cannot return both a dataframe and time series. Returning dataframe only."
        )

        return_time_series = False
    if verbose:
        print(f" - Calculating Python Acoustics: {metric} {statistics}")
    res = {}
    if metric in {"LZeq", "Leq", "LAeq", "LCeq"}:
        if metric in {"LZeq", "Leq"}:
            weighting = "Z"
        elif metric == "LAeq":
            weighting = "A"
        elif metric == "LCeq":
            weighting = "C"
        if "avg" in statistics or "mean" in statistics:
            stat = "avg" if "avg" in statistics else "mean"
            res[f"{label}"] = s.weigh(weighting).leq()
            statistics = list(statistics)
            statistics.remove(stat)
        if len(statistics) > 0:
            res = _stat_calcs(
                label, s.weigh(weighting).levels(**func_args)[1], res, statistics
            )

        if return_time_series:
            res[f"{label}_ts"] = s.weigh(weighting).levels(**func_args)
    elif metric == "SEL":
        res[f"{label}"] = s.sound_exposure_level()
    else:
        raise ValueError(f"Metric {metric} not recognized.")
    if not as_df:
        return res
    try:
        rec = s.recording
        return pd.DataFrame(res, index=[rec])
    except AttributeError:
        return pd.DataFrame(res, index=[0])
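
Example

A minimal usage sketch, not part of the library source: the file name and the top-level Binaural import are placeholder assumptions, and the result keys assume DEFAULT_LABELS maps "LAeq" to "LAeq".

from soundscapy import Binaural  # top-level import assumed
from soundscapy.analysis.metrics import pyacoustics_metric_1ch

b = Binaural.from_wav("demo_recording.wav")  # hypothetical file

# A-weighted level statistics for the right channel as a plain dictionary
res = pyacoustics_metric_1ch(b[1], "LAeq", statistics=(5, 95, "avg"), verbose=True)
# Expected keys: "LAeq" (the Leq itself), "LAeq_5", "LAeq_95"

# The same result as a one-row DataFrame indexed by the recording name
df = pyacoustics_metric_1ch(b[1], "LAeq", as_df=True)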

Parallel Processing

load_analyse_binaural

load_analyse_binaural(wav_file, levels, analysis_settings, verbose=True)

Load and analyse binaural file

PARAMETER DESCRIPTION
wav_file

Path to wav file

TYPE: Path

levels

Dictionary of calibration levels keyed by recording name, with "Left" and "Right" channel values

TYPE: dict

analysis_settings

Analysis settings

TYPE: AnalysisSettings

verbose

Print progress, by default True

TYPE: bool DEFAULT: True

RETURNS DESCRIPTION
results

Dictionary with results

TYPE: dict

Source code in soundscapy/analysis/parallel_processing.py
def load_analyse_binaural(wav_file, levels, analysis_settings, verbose=True):
    """Load and analyse binaural file

    Parameters
    ----------
    wav_file : Path
        Path to the wav file
    levels : dict
        Dictionary of calibration levels keyed by recording name, with "Left" and "Right" channel values
    analysis_settings : AnalysisSettings
        Analysis settings
    verbose : bool, optional
        Print progress, by default True

    Returns
    -------
    results : dict
        Dictionary with results
    """
    print(f"Processing {wav_file.stem}")
    b = Binaural.from_wav(wav_file)
    decibel = (levels[b.recording]["Left"], levels[b.recording]["Right"])
    b.calibrate_to(decibel, inplace=True)
    return process_all_metrics(b, analysis_settings, parallel=False, verbose=verbose)
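
Example

A minimal usage sketch, not part of the library source: the file path, calibration levels and the AnalysisSettings loading call are placeholder assumptions; check the AnalysisSettings documentation for the exact constructor.

from pathlib import Path
from soundscapy import AnalysisSettings  # top-level import assumed
from soundscapy.analysis.parallel_processing import load_analyse_binaural

# Assumption: the analysis settings are defined in a yaml file, loaded via AnalysisSettings
analysis_settings = AnalysisSettings.from_yaml("settings.yaml")

wav_file = Path("recordings/demo_recording.wav")  # hypothetical file
levels = {"demo_recording": {"Left": 63.0, "Right": 62.5}}  # keyed by recording name

results = load_analyse_binaural(wav_file, levels, analysis_settings, verbose=True)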

parallel_process

parallel_process(wav_files, results_df, levels, analysis_settings, verbose=True)

Parallel processing of binaural files

PARAMETER DESCRIPTION
wav_files

List of wav files

TYPE: list

results_df

Results dataframe

TYPE: DataFrame

levels

Dictionary with levels

TYPE: dict

analysis_settings

Analysis settings

TYPE: AnalysisSettings

verbose

Print progress, by default True

TYPE: bool DEFAULT: True

RETURNS DESCRIPTION
results_df

Results dataframe

TYPE: DataFrame

Source code in soundscapy/analysis/parallel_processing.py
def parallel_process(wav_files, results_df, levels, analysis_settings, verbose=True):
    """
    Parallel processing of binaural files

    Parameters
    ----------
    wav_files : list
        List of wav files
    results_df : pandas.DataFrame
        Results dataframe
    levels : dict
        Dictionary with levels
    analysis_settings : AnalysisSettings
        Analysis settings
    verbose : bool, optional
        Print progress, by default True

    Returns
    -------
    results_df : pandas.DataFrame
        Results dataframe
    """
    # Parallel processing with Pool.apply_async() without callback function

    pool = mp.Pool(mp.cpu_count() - 1)
    results = []
    result_objects = [
        pool.apply_async(
            load_analyse_binaural,
            args=(wav_file, levels, analysis_settings, verbose),
        )
        for wav_file in wav_files
    ]
    with tqdm(total=len(result_objects), desc="Processing files") as pbar:
        for r in result_objects:
            r.wait()
            results.append(r.get())
            pbar.update()
    # results = [r.get() for r in result_objects]

    pool.close()
    pool.join()

    for r in results:
        results_df = add_results(results_df, r)

    return results_df
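
Example

A minimal usage sketch, not part of the library source: the paths, calibration levels, AnalysisSettings loading call and the empty results_df are placeholder assumptions; in practice results_df should already have the structure expected by add_results.

from pathlib import Path
import pandas as pd
from soundscapy import AnalysisSettings  # top-level import assumed
from soundscapy.analysis.parallel_processing import parallel_process

analysis_settings = AnalysisSettings.from_yaml("settings.yaml")  # assumed constructor
wav_files = sorted(Path("recordings").glob("*.wav"))
levels = {f.stem: {"Left": 63.0, "Right": 62.5} for f in wav_files}  # placeholder calibration
results_df = pd.DataFrame()  # placeholder; normally prepared to match add_results

if __name__ == "__main__":  # guard required for multiprocessing on spawn-based platforms
    results_df = parallel_process(wav_files, results_df, levels, analysis_settings)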