Binaural Analysis

This section provides an overview of the binaural analysis tools available in Soundscapy. It includes a brief description of each tool, as well as information on how to access and use them.

Binaural Metrics

Provides tools for working with binaural audio signals.

The main class, Binaural, extends the Signal class from the Acoustic Toolbox library to provide specialized functionality for binaural recordings. It supports various psychoacoustic metrics and analysis techniques using libraries such as mosqito, maad, and acoustic_toolbox.

CLASS DESCRIPTION
Binaural : A class for processing and analyzing binaural audio signals.
Notes

This module requires the following external libraries:

- acoustics
- mosqito
- maad
- acoustic_toolbox

Examples:

>>> # xdoctest: +SKIP
>>> from soundscapy.audio import Binaural
>>> signal = Binaural.from_wav("audio.wav")
>>> results = signal.process_all_metrics(analysis_settings)

Binaural

Bases: Signal

A class for processing and analyzing binaural audio signals.

This class extends the Signal class from the acoustic_toolbox library to provide specialized functionality for binaural recordings. It supports various psychoacoustic metrics and analysis techniques using libraries such as mosqito, maad, and acoustic_toolbox.

ATTRIBUTE DESCRIPTION
fs

Sampling frequency of the signal.

TYPE: float

recording

Name or identifier of the recording.

TYPE: str

Notes

This class only supports 2-channel (stereo) audio signals.

METHOD DESCRIPTION
__array_finalize__

Finalize the new Binaural object.

__new__

Create a new Binaural object.

acoustics_metric

Run a metric from the acoustic_toolbox library.

calibrate_to

Calibrate the binaural signal to predefined Leq/dB levels.

from_wav

Load a wav file and return a Binaural object.

fs_resample

Resample the signal to a new sampling frequency.

maad_metric

Run a metric from the scikit-maad library.

mosqito_metric

Run a metric from the mosqito library.

process_all_metrics

Process all metrics specified in the analysis settings.

pyacoustics_metric

Run a metric from the pyacoustics library (deprecated).

__array_finalize__

__array_finalize__(obj)

Finalize the new Binaural object.

This method is called for all new Binaural objects.

PARAMETER DESCRIPTION
obj

The object from which the new object was created.

TYPE: Binaural or None

Source code in soundscapy/audio/binaural.py
def __array_finalize__(self, obj: "Binaural | None") -> None:
    """
    Finalize the new Binaural object.

    This method is called for all new Binaural objects.

    Parameters
    ----------
    obj : Binaural or None
        The object from which the new object was created.

    """
    if obj is None:
        return
    self.fs = getattr(obj, "fs", None)
    self.recording = getattr(obj, "recording", "Rec")

__new__

__new__(data, fs, recording='Rec')

Create a new Binaural object.

PARAMETER DESCRIPTION
data

The audio data.

TYPE: array_like

fs

Sampling frequency of the signal.

TYPE: float

recording

Name or identifier of the recording. Default is "Rec".

TYPE: str DEFAULT: 'Rec'

RETURNS DESCRIPTION
Binaural

A new Binaural object.

RAISES DESCRIPTION
ValueError

If the input signal is not 2-channel.

Source code in soundscapy/audio/binaural.py
def __new__(
    cls, data: np.ndarray, fs: float | None, recording: str = "Rec"
) -> "Binaural":
    """
    Create a new Binaural object.

    Parameters
    ----------
    data : array_like
        The audio data.
    fs : float
        Sampling frequency of the signal.
    recording : str, optional
        Name or identifier of the recording. Default is "Rec".

    Returns
    -------
    Binaural
        A new Binaural object.

    Raises
    ------
    ValueError
        If the input signal is not 2-channel.

    """
    obj = super().__new__(cls, data, fs).view(cls)
    obj.recording = recording
    if obj.channels != ALLOWED_BINAURAL_CHANNELS:
        logger.error(
            f"Attempted to create Binaural object with {obj.channels} channels"
        )
        msg = "Binaural class only supports 2 channels."
        raise ValueError(msg)
    logger.debug(f"Created new Binaural object: {recording}, fs={fs}")
    return obj
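
The constructor's channel check reduces to rejecting any input that is not exactly two rows of samples. A minimal, self-contained sketch of that validation (illustrative only, not soundscapy's implementation; `check_binaural_shape` is a stand-in name):

```python
# Illustrative sketch of the stereo-only check performed in __new__.
ALLOWED_BINAURAL_CHANNELS = 2  # assumption: mirrors the constant used above

def check_binaural_shape(data):
    """Return the channel count, raising ValueError unless the data is stereo."""
    channels = len(data)  # each row holds one channel's samples
    if channels != ALLOWED_BINAURAL_CHANNELS:
        raise ValueError("Binaural class only supports 2 channels.")
    return channels

check_binaural_shape([[0.0, 0.1], [0.0, -0.1]])  # stereo: accepted
```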

acoustics_metric

acoustics_metric(metric, statistics=(5, 10, 50, 90, 95, 'avg', 'max', 'min', 'kurt', 'skew'), label=None, channel=('Left', 'Right'), metric_settings=None, func_args=None, *, as_df=True, return_time_series=False)

Run a metric from the acoustic_toolbox library.

PARAMETER DESCRIPTION
metric

The metric to run.

TYPE: (LZeq, Leq, LAeq, LCeq, SEL)

statistics

List of level statistics to calculate (e.g. L_5, L_90, etc.). Default is (5, 10, 50, 90, 95, "avg", "max", "min", "kurt", "skew").

TYPE: tuple or list DEFAULT: (5, 10, 50, 90, 95, 'avg', 'max', 'min', 'kurt', 'skew')

label

Label to use for the metric. If None, will pull from default label for that metric.

TYPE: str DEFAULT: None

channel

Which channels to process. Default is ("Left", "Right").

TYPE: tuple, list, or str DEFAULT: ('Left', 'Right')

as_df

Whether to return a dataframe or not. Default is True. If True, returns a MultiIndex DataFrame with ("Recording", "Channel") as the index.

TYPE: bool DEFAULT: True

return_time_series

Whether to return the time series of the metric. Default is False. Cannot return time series if as_df is True.

TYPE: bool DEFAULT: False

metric_settings

Settings for metric analysis. Default is None.

TYPE: MetricSettings DEFAULT: None

func_args

Any settings given here will override those in the other options. Can pass any *args or **kwargs to the underlying acoustic_toolbox method.

TYPE: dict DEFAULT: None

RETURNS DESCRIPTION
dict or DataFrame

Dictionary of results if as_df is False, otherwise a pandas DataFrame.

See Also

metrics.acoustics_metric
acoustic_toolbox.standards_iso_tr_25417_2007.equivalent_sound_pressure_level : Base method for Leq calculation.
acoustic_toolbox.standards.iec_61672_1_2013.sound_exposure_level : Base method for SEL calculation.
acoustic_toolbox.standards.iec_61672_1_2013.time_weighted_sound_level : Base method for Leq level time series calculation.

Source code in soundscapy/audio/binaural.py
def acoustics_metric(
    self,
    metric: Literal["LZeq", "Leq", "LAeq", "LCeq", "SEL"],
    statistics: tuple | list = (
        5,
        10,
        50,
        90,
        95,
        "avg",
        "max",
        "min",
        "kurt",
        "skew",
    ),
    label: str | None = None,
    channel: str | int | list | tuple = ("Left", "Right"),
    metric_settings: MetricSettings | None = None,
    func_args: dict | None = None,
    *,
    as_df: bool = True,
    return_time_series: bool = False,
) -> dict | pd.DataFrame | None:
    """
    Run a metric from the acoustic_toolbox library.

    Parameters
    ----------
    metric : {"LZeq", "Leq", "LAeq", "LCeq", "SEL"}
        The metric to run.
    statistics : tuple or list, optional
        List of level statistics to calculate (e.g. L_5, L_90, etc.).
        Default is (5, 10, 50, 90, 95, "avg", "max", "min", "kurt", "skew").
    label : str, optional
        Label to use for the metric.
        If None, will pull from default label for that metric.
    channel : tuple, list, or str, optional
        Which channels to process. Default is ("Left", "Right").
    as_df : bool, optional
        Whether to return a dataframe or not. Default is True.
        If True, returns a MultiIndex Dataframe with
        ("Recording", "Channel") as the index.
    return_time_series : bool, optional
        Whether to return the time series of the metric. Default is False.
        Cannot return time series if as_df is True.
    metric_settings : MetricSettings, optional
        Settings for metric analysis. Default is None.
    func_args : dict, optional
        Any settings given here will override those in the other options.
        Can pass any *args or **kwargs to the underlying acoustic_toolbox method.

    Returns
    -------
    dict or pd.DataFrame
        Dictionary of results if as_df is False, otherwise a pandas DataFrame.

    See Also
    --------
    metrics.acoustics_metric
    acoustic_toolbox.standards_iso_tr_25417_2007.equivalent_sound_pressure_level :
        Base method for Leq calculation.
    acoustic_toolbox.standards.iec_61672_1_2013.sound_exposure_level :
        Base method for SEL calculation.
    acoustic_toolbox.standards.iec_61672_1_2013.time_weighted_sound_level :
        Base method for Leq level time series calculation.

    """
    if func_args is None:
        func_args = {}
    if metric_settings:
        logger.debug("Using provided analysis settings")
        if not metric_settings.run:
            logger.info(f"Metric {metric} is disabled in analysis settings")
            return None

        channel = metric_settings.channel
        statistics = metric_settings.statistics
        label = metric_settings.label
        func_args = metric_settings.func_args

    channel = ("Left", "Right") if channel is None else channel
    s = self._get_channel(channel)

    if s.channels == 1:
        logger.debug("Processing single channel")
        return acoustics_metric_1ch(
            s, metric, statistics, label, as_df, return_time_series, func_args
        )
    logger.debug("Processing both channels")
    return acoustics_metric_2ch(
        s,
        metric,
        statistics,
        label,
        channel,
        as_df,
        return_time_series,
        func_args,
    )
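
The level statistics listed above (Leq, L_5, L_90, etc.) come down to energy averaging and exceedance percentiles. A minimal self-contained sketch of that arithmetic (illustrative only, not soundscapy's implementation; it uses a simple nearest-rank percentile and the standard 20 µPa reference pressure):

```python
import math

P_REF = 2e-5  # reference pressure in Pa (20 µPa)

def leq(pressure):
    """Equivalent continuous level: 10*log10 of the mean squared pressure ratio."""
    mean_sq = sum(p * p for p in pressure) / len(pressure)
    return 10 * math.log10(mean_sq / P_REF**2)

def l_n(levels, n):
    """L_N exceedance statistic: the level exceeded N% of the time
    (simple nearest-rank approximation)."""
    ranked = sorted(levels, reverse=True)
    return ranked[int(n / 100 * len(ranked))]

leq([0.02] * 10)  # a constant 0.02 Pa signal sits at 60 dB
```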

calibrate_to

calibrate_to(decibel, inplace=False)

Calibrate the binaural signal to predefined Leq/dB levels.

This method allows calibration of both channels either to the same level or to different levels for each channel.

PARAMETER DESCRIPTION
decibel

Target calibration value(s) in dB (Leq). If a single value is provided, both channels will be calibrated to this level. If two values are provided, they will be applied to the left and right channels respectively.

TYPE: float or List[float] or Tuple[float, float]

inplace

If True, modify the signal in place. If False, return a new calibrated signal. Default is False.

TYPE: bool DEFAULT: False

RETURNS DESCRIPTION
Binaural

Calibrated Binaural signal. If inplace is True, returns self.

RAISES DESCRIPTION
ValueError

If decibel is not a float, or a list/tuple of two floats.

Examples:

>>> # xdoctest: +SKIP
>>> signal = Binaural.from_wav("audio.wav")
>>> # Calibrate left channel to 60 dB and right to 62 dB
>>> calibrated_signal = signal.calibrate_to([60, 62])
Source code in soundscapy/audio/binaural.py
def calibrate_to(
    self,
    decibel: float | list[float] | tuple[float, float] | np.ndarray | pd.Series,
    inplace: bool = False,  # noqa: FBT001, FBT002 TODO(MitchellAcoustics): Change to keyword-only in acoustic_toolbox.Signal
) -> "Binaural":
    """
    Calibrate the binaural signal to predefined Leq/dB levels.

    This method allows calibration of both channels either to the same level
    or to different levels for each channel.

    Parameters
    ----------
    decibel : float or List[float] or Tuple[float, float]
        Target calibration value(s) in dB (Leq).
        If a single value is provided, both channels will be calibrated
        to this level.
        If two values are provided, they will be applied to the left and right
        channels respectively.
    inplace : bool, optional
        If True, modify the signal in place.
        If False, return a new calibrated signal.
        Default is False.

    Returns
    -------
    Binaural
        Calibrated Binaural signal. If inplace is True, returns self.

    Raises
    ------
    ValueError
        If decibel is not a float, or a list/tuple of two floats.

    Examples
    --------
    >>> # xdoctest: +SKIP
    >>> signal = Binaural.from_wav("audio.wav")
    >>> # Calibrate left channel to 60 dB and right to 62 dB
    >>> calibrated_signal = signal.calibrate_to([60, 62])

    """
    logger.info(f"Calibrating Binaural signal to {decibel} dB")
    if isinstance(decibel, np.ndarray | pd.Series):  # Force into tuple
        decibel = tuple(decibel)
    if isinstance(decibel, list | tuple):
        if (
            len(decibel) == ALLOWED_BINAURAL_CHANNELS
        ):  # Per-channel calibration (recommended)
            logger.debug(
                "Calibrating channels separately: "
                f"Left={decibel[0]}dB, Right={decibel[1]}dB"
            )
            decibel = np.array(decibel)
            decibel = decibel[..., None]
            return super().calibrate_to(decibel, inplace)  # type: ignore[reportReturnType]
        if (
            len(decibel) == 1
        ):  # if one value given in tuple, assume same for both channels
            logger.debug(f"Calibrating both channels to {decibel[0]}dB")
            decibel = decibel[0]
        else:
            logger.error(f"Invalid calibration value: {decibel}")
            msg = "decibel must either be a single value or a 2 value tuple"
            raise TypeError(msg)
    if isinstance(decibel, int | float):  # Calibrate both channels to same value
        logger.debug(f"Calibrating both channels to {decibel}dB")
        return super().calibrate_to(decibel, inplace)  # type: ignore[reportReturnType]
    logger.error(f"Invalid calibration value: {decibel}")
    msg = "decibel must be a single value or a 2 value tuple"
    raise TypeError(msg)

from_wav classmethod

from_wav(filename, normalize=False, calibrate_to=None, resample=None, recording=None)

Load a wav file and return a Binaural object.

Overrides the Signal.from_wav method to return a Binaural object instead of a Signal object.

PARAMETER DESCRIPTION
filename

Filename of wav file to load.

TYPE: Path or str

calibrate_to

Value(s) to calibrate to in dB (Leq). Can also handle np.ndarray and pd.Series of length 2. If only one value is passed, will calibrate both channels to the same value.

TYPE: float or List or Tuple DEFAULT: None

normalize

Whether to normalize the signal. Default is False.

TYPE: bool DEFAULT: False

resample

New sampling frequency to resample the signal to. Default is None

TYPE: int DEFAULT: None

RETURNS DESCRIPTION
Binaural

Binaural signal object of wav recording.

See Also

acoustic_toolbox.Signal.from_wav : Base method for loading wav files.

Source code in soundscapy/audio/binaural.py
@classmethod
def from_wav(
    cls,
    filename: Path | str,
    normalize: bool = False,  # noqa: FBT001, FBT002
    calibrate_to: float | list | tuple | None = None,
    resample: int | None = None,
    recording: str | None = None,
) -> "Binaural":
    """
    Load a wav file and return a Binaural object.

    Overrides the Signal.from_wav method to return a
    Binaural object instead of a Signal object.

    Parameters
    ----------
    filename : Path or str
        Filename of wav file to load.
    calibrate_to : float or List or Tuple, optional
        Value(s) to calibrate to in dB (Leq).
        Can also handle np.ndarray and pd.Series of length 2.
        If only one value is passed, will calibrate both channels to the same value.
    normalize : bool, optional
        Whether to normalize the signal. Default is False.
    resample : int, optional
        New sampling frequency to resample the signal to. Default is None

    Returns
    -------
    Binaural
        Binaural signal object of wav recording.

    See Also
    --------
    acoustic_toolbox.Signal.from_wav : Base method for loading wav files.

    """
    filename = ensure_input_path(filename)
    if not filename.exists():
        logger.error(f"File not found: {filename}")
        msg = f"File not found: {filename}"
        raise FileNotFoundError(msg)

    logger.info(f"Loading WAV file: {filename}")
    fs, data = wavfile.read(filename)
    data = data.astype(np.float32, copy=False).T
    if normalize:
        data /= np.max(np.abs(data))

    recording = recording if recording is not None else Path(filename).stem
    b = cls(data, fs, recording=recording)

    if calibrate_to is not None:
        logger.info(f"Calibrating loaded signal to {calibrate_to} dB")
        b.calibrate_to(calibrate_to, inplace=True)
    if resample is not None:
        logger.debug(f"Resampling loaded signal to {resample} Hz")
        b = b.fs_resample(resample)
    return b
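
The `normalize` option above scales the samples so the largest absolute value becomes 1.0. A self-contained sketch of that step on plain Python lists (illustrative only; the method itself operates on NumPy arrays in place):

```python
def normalize(samples):
    """Peak-normalize: divide every sample by the largest absolute sample."""
    peak = max(abs(s) for s in samples)
    return [s / peak for s in samples]

normalize([0.5, -0.25])  # peak 0.5 becomes full scale
```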

fs_resample

fs_resample(fs, original_fs=None)

Resample the signal to a new sampling frequency.

PARAMETER DESCRIPTION
fs

New sampling frequency.

TYPE: float

original_fs

Original sampling frequency. If None, it will be inferred from the signal (Binaural.fs).

TYPE: float or None DEFAULT: None

RETURNS DESCRIPTION
Binaural

Resampled Binaural signal. If inplace is True, returns self.

See Also

acoustic_toolbox.Signal.resample : Base method for resampling signals.

Source code in soundscapy/audio/binaural.py
def fs_resample(
    self,
    fs: float,
    original_fs: float | None = None,
) -> "Binaural":
    """
    Resample the signal to a new sampling frequency.

    Parameters
    ----------
    fs : float
        New sampling frequency.
    original_fs : float or None, optional
        Original sampling frequency.
        If None, it will be inferred from the signal (`Binaural.fs`).

    Returns
    -------
    Binaural
        Resampled Binaural signal. If inplace is True, returns self.

    See Also
    --------
    acoustic_toolbox.Signal.resample : Base method for resampling signals.

    """
    current_fs: float

    if original_fs is None:
        if hasattr(self, "fs") and self.fs is not None:
            current_fs = self.fs
        else:
            logger.error("Original sampling frequency not provided.")
            msg = "Original sampling frequency not provided."
            raise ValueError(msg)
    else:
        current_fs = original_fs

    if fs == current_fs:
        logger.info(f"Signal already at {current_fs} Hz. No resampling needed.")
        return self

    logger.info(f"Resampling signal to {fs} Hz")
    resampled_channels = [
        scipy.signal.resample(channel, int(fs * len(channel) / current_fs))
        for channel in self
    ]
    resampled_channels = np.stack(resampled_channels)
    return Binaural(resampled_channels, fs, recording=self.recording)
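
Resampling preserves the signal's duration, so the new sample count follows the same arithmetic as the `scipy.signal.resample` call above: `n * fs_new / fs_old`, truncated to an integer. Sketch (`resampled_length` is an illustrative stand-in name):

```python
def resampled_length(n_samples, fs_old, fs_new):
    """Sample count after resampling, keeping the duration unchanged."""
    return int(fs_new * n_samples / fs_old)

resampled_length(48000, 48000, 44100)  # one second of audio stays one second
```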

maad_metric

maad_metric(metric, channel=('Left', 'Right'), as_df=True, metric_settings=None, func_args={})

Run a metric from the scikit-maad library.

Currently only supports running all of the alpha indices at once.

PARAMETER DESCRIPTION
metric

The metric to run.

TYPE: (all_temporal_alpha_indices, all_spectral_alpha_indices)

channel

Which channels to process. Default is ("Left", "Right").

TYPE: (tuple, list or str) DEFAULT: ('Left', 'Right')

as_df

Whether to return a dataframe or not. Default is True. If True, returns a MultiIndex DataFrame with ("Recording", "Channel") as the index.

TYPE: bool DEFAULT: True

metric_settings

Settings for metric analysis. Default is None.

TYPE: MetricSettings DEFAULT: None

func_args

Additional arguments to pass to the underlying scikit-maad method.

TYPE: dict DEFAULT: {}

RETURNS DESCRIPTION
dict or DataFrame

Dictionary of results if as_df is False, otherwise a pandas DataFrame.

RAISES DESCRIPTION
ValueError

If metric name is not recognised.

See Also

metrics.maad_metric_1ch
metrics.maad_metric_2ch

Source code in soundscapy/audio/binaural.py
def maad_metric(
    self,
    metric: str,
    channel: int | tuple | list | str = ("Left", "Right"),
    as_df: bool = True,
    metric_settings: MetricSettings | None = None,
    func_args: dict = {},
) -> dict | pd.DataFrame:
    """
    Run a metric from the scikit-maad library.

    Currently only supports running all of the alpha indices at once.

    Parameters
    ----------
    metric : {"all_temporal_alpha_indices", "all_spectral_alpha_indices"}
        The metric to run.
    channel : tuple, list or str, optional
        Which channels to process. Default is ("Left", "Right").
    as_df : bool, optional
        Whether to return a dataframe or not. Default is True.
        If True, returns a MultiIndex Dataframe with ("Recording", "Channel") as the index.
    metric_settings : MetricSettings, optional
        Settings for metric analysis. Default is None.
    func_args : dict, optional
        Additional arguments to pass to the underlying scikit-maad method.

    Returns
    -------
    dict or pd.DataFrame
        Dictionary of results if as_df is False, otherwise a pandas DataFrame.

    Raises
    ------
    ValueError
        If metric name is not recognised.

    See Also
    --------
    metrics.maad_metric_1ch
    metrics.maad_metric_2ch

    """
    logger.info(f"Running maad metric: {metric}")
    if metric_settings:
        logger.debug("Using provided analysis settings")
        if metric not in {
            "all_temporal_alpha_indices",
            "all_spectral_alpha_indices",
        }:
            logger.error(f"Invalid maad metric: {metric}")
            raise ValueError(f"Metric {metric} not recognised")

        if not metric_settings.run:
            logger.info(f"Metric {metric} is disabled in analysis settings")
            return None

        channel = metric_settings.channel
    channel = ("Left", "Right") if channel is None else channel
    s = self._get_channel(channel)
    if s.channels == 1:
        logger.debug("Processing single channel")
        return maad_metric_1ch(s, metric, as_df)
    logger.debug("Processing both channels")
    return maad_metric_2ch(s, metric, channel, as_df, func_args)
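
The temporal alpha indices summarise how a recording's energy is distributed over time. As one illustrative example, a temporal entropy index can be sketched as the Shannon entropy of the normalized amplitude envelope, scaled to [0, 1] (this is a rough sketch only; scikit-maad's exact definitions and preprocessing differ):

```python
import math

def temporal_entropy(envelope):
    """Shannon entropy of an amplitude envelope, scaled so a flat envelope
    (energy spread evenly over time) scores 1.0."""
    total = sum(envelope)
    probs = [e / total for e in envelope if e > 0]
    h = -sum(p * math.log2(p) for p in probs)
    return h / math.log2(len(envelope))

temporal_entropy([1.0] * 8)  # perfectly flat envelope: maximal entropy
```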

mosqito_metric

mosqito_metric(metric, statistics=(5, 10, 50, 90, 95, 'avg', 'max', 'min', 'kurt', 'skew'), label=None, channel=('Left', 'Right'), as_df=True, return_time_series=False, parallel=True, metric_settings=None, func_args={})

Run a metric from the mosqito library.

PARAMETER DESCRIPTION
metric

Metric to run from mosqito library.

TYPE: (loudness_zwtv, sharpness_din_from_loudness, sharpness_din_perseg, sharpness_tv, roughness_dw)

statistics

List of level statistics to calculate (e.g. L_5, L_90, etc.). Default is (5, 10, 50, 90, 95, "avg", "max", "min", "kurt", "skew").

TYPE: tuple or list DEFAULT: (5, 10, 50, 90, 95, 'avg', 'max', 'min', 'kurt', 'skew')

label

Label to use for the metric. If None, will pull from default label for that metric.

TYPE: str DEFAULT: None

channel

Which channels to process. Default is ("Left", "Right").

TYPE: tuple or list of str or str DEFAULT: ('Left', 'Right')

as_df

Whether to return a dataframe or not. Default is True. If True, returns a MultiIndex DataFrame with ("Recording", "Channel") as the index.

TYPE: bool DEFAULT: True

return_time_series

Whether to return the time series of the metric. Default is False. Cannot return time series if as_df is True.

TYPE: bool DEFAULT: False

parallel

Whether to run the channels in parallel. Default is True. If False, will run each channel sequentially.

TYPE: bool DEFAULT: True

metric_settings

Settings for metric analysis. Default is None.

TYPE: MetricSettings DEFAULT: None

func_args

Any settings given here will override those in the other options. Can pass any *args or **kwargs to the underlying mosqito method.

TYPE: dict DEFAULT: {}

RETURNS DESCRIPTION
dict or DataFrame

Dictionary of results if as_df is False, otherwise a pandas DataFrame.

See Also

binaural.mosqito_metric_2ch : Method for running metrics on 2 channels.
binaural.mosqito_metric_1ch : Method for running metrics on 1 channel.

Source code in soundscapy/audio/binaural.py
def mosqito_metric(
    self,
    metric: str,
    statistics: tuple | list = (
        5,
        10,
        50,
        90,
        95,
        "avg",
        "max",
        "min",
        "kurt",
        "skew",
    ),
    label: str | None = None,
    channel: int | tuple | list | str = ("Left", "Right"),
    as_df: bool = True,
    return_time_series: bool = False,
    parallel: bool = True,
    metric_settings: MetricSettings | None = None,
    func_args: dict = {},
) -> dict | pd.DataFrame:
    """
    Run a metric from the mosqito library.

    Parameters
    ----------
    metric : {"loudness_zwtv", "sharpness_din_from_loudness", "sharpness_din_perseg", "sharpness_tv", "roughness_dw"}
        Metric to run from mosqito library.
    statistics : tuple or list, optional
        List of level statistics to calculate (e.g. L_5, L_90, etc.).
        Default is (5, 10, 50, 90, 95, "avg", "max", "min", "kurt", "skew").
    label : str, optional
        Label to use for the metric. If None, will pull from default label for that metric.
    channel : tuple or list of str or str, optional
        Which channels to process. Default is ("Left", "Right").
    as_df : bool, optional
        Whether to return a dataframe or not. Default is True.
        If True, returns a MultiIndex Dataframe with ("Recording", "Channel") as the index.
    return_time_series : bool, optional
        Whether to return the time series of the metric. Default is False.
        Cannot return time series if as_df is True.
    parallel : bool, optional
        Whether to run the channels in parallel. Default is True.
        If False, will run each channel sequentially.
    metric_settings : MetricSettings, optional
        Settings for metric analysis. Default is None.
    func_args : dict, optional
        Any settings given here will override those in the other options.
        Can pass any *args or **kwargs to the underlying acoustic_toolbox method.

    Returns
    -------
    dict or pd.DataFrame
        Dictionary of results if as_df is False, otherwise a pandas DataFrame.

    See Also
    --------
    binaural.mosqito_metric_2ch : Method for running metrics on 2 channels.
    binaural.mosqito_metric_1ch : Method for running metrics on 1 channel.

    """
    logger.info(f"Running mosqito metric: {metric}")
    if metric_settings:
        logger.debug("Using provided analysis settings")
        if not metric_settings.run:
            logger.info(f"Metric {metric} is disabled in analysis settings")
            return None

        channel = metric_settings.channel
        statistics = metric_settings.statistics
        label = metric_settings.label
        parallel = metric_settings.parallel
        func_args = metric_settings.func_args

    channel = ("Left", "Right") if channel is None else channel
    s = self._get_channel(channel)

    if s.channels == 1:
        logger.debug("Processing single channel")
        return mosqito_metric_1ch(
            s,
            metric,
            statistics,
            label,
            as_df=as_df,
            return_time_series=return_time_series,
            **func_args,
        )
    logger.debug("Processing both channels")
    return mosqito_metric_2ch(
        s,
        metric,
        statistics,
        label,
        channel,
        as_df=as_df,
        return_time_series=return_time_series,
        parallel=parallel,
        func_args=func_args,
    )
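
Because the two channels are independent, the `parallel` option can dispatch them concurrently. An illustrative sketch using `ThreadPoolExecutor` (soundscapy's actual parallelism may use a different mechanism; `metric_fn` is a stand-in for any single-channel metric function):

```python
from concurrent.futures import ThreadPoolExecutor

def run_both_channels(metric_fn, left, right, parallel=True):
    """Run a per-channel metric on both channels, optionally concurrently."""
    if parallel:
        with ThreadPoolExecutor(max_workers=2) as pool:
            # Submit both channels, then collect results in channel order.
            futures = [pool.submit(metric_fn, ch) for ch in (left, right)]
            return {"Left": futures[0].result(), "Right": futures[1].result()}
    return {"Left": metric_fn(left), "Right": metric_fn(right)}

run_both_channels(sum, [1, 2], [3, 4])  # trivial stand-in metric
```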

process_all_metrics

process_all_metrics(analysis_settings=AnalysisSettings.default(), parallel=True)

Process all metrics specified in the analysis settings.

This method runs all enabled metrics from the provided AnalysisSettings object and compiles the results into a single DataFrame.

PARAMETER DESCRIPTION
analysis_settings

Configuration object specifying which metrics to run and their parameters.

TYPE: AnalysisSettings DEFAULT: default()

parallel

Whether to run calculations in parallel where possible. Default is True.

TYPE: bool DEFAULT: True

RETURNS DESCRIPTION
DataFrame

A MultiIndex DataFrame containing the results of all processed metrics. The index includes "Recording" and "Channel" levels.

Notes

The parallel option primarily affects the MoSQITo metrics. Other metrics may not benefit from parallelization.

TODO: Provide default settings to analysis_settings to make it optional.

Examples:

>>> # xdoctest: +SKIP
>>> signal = Binaural.from_wav("audio.wav")
>>> settings = AnalysisSettings.from_yaml("settings.yaml")
>>> results = signal.process_all_metrics(settings)
Source code in soundscapy/audio/binaural.py
def process_all_metrics(
    self,
    analysis_settings: AnalysisSettings = AnalysisSettings.default(),
    parallel: bool = True,
) -> pd.DataFrame:
    """
    Process all metrics specified in the analysis settings.

    This method runs all enabled metrics from the provided AnalysisSettings object
    and compiles the results into a single DataFrame.

    Parameters
    ----------
    analysis_settings : AnalysisSettings
        Configuration object specifying which metrics to run and their parameters.
    parallel : bool, optional
        Whether to run calculations in parallel where possible. Default is True.

    Returns
    -------
    pd.DataFrame
        A MultiIndex DataFrame containing the results of all processed metrics.
        The index includes "Recording" and "Channel" levels.

    Notes
    -----
    The parallel option primarily affects the MoSQITo metrics. Other metrics may not benefit from parallelization.

    TODO: Provide default settings to analysis_settings to make it optional.

    Examples
    --------
    >>> # xdoctest: +SKIP
    >>> signal = Binaural.from_wav("audio.wav")
    >>> settings = AnalysisSettings.from_yaml("settings.yaml")
    >>> results = signal.process_all_metrics(settings)

    """
    logger.info(f"Processing all metrics for {self.recording}")
    logger.debug(f"Parallel processing: {parallel}")
    return process_all_metrics(self, analysis_settings, parallel)
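
Compiling the per-metric results into a single table keyed by ("Recording", "Channel") is essentially a keyed dictionary merge, with each metric contributing columns for the same index. An illustrative sketch (the real method returns a pandas MultiIndex DataFrame; `compile_results` and its plain-dict table are stand-ins):

```python
def compile_results(per_metric_results):
    """Merge per-metric result dicts, each keyed by (recording, channel),
    into one row of values per key."""
    table = {}
    for results in per_metric_results:
        for key, values in results.items():
            table.setdefault(key, {}).update(values)
    return table

compile_results([
    {("Rec", "Left"): {"LAeq": 60.1}},   # hypothetical acoustics result
    {("Rec", "Left"): {"N_5": 12.3}},    # hypothetical mosqito result
])
```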

pyacoustics_metric

pyacoustics_metric(metric, statistics=(5, 10, 50, 90, 95, 'avg', 'max', 'min', 'kurt', 'skew'), label=None, channel=('Left', 'Right'), as_df=True, return_time_series=False, metric_settings=None, func_args=None)

Run a metric from the pyacoustics library (deprecated).

This method has been deprecated. Use acoustics_metric instead. All parameters are passed directly to acoustics_metric.

PARAMETER DESCRIPTION
metric

The metric to run.

TYPE: (LZeq, Leq, LAeq, LCeq, SEL) DEFAULT: "LZeq"

statistics

List of level statistics to calculate (e.g. L_5, L_90, etc.). Default is (5, 10, 50, 90, 95, "avg", "max", "min", "kurt", "skew").

TYPE: tuple or list DEFAULT: (5, 10, 50, 90, 95, 'avg', 'max', 'min', 'kurt', 'skew')

label

Label to use for the metric. If None, will pull from default label for that metric.

TYPE: str DEFAULT: None

channel

Which channels to process. Default is ("Left", "Right").

TYPE: tuple, list, or str DEFAULT: ('Left', 'Right')

as_df

Whether to return a dataframe or not. Default is True. If True, returns a MultiIndex Dataframe with ("Recording", "Channel") as the index.

TYPE: bool DEFAULT: True

return_time_series

Whether to return the time series of the metric. Default is False. Cannot return time series if as_df is True.

TYPE: bool DEFAULT: False

metric_settings

Settings for metric analysis. Default is None.

TYPE: MetricSettings DEFAULT: None

func_args

Any settings given here will override those in the other options. Can pass any `*args` or `**kwargs` to the underlying acoustic_toolbox method.

TYPE: dict DEFAULT: None

RETURNS DESCRIPTION
dict or DataFrame

Results of the metric calculation.

See Also

Binaural.acoustics_metric

Source code in soundscapy/audio/binaural.py
def pyacoustics_metric(
    self,
    metric: Literal["LZeq", "Leq", "LAeq", "LCeq", "SEL"],
    statistics: tuple | list = (
        5,
        10,
        50,
        90,
        95,
        "avg",
        "max",
        "min",
        "kurt",
        "skew",
    ),
    label: str | None = None,
    channel: str | int | list | tuple = ("Left", "Right"),
    as_df: bool = True,  # noqa: FBT001, FBT002
    return_time_series: bool = False,  # noqa: FBT001, FBT002
    metric_settings: MetricSettings | None = None,
    func_args: dict | None = None,
) -> dict | pd.DataFrame | None:
    """
    Run a metric from the pyacoustics library (deprecated).

    This method has been deprecated. Use `acoustics_metric` instead.
    All parameters are passed directly to `acoustics_metric`.

    Parameters
    ----------
    metric : {"LZeq", "Leq", "LAeq", "LCeq", "SEL"}
        The metric to run.
    statistics : tuple or list, optional
        List of level statistics to calculate (e.g. L_5, L_90, etc.).
        Default is (5, 10, 50, 90, 95, "avg", "max", "min", "kurt", "skew").
    label : str, optional
        Label to use for the metric.
        If None, will pull from default label for that metric.
    channel : tuple, list, or str, optional
        Which channels to process. Default is ("Left", "Right").
    as_df : bool, optional
        Whether to return a dataframe or not. Default is True.
        If True, returns a MultiIndex Dataframe with
        ("Recording", "Channel") as the index.
    return_time_series : bool, optional
        Whether to return the time series of the metric. Default is False.
        Cannot return time series if as_df is True.
    metric_settings : MetricSettings, optional
        Settings for metric analysis. Default is None.
    func_args : dict, optional
        Any settings given here will override those in the other options.
        Can pass any *args or **kwargs to the underlying acoustic_toolbox method.

    Returns
    -------
    dict or pd.DataFrame
        Results of the metric calculation.

    See Also
    --------
    Binaural.acoustics_metric

    """
    if func_args is None:
        func_args = {}
    warnings.warn(
        "pyacoustics has been deprecated. Use acoustics_metric instead.",
        DeprecationWarning,
        stacklevel=2,
    )
    return self.acoustics_metric(
        metric,
        statistics,
        label,
        channel,
        as_df=as_df,
        return_time_series=return_time_series,
        metric_settings=metric_settings,
        func_args=func_args,
    )
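
The shim above follows the standard Python deprecation pattern: emit a `DeprecationWarning` with `stacklevel=2` so the warning is attributed to the caller rather than the shim itself, then delegate to the replacement. A minimal standalone sketch of the same pattern (`old_metric` and `new_metric` are made-up names for illustration, not Soundscapy APIs):

```python
import warnings


def new_metric(metric: str) -> str:
    """The replacement implementation."""
    return f"computed {metric}"


def old_metric(metric: str) -> str:
    """Deprecated shim: warn the caller, then delegate to new_metric."""
    warnings.warn(
        "old_metric has been deprecated. Use new_metric instead.",
        DeprecationWarning,
        stacklevel=2,  # attribute the warning to the caller, not this shim
    )
    return new_metric(metric)


with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = old_metric("LAeq")
```

Because the shim forwards all arguments unchanged, callers can migrate by renaming the call site only.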


Functions for calculating various acoustic and psychoacoustic metrics for audio signals.

It includes implementations for single-channel and two-channel signals, as well as wrapper functions for different libraries such as Acoustic Toolbox, MoSQITo, and scikit-maad.

FUNCTION DESCRIPTION
_stat_calcs : Calculate various statistics for a time series array.
mosqito_metric_1ch : Calculate a MoSQITo psychoacoustic metric for a single channel signal.
maad_metric_1ch : Run a metric from the scikit-maad library on a single channel signal.
acoustics_metric_1ch : Run a metric from the Acoustic Toolbox on a single channel object.
acoustics_metric_2ch : Run a metric from the Acoustic Toolbox on a Binaural object.
pyacoustics_metric_1ch : Deprecated function for running a metric from the PyAcoustics library (replaced with acoustics_metric_1ch).
pyacoustics_metric_2ch : Deprecated function for running a metric from the PyAcoustics library (replaced with acoustics_metric_2ch).
mosqito_metric_2ch : Calculate metrics from MoSQITo for a two-channel signal.
maad_metric_2ch : Run a metric from the scikit-maad library on a binaural signal.
prep_multiindex_df : Prepare a MultiIndex dataframe from a dictionary of results.
add_results : Add results to a MultiIndex dataframe.
process_all_metrics : Process all metrics specified in the analysis settings for a binaural signal.

Notes

This module relies on external libraries such as numpy, pandas, maad, mosqito, and scipy. Ensure these dependencies are installed before using this module.

acoustics_metric_1ch

acoustics_metric_1ch(s, metric, statistics=(5, 10, 50, 90, 95, 'avg', 'max', 'min', 'kurt', 'skew'), label=None, as_df=False, return_time_series=False, func_args={})

Run a metric from the acoustic_toolbox library on a single channel object.

PARAMETER DESCRIPTION
s

Single channel signal to calculate the metric for.

TYPE: Signal or Binaural (single channel slice)

metric

The metric to run.

TYPE: (LZeq, Leq, LAeq, LCeq, SEL) DEFAULT: "LZeq"

statistics

List of level statistics to calculate (e.g. L_5, L_90, etc).

TYPE: List[Union[int, str]] DEFAULT: (5, 10, 50, 90, 95, 'avg', 'max', 'min', 'kurt', 'skew')

label

Label to use for the metric in the results dictionary. If None, will pull from default label for that metric given in DEFAULT_LABELS.

TYPE: str DEFAULT: None

as_df

Whether to return a pandas DataFrame, by default False. If True, returns a MultiIndex Dataframe with ("Recording", "Channel") as the index.

TYPE: bool DEFAULT: False

return_time_series

Whether to return the time series of the metric, by default False. Cannot return time series if as_df is True.

TYPE: bool DEFAULT: False

func_args

Additional keyword arguments to pass to the metric function, by default {}.

TYPE: dict DEFAULT: {}

RETURNS DESCRIPTION
dict or DataFrame

Dictionary of the calculated statistics or a pandas DataFrame.

RAISES DESCRIPTION
ValueError

If the signal is not single-channel or if an unrecognized metric is specified.

See Also

acoustic_toolbox

Source code in soundscapy/audio/metrics.py
def acoustics_metric_1ch(
    s,
    metric: str,
    statistics: list[int | str] = (
        5,
        10,
        50,
        90,
        95,
        "avg",
        "max",
        "min",
        "kurt",
        "skew",
    ),
    label: str | None = None,
    as_df: bool = False,
    return_time_series: bool = False,
    func_args={},
):
    """
    Run a metric from the acoustic_toolbox library on a single channel object.

    Parameters
    ----------
    s : Signal or Binaural (single channel slice)
        Single channel signal to calculate the metric for.
    metric : {"LZeq", "Leq", "LAeq", "LCeq", "SEL"}
        The metric to run.
    statistics : List[Union[int, str]], optional
        List of level statistics to calculate (e.g. L_5, L_90, etc).
    label : str, optional
        Label to use for the metric in the results dictionary.
        If None, will pull from default label for that metric given in DEFAULT_LABELS.
    as_df : bool, optional
        Whether to return a pandas DataFrame, by default False.
        If True, returns a MultiIndex Dataframe with ("Recording", "Channel") as the index.
    return_time_series : bool, optional
        Whether to return the time series of the metric, by default False.
        Cannot return time series if as_df is True.
    func_args : dict, optional
        Additional keyword arguments to pass to the metric function, by default {}.

    Returns
    -------
    dict or pd.DataFrame
        Dictionary of the calculated statistics or a pandas DataFrame.

    Raises
    ------
    ValueError
        If the signal is not single-channel or if an unrecognized metric is specified.

    See Also
    --------
    acoustic_toolbox

    """
    logger.debug(f"Calculating acoustics metric: {metric}")

    if s.channels != 1:
        logger.error("Signal must be single channel")
        raise ValueError("Signal must be single channel")
    try:
        label = label or DEFAULT_LABELS[metric]
    except KeyError as e:
        logger.error(f"Metric {metric} not recognized")
        raise ValueError(f"Metric {metric} not recognized.") from e
    if as_df and return_time_series:
        logger.warning(
            "Cannot return both a dataframe and time series. Returning dataframe only."
        )

        return_time_series = False

    logger.debug(f"Calculating Acoustic Toolbox: {metric} {statistics}")

    res = {}
    try:
        if metric in {"LZeq", "Leq", "LAeq", "LCeq"}:
            if metric in {"LZeq", "Leq"}:
                weighting = "Z"
            elif metric == "LAeq":
                weighting = "A"
            elif metric == "LCeq":
                weighting = "C"
            if "avg" in statistics or "mean" in statistics:
                stat = "avg" if "avg" in statistics else "mean"
                res[f"{label}"] = s.weigh(weighting).leq()
                statistics = list(statistics)
                statistics.remove(stat)
            if len(statistics) > 0:
                res = _stat_calcs(
                    label, s.weigh(weighting).levels(**func_args)[1], res, statistics
                )

            if return_time_series:
                res[f"{label}_ts"] = s.weigh(weighting).levels(**func_args)
        elif metric == "SEL":
            res[f"{label}"] = s.sound_exposure_level()
        else:
            logger.error(f"Metric {metric} not recognized")
            raise ValueError(f"Metric {metric} not recognized.")
    except Exception as e:
        logger.error(f"Error calculating {metric}: {e!s}")
        raise

    if not as_df:
        return res
    try:
        rec = s.recording
        return pd.DataFrame(res, index=[rec])
    except AttributeError:
        return pd.DataFrame(res, index=[0])
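
The `statistics` tuple mixes exceedance percentiles (5, 10, 90, …) with summary statistics. By convention, L_N is the level exceeded N% of the time, i.e. the (100 − N)th percentile of the level distribution. The sketch below illustrates how `_stat_calcs` can derive these from a level time series; it is an illustrative reimplementation under that assumption, not Soundscapy's exact code:

```python
import numpy as np
from scipy import stats


def stat_calcs(label, ts, res, statistics):
    """Add exceedance levels and summary stats for a level time series to res.

    L_N (e.g. L_5, L_90) is the level exceeded N% of the time,
    i.e. the (100 - N)th percentile of the distribution.
    """
    for stat in statistics:
        if isinstance(stat, int):  # exceedance level L_N
            res[f"{label}_{stat}"] = np.percentile(ts, 100 - stat)
        elif stat in ("avg", "mean"):
            res[f"{label}_{stat}"] = np.mean(ts)
        elif stat == "max":
            res[f"{label}_{stat}"] = np.max(ts)
        elif stat == "min":
            res[f"{label}_{stat}"] = np.min(ts)
        elif stat == "kurt":
            res[f"{label}_{stat}"] = stats.kurtosis(ts)
        elif stat == "skew":
            res[f"{label}_{stat}"] = stats.skew(ts)
    return res


levels = np.array([50.0, 55.0, 60.0, 65.0, 70.0])  # toy 1-second LAeq levels
out = stat_calcs("LAeq", levels, {}, (90, "avg", "max"))
```

Here `out["LAeq_90"]` is the 10th percentile of the toy series, reflecting the "level exceeded 90% of the time" convention.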

acoustics_metric_2ch

acoustics_metric_2ch(b, metric, statistics=(5, 10, 50, 90, 95, 'avg', 'max', 'min', 'kurt', 'skew'), label=None, channel_names=('Left', 'Right'), as_df=False, return_time_series=False, func_args={})

Run a metric from the Acoustic Toolbox library on a Binaural object.

PARAMETER DESCRIPTION
b

Binaural signal to calculate the metric for.

TYPE: Binaural

metric

The metric to run.

TYPE: (LZeq, Leq, LAeq, LCeq, SEL) DEFAULT: "LZeq"

statistics

List of level statistics to calculate (e.g. L_5, L_90, etc).

TYPE: tuple or list DEFAULT: (5, 10, 50, 90, 95, 'avg', 'max', 'min', 'kurt', 'skew')

label

Label to use for the metric in the results dictionary. If None, will pull from default label for that metric given in DEFAULT_LABELS.

TYPE: str DEFAULT: None

channel_names

Custom names for the channels, by default ("Left", "Right").

TYPE: tuple of str DEFAULT: ('Left', 'Right')

as_df

Whether to return a pandas DataFrame, by default False. If True, returns a MultiIndex Dataframe with ("Recording", "Channel") as the index.

TYPE: bool DEFAULT: False

return_time_series

Whether to return the time series of the metric, by default False. Cannot return time series if as_df is True.

TYPE: bool DEFAULT: False

func_args

Arguments to pass to the metric function, by default {}.

TYPE: dict DEFAULT: {}

RETURNS DESCRIPTION
dict or DataFrame

Dictionary of results if as_df is False, otherwise a pandas DataFrame.

RAISES DESCRIPTION
ValueError

If the input signal is not 2-channel.

See Also

acoustics_metric_1ch

Source code in soundscapy/audio/metrics.py
def acoustics_metric_2ch(
    b,
    metric: str,
    statistics: tuple | list = (
        5,
        10,
        50,
        90,
        95,
        "avg",
        "max",
        "min",
        "kurt",
        "skew",
    ),
    label: str | None = None,
    channel_names: tuple[str, str] = ("Left", "Right"),
    as_df: bool = False,
    return_time_series: bool = False,
    func_args={},
):
    """
    Run a metric from the Acoustic Toolbox library on a Binaural object.

    Parameters
    ----------
    b : Binaural
        Binaural signal to calculate the metric for.
    metric : {"LZeq", "Leq", "LAeq", "LCeq", "SEL"}
        The metric to run.
    statistics : tuple or list, optional
        List of level statistics to calculate (e.g. L_5, L_90, etc).
    label : str, optional
        Label to use for the metric in the results dictionary.
        If None, will pull from default label for that metric given in DEFAULT_LABELS.
    channel_names : tuple of str, optional
        Custom names for the channels, by default ("Left", "Right").
    as_df : bool, optional
        Whether to return a pandas DataFrame, by default False.
        If True, returns a MultiIndex Dataframe with ("Recording", "Channel") as the index.
    return_time_series : bool, optional
        Whether to return the time series of the metric, by default False.
        Cannot return time series if as_df is True.
    func_args : dict, optional
        Arguments to pass to the metric function, by default {}.

    Returns
    -------
    dict or pd.DataFrame
        Dictionary of results if as_df is False, otherwise a pandas DataFrame.

    Raises
    ------
    ValueError
        If the input signal is not 2-channel.

    See Also
    --------
    acoustics_metric_1ch

    """
    logger.debug(f"Calculating acoustics metric for 2 channels: {metric}")

    if b.channels != 2:
        logger.error("Must be 2 channel signal. Use `acoustics_metric_1ch` instead.")
        raise ValueError(
            "Must be 2 channel signal. Use `acoustics_metric_1ch` instead."
        )

    logger.debug(f"Calculating Acoustic Toolbox metrics: {metric}")

    try:
        res_l = acoustics_metric_1ch(
            b[0],
            metric,
            statistics,
            label,
            as_df=False,
            return_time_series=return_time_series,
            func_args=func_args,
        )

        res_r = acoustics_metric_1ch(
            b[1],
            metric,
            statistics,
            label,
            as_df=False,
            return_time_series=return_time_series,
            func_args=func_args,
        )

        res = {channel_names[0]: res_l, channel_names[1]: res_r}
    except Exception as e:
        logger.error(f"Error calculating {metric} for 2 channels: {e!s}")
        raise

    if not as_df:
        return res
    try:
        rec = b.recording
    except AttributeError:
        rec = 0
    df = pd.DataFrame.from_dict(res, orient="index")
    df["Recording"] = rec
    df["Channel"] = df.index
    df.set_index(["Recording", "Channel"], inplace=True)
    return df
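
The 2-channel wrapper assembles the per-channel dicts into the ("Recording", "Channel") MultiIndex DataFrame used throughout the module. A pandas-only sketch of that assembly, with invented metric values:

```python
import pandas as pd

# Per-channel results as returned by the 1-channel helpers (values invented)
res = {
    "Left": {"LAeq": 63.2, "LAeq_90": 55.1},
    "Right": {"LAeq": 62.8, "LAeq_90": 54.7},
}
rec = "CT101"  # falls back to 0 when the signal has no .recording attribute

df = pd.DataFrame.from_dict(res, orient="index")
df["Recording"] = rec
df["Channel"] = df.index
df = df.set_index(["Recording", "Channel"])
```

Each metric column can then be looked up per channel, e.g. `df.loc[("CT101", "Left"), "LAeq"]`.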

add_results

add_results(results_df, metric_results)

Add results to MultiIndex dataframe.

PARAMETER DESCRIPTION
results_df

MultiIndex dataframe to add results to.

TYPE: DataFrame

metric_results

MultiIndex dataframe of results to add.

TYPE: DataFrame

RETURNS DESCRIPTION
DataFrame

Index includes "Recording" and "Channel" with a column for each index.

RAISES DESCRIPTION
ValueError

If the input DataFrames are not in the expected format.

Source code in soundscapy/audio/metrics.py
def add_results(results_df: pd.DataFrame, metric_results: pd.DataFrame):
    """
    Add results to MultiIndex dataframe.

    Parameters
    ----------
    results_df : pd.DataFrame
        MultiIndex dataframe to add results to.
    metric_results : pd.DataFrame
        MultiIndex dataframe of results to add.

    Returns
    -------
    pd.DataFrame
        Index includes "Recording" and "Channel" with a column for each index.

    Raises
    ------
    ValueError
        If the input DataFrames are not in the expected format.

    """
    logger.info("Adding results to MultiIndex DataFrame")
    try:
        # TODO: Add check for whether all of the recordings have rows in the dataframe
        # If not, add new rows first

        if not set(metric_results.columns).issubset(set(results_df.columns)):
            # Check if results_df already has the columns in results
            results_df = results_df.join(metric_results)
        else:
            results_df.update(metric_results, errors="ignore")
        logger.debug("Results added successfully")
        return results_df
    except Exception as e:
        logger.error(f"Error adding results to DataFrame: {e!s}")
        raise ValueError("Invalid input DataFrame format") from e
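
As the function shows, incoming columns that are new to `results_df` are added with `join`, while columns that already exist are overwritten in place with `update`. A pandas-only sketch of the two paths (column names and values invented):

```python
import pandas as pd

idx = pd.MultiIndex.from_tuples(
    [("CT101", "Left"), ("CT101", "Right")], names=["Recording", "Channel"]
)
results_df = pd.DataFrame({"LAeq": [63.2, 62.8]}, index=idx)

# New columns: join adds them alongside the existing ones
new_cols = pd.DataFrame({"N_5": [24.1, 23.6]}, index=idx)
results_df = results_df.join(new_cols)

# Existing columns: update overwrites values in place, aligned on the index
revised = pd.DataFrame({"LAeq": [63.5, 63.0]}, index=idx)
results_df.update(revised)
```

Both operations align on the ("Recording", "Channel") index, so rows never need to be reordered by hand.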

maad_metric_1ch

maad_metric_1ch(s, metric, as_df=False, func_args={})

Run a metric from the scikit-maad library (or suite of indices) on a single channel signal.

Currently only supports running all of the alpha indices at once.

PARAMETER DESCRIPTION
s

Single channel signal to calculate the alpha indices for.

TYPE: Signal or Binaural (single channel)

metric

Metric to calculate.

TYPE: (all_temporal_alpha_indices, all_spectral_alpha_indices) DEFAULT: "all_temporal_alpha_indices"

as_df

Whether to return a pandas DataFrame, by default False. If True, returns a MultiIndex Dataframe with ("Recording", "Channel") as the index.

TYPE: bool DEFAULT: False

func_args

Additional keyword arguments to pass to the metric function, by default {}.

TYPE: dict DEFAULT: {}

RETURNS DESCRIPTION
dict or DataFrame

Dictionary of results if as_df is False, otherwise a pandas DataFrame.

RAISES DESCRIPTION
ValueError

If the signal is not single-channel or if an unrecognized metric is specified.

See Also

maad.features.all_spectral_alpha_indices maad.features.all_temporal_alpha_indices

Source code in soundscapy/audio/metrics.py
def maad_metric_1ch(s, metric: str, as_df: bool = False, func_args={}):
    """
    Run a metric from the scikit-maad library (or suite of indices) on a single channel signal.

    Currently only supports running all of the alpha indices at once.

    Parameters
    ----------
    s : Signal or Binaural (single channel)
        Single channel signal to calculate the alpha indices for.
    metric : {"all_temporal_alpha_indices", "all_spectral_alpha_indices"}
        Metric to calculate.
    as_df : bool, optional
        Whether to return a pandas DataFrame, by default False.
        If True, returns a MultiIndex Dataframe with ("Recording", "Channel") as the index.
    func_args : dict, optional
        Additional keyword arguments to pass to the metric function, by default {}.

    Returns
    -------
    dict or pd.DataFrame
        Dictionary of results if as_df is False, otherwise a pandas DataFrame.

    Raises
    ------
    ValueError
        If the signal is not single-channel or if an unrecognized metric is specified.

    See Also
    --------
    maad.features.all_spectral_alpha_indices
    maad.features.all_temporal_alpha_indices

    """
    logger.debug(f"Calculating MAAD metric: {metric}")

    # Checks and status
    if s.channels != 1:
        logger.error("Signal must be single channel")
        raise ValueError("Signal must be single channel")

    logger.debug(f"Calculating scikit-maad {metric}")

    # Start the calc
    try:
        if metric == "all_spectral_alpha_indices":
            Sxx, tn, fn, ext = spectrogram(s, s.fs, **func_args)
            res = all_spectral_alpha_indices(Sxx, tn, fn, extent=ext, **func_args)[0]
        elif metric == "all_temporal_alpha_indices":
            res = all_temporal_alpha_indices(s, s.fs, **func_args)
        else:
            logger.error(f"Metric {metric} not recognized")
            raise ValueError(f"Metric {metric} not recognized.")
    except Exception as e:
        logger.error(f"Error calculating {metric}: {e!s}")
        raise

    if not as_df:
        return res.to_dict("records")[0]
    try:
        res["Recording"] = s.recording
        res.set_index(["Recording"], inplace=True)
        return res
    except AttributeError:
        return res
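
scikit-maad returns its alpha indices as a one-row DataFrame, which the dict branch above flattens with `to_dict("records")[0]`. A small sketch of the two return shapes (index names and values invented, not real scikit-maad output):

```python
import pandas as pd

# scikit-maad style result: a one-row DataFrame of indices (values invented)
res = pd.DataFrame({"Ht": [0.82], "Hf": [0.91], "ACI": [312.4]})

# as_df=False branch: flatten the single row to a plain dict
as_dict = res.to_dict("records")[0]

# as_df=True branch: index the row by the recording name
res_df = res.copy()
res_df["Recording"] = "CT101"
res_df = res_df.set_index("Recording")
```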

maad_metric_2ch

maad_metric_2ch(b, metric, channel_names=('Left', 'Right'), as_df=False, func_args={})

Run a metric from the scikit-maad library (or suite of indices) on a binaural signal.

Currently only supports running all the alpha indices at once.

PARAMETER DESCRIPTION
b

Binaural signal to calculate the alpha indices for.

TYPE: Binaural

metric

Metric to calculate.

TYPE: (all_temporal_alpha_indices, all_spectral_alpha_indices) DEFAULT: "all_temporal_alpha_indices"

channel_names

Custom names for the channels, by default ("Left", "Right").

TYPE: tuple of str DEFAULT: ('Left', 'Right')

as_df

Whether to return a pandas DataFrame, by default False. If True, returns a MultiIndex Dataframe with ("Recording", "Channel") as the index.

TYPE: bool DEFAULT: False

func_args

Additional arguments to pass to the metric function, by default {}.

TYPE: dict DEFAULT: {}

RETURNS DESCRIPTION
dict or DataFrame

Dictionary of results if as_df is False, otherwise a pandas DataFrame.

RAISES DESCRIPTION
ValueError

If the input signal is not 2-channel or if an unrecognized metric is specified.

See Also

scikit-maad library maad_metric_1ch

Source code in soundscapy/audio/metrics.py
def maad_metric_2ch(
    b,
    metric: str,
    channel_names: tuple[str, str] = ("Left", "Right"),
    as_df: bool = False,
    func_args={},
):
    """
    Run a metric from the scikit-maad library (or suite of indices) on a binaural signal.

    Currently only supports running all the alpha indices at once.

    Parameters
    ----------
    b : Binaural
        Binaural signal to calculate the alpha indices for.
    metric : {"all_temporal_alpha_indices", "all_spectral_alpha_indices"}
        Metric to calculate.
    channel_names : tuple of str, optional
        Custom names for the channels, by default ("Left", "Right").
    as_df : bool, optional
        Whether to return a pandas DataFrame, by default False.
        If True, returns a MultiIndex Dataframe with ("Recording", "Channel") as the index.
    func_args : dict, optional
        Additional arguments to pass to the metric function, by default {}.

    Returns
    -------
    dict or pd.DataFrame
        Dictionary of results if as_df is False, otherwise a pandas DataFrame.

    Raises
    ------
    ValueError
        If the input signal is not 2-channel or if an unrecognized metric is specified.

    See Also
    --------
    scikit-maad library
    maad_metric_1ch

    """
    logger.debug(f"Calculating MAAD metric for 2 channels: {metric}")

    if b.channels != 2:
        logger.error("Must be 2 channel signal. Use `maad_metric_1ch` instead.")
        raise ValueError("Must be 2 channel signal. Use `maad_metric_1ch` instead.")

    logger.debug(f"Calculating scikit-maad {metric}")

    try:
        res_l = maad_metric_1ch(b[0], metric, as_df=False, func_args=func_args)
        res_r = maad_metric_1ch(b[1], metric, as_df=False, func_args=func_args)
        res = {channel_names[0]: res_l, channel_names[1]: res_r}
    except Exception as e:
        logger.error(f"Error calculating MAAD metric {metric} for 2 channels: {e!s}")
        raise

    if not as_df:
        return res
    try:
        rec = b.recording
    except AttributeError:
        rec = 0
    df = pd.DataFrame.from_dict(res, orient="index")
    df["Recording"] = rec
    df["Channel"] = df.index
    df.set_index(["Recording", "Channel"], inplace=True)
    return df

mosqito_metric_1ch

mosqito_metric_1ch(s, metric, statistics=(5, 10, 50, 90, 95, 'avg', 'max', 'min', 'kurt', 'skew'), label=None, *, as_df=False, return_time_series=False, **kwargs)

Calculate a MoSQITo psychoacoustic metric for a single channel signal.

PARAMETER DESCRIPTION
s

Single channel signal object to analyze.

TYPE: Signal

metric

Name of the metric to calculate. Options are "loudness_zwtv", "roughness_dw", "sharpness_din_from_loudness", "sharpness_din_perseg", or "sharpness_din_tv".

TYPE: str

statistics

Statistics to calculate on the metric results.

TYPE: tuple[int | str, ...] DEFAULT: (5, 10, 50, 90, 95, 'avg', 'max', 'min', 'kurt', 'skew')

label

Label to use for the metric in the results. If None, uses a default label.

TYPE: str DEFAULT: None

as_df

If True, return results as a pandas DataFrame. Otherwise, return a dictionary.

TYPE: bool DEFAULT: False

return_time_series

If True, include the full time series in the results.

TYPE: bool DEFAULT: False

**kwargs

Additional keyword arguments to pass to the underlying MoSQITo function.

TYPE: Unpack[_MosqitoMetricParams]

RETURNS DESCRIPTION
Union[dict, DataFrame]

Results of the metric calculation and statistics.

RAISES DESCRIPTION
ValueError

If the input signal is not single-channel or if an unrecognized metric is specified.

Examples:

>>> # xdoctest: +SKIP
>>> from soundscapy.audio import Binaural
>>> signal = Binaural.from_wav("audio.wav", resample=48000)
>>> results = mosqito_metric_1ch(signal[0], "loudness_zwtv", as_df=True)
Source code in soundscapy/audio/metrics.py
def mosqito_metric_1ch(
    s: Signal,
    metric: Literal[
        "loudness_zwtv",
        "roughness_dw",
        "sharpness_din_from_loudness",
        "sharpness_din_perseg",
        "sharpness_din_tv",
    ],
    statistics: tuple[int | str, ...] = (
        5,
        10,
        50,
        90,
        95,
        "avg",
        "max",
        "min",
        "kurt",
        "skew",
    ),
    label: str | None = None,
    *,
    as_df: bool = False,
    return_time_series: bool = False,
    **kwargs: Unpack[_MosqitoMetricParams],
) -> dict | pd.DataFrame:
    """
    Calculate a MoSQITo psychoacoustic metric for a single channel signal.

    Parameters
    ----------
    s : Signal
        Single channel signal object to analyze.
    metric : str
        Name of the metric to calculate. Options are "loudness_zwtv",
        "roughness_dw", "sharpness_din_from_loudness", "sharpness_din_perseg",
        or "sharpness_din_tv".
    statistics : tuple[int | str, ...], optional
        Statistics to calculate on the metric results.
    label : str, optional
        Label to use for the metric in the results. If None, uses a default label.
    as_df : bool, optional
        If True, return results as a pandas DataFrame. Otherwise, return a dictionary.
    return_time_series : bool, optional
        If True, include the full time series in the results.
    **kwargs : Unpack[_MosqitoMetricParams], optional
        Additional keyword arguments to pass to the underlying MoSQITo function.

    Returns
    -------
    Union[dict, pd.DataFrame]
        Results of the metric calculation and statistics.

    Raises
    ------
    ValueError
        If the input signal is not single-channel
        or if an unrecognized metric is specified.

    Examples
    --------
    >>> # xdoctest: +SKIP
    >>> from soundscapy.audio import Binaural
    >>> signal = Binaural.from_wav("audio.wav", resample=48000)
    >>> results = mosqito_metric_1ch(signal[0], "loudness_zwtv", as_df=True)

    """
    logger.debug(f"Calculating MoSQITo metric: {metric}")

    # Checks and warnings
    if s.channels != 1:
        logger.error("Signal must be single channel")
        msg = "Signal must be single channel"
        raise ValueError(msg)
    try:
        label = label or DEFAULT_LABELS[metric]
    except KeyError as e:
        logger.error(f"Metric {metric} not recognized")
        msg = f"Metric {metric} not recognized."
        raise ValueError(msg) from e
    if as_df and return_time_series:
        logger.warning(
            "Cannot return both a dataframe and time series. Returning dataframe only."
        )
        return_time_series = False

    # Start the calc
    res = {}
    try:
        if metric == "loudness_zwtv":
            # Prepare args specifically for loudness_zwtv
            loudness_args = {}
            if "field_type" in kwargs:
                loudness_args["field_type"] = kwargs["field_type"]
            # Call with filtered args
            N, N_spec, _, time_axis = loudness_zwtv(s, s.fs, **loudness_args)  # noqa: N806
            # TODO(MitchellAcoustics): Add the bark_axis back in
            # when we implement time series calcs
            # https://github.com/MitchellAcoustics/Soundscapy/issues/113
            res = _stat_calcs(label, N, res, statistics)
            if return_time_series:
                res[f"{label}_ts"] = (time_axis, N)

        elif metric == "roughness_dw":
            # Prepare args specifically for roughness_dw
            roughness_args = {}
            if "overlap" in kwargs:
                roughness_args["overlap"] = kwargs["overlap"]
            # Call with filtered args
            R, _, _, time_axis = roughness_dw(s, s.fs, **roughness_args)  # noqa: N806
            # TODO(MitchellAcoustics): Add the R_spec and bark_axis back in
            # when we implement time series calcs
            # https://github.com/MitchellAcoustics/Soundscapy/issues/113
            if isinstance(R, float | int):
                res[label] = R
            elif isinstance(R, np.ndarray) and len(R) == 1:
                res[label] = R[0]
            else:
                res = _stat_calcs(label, R, res, statistics)
            if return_time_series:
                res[f"{label}_ts"] = (time_axis, R)

        elif metric == "sharpness_din_from_loudness":
            # Prepare args for loudness_zwtv (needed first)
            loudness_args = {}
            if "field_type" in kwargs:
                loudness_args["field_type"] = kwargs["field_type"]
            N, N_spec, _, time_axis = loudness_zwtv(s, s.fs, **loudness_args)  # noqa: N806
            # TODO(MitchellAcoustics): Add the bark_axis back in
            # when we implement time series calcs
            # https://github.com/MitchellAcoustics/Soundscapy/issues/113
            res = _stat_calcs("N", N, res, statistics)
            if return_time_series:
                res["N_ts"] = time_axis, N

            # Prepare args specifically for sharpness_din_from_loudness
            sharpness_args = {}
            if "weighting" in kwargs:
                sharpness_args["weighting"] = kwargs["weighting"]
            # Call with filtered args
            S = sharpness_din_from_loudness(N, N_spec, **sharpness_args)  # noqa: N806
            res = _stat_calcs(label, S, res, statistics)
            if return_time_series:
                res[f"{label}_ts"] = (time_axis, S)

        elif metric == "sharpness_din_perseg":
            # Prepare args specifically for sharpness_din_perseg
            sharpness_args = {}
            if "weighting" in kwargs:
                sharpness_args["weighting"] = kwargs["weighting"]
            if "nperseg" in kwargs:
                sharpness_args["nperseg"] = kwargs["nperseg"]
            if "noverlap" in kwargs:
                sharpness_args["noverlap"] = kwargs["noverlap"]
            # Call with filtered args
            S, time_axis = sharpness_din_perseg(s, s.fs, **sharpness_args)  # noqa: N806
            res = _stat_calcs(label, S, res, statistics)
            if return_time_series:
                res[f"{label}_ts"] = (time_axis, S)

        elif metric == "sharpness_din_tv":
            # Prepare args specifically for sharpness_din_tv
            sharpness_args = {}
            if "weighting" in kwargs:
                sharpness_args["weighting"] = kwargs["weighting"]
            if "skip" in kwargs:
                sharpness_args["skip"] = kwargs["skip"]
            # Call with filtered args
            S, time_axis = sharpness_din_tv(s, s.fs, **sharpness_args)  # noqa: N806
            res = _stat_calcs(label, S, res, statistics)
            if return_time_series:
                res[f"{label}_ts"] = (time_axis, S)
        else:
            msg = f"Metric {metric} not recognized."
            logger.error(msg)
            raise ValueError(msg)
    except Exception as e:
        logger.error(f"Error calculating {metric}: {e!s}")
        raise

    # Return the results in the requested format
    if not as_df:
        return res

    rec = getattr(s, "recording", None)
    return pd.DataFrame(res, index=[rec])
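The `_stat_calcs` helper called throughout this function condenses a metric time series into labelled summary statistics. Below is a minimal, self-contained sketch of that pattern; the exceedance-percentile convention (e.g. `N_5` as the level exceeded 5% of the time) and the bare-label `avg` key are assumptions inferred from the calls above, not the library's exact implementation:

```python
import numpy as np
import pandas as pd

def stat_calcs(label, ts, res, statistics):
    """Condense a metric time series into labelled summary statistics."""
    series = pd.Series(np.asarray(ts, dtype=float))
    for stat in statistics:
        if isinstance(stat, (int, float)):
            # Exceedance level: value exceeded stat% of the time (e.g. N_5)
            res[f"{label}_{stat}"] = np.percentile(series, 100 - stat)
        elif stat == "avg":
            # Mean is stored under the bare metric label
            res[label] = series.mean()
        else:
            # "max", "min", "kurt", "skew" map to pandas Series methods
            res[f"{label}_{stat}"] = getattr(series, stat)()
    return res

ts = np.linspace(0.0, 10.0, 101)  # toy loudness time series
res = stat_calcs("N", ts, {}, (5, 50, "avg", "max", "min"))
```

The returned flat dict is what the `as_df` branch above wraps into a single-row DataFrame.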

mosqito_metric_2ch

mosqito_metric_2ch(b, metric, statistics=(5, 10, 50, 90, 95, 'avg', 'max', 'min', 'kurt', 'skew'), label=None, channel_names=('Left', 'Right'), as_df=False, return_time_series=False, parallel=True, func_args={})

Calculate metrics from MoSQITo for a two-channel signal with optional parallel processing.

PARAMETER DESCRIPTION
b

Binaural signal to calculate the sound quality indices for.

TYPE: Binaural

metric

Metric to calculate.

TYPE: {"loudness_zwtv", "sharpness_din_from_loudness", "sharpness_din_perseg", "sharpness_din_tv", "roughness_dw"}

statistics

List of level statistics to calculate (e.g. L_5, L_90, etc.).

TYPE: tuple or list DEFAULT: (5, 10, 50, 90, 95, 'avg', 'max', 'min', 'kurt', 'skew')

label

Label to use for the metric in the results dictionary. If None, will pull from default label for that metric given in DEFAULT_LABELS.

TYPE: str DEFAULT: None

channel_names

Custom names for the channels, by default ("Left", "Right").

TYPE: tuple of str DEFAULT: ('Left', 'Right')

as_df

Whether to return a pandas DataFrame, by default False. If True, returns a MultiIndex DataFrame with ("Recording", "Channel") as the index.

TYPE: bool DEFAULT: False

return_time_series

Whether to return the time series of the metric, by default False. Only works for metrics that return a time series array. Cannot be returned in a dataframe.

TYPE: bool DEFAULT: False

parallel

Whether to process channels in parallel, by default True.

TYPE: bool DEFAULT: True

func_args

Additional arguments to pass to the metric function, by default {}.

TYPE: dict DEFAULT: {}

RETURNS DESCRIPTION
dict or DataFrame

Dictionary of results if as_df is False, otherwise a pandas DataFrame.

RAISES DESCRIPTION
ValueError

If the input signal is not 2-channel.

Source code in soundscapy/audio/metrics.py
def mosqito_metric_2ch(
    b,
    metric: str,
    statistics: tuple | list = (
        5,
        10,
        50,
        90,
        95,
        "avg",
        "max",
        "min",
        "kurt",
        "skew",
    ),
    label: str = None,
    channel_names: tuple[str, str] = ("Left", "Right"),
    as_df: bool = False,
    return_time_series: bool = False,
    parallel: bool = True,
    func_args={},
):
    """
    Calculate metrics from MoSQITo for a two-channel signal with optional parallel processing.

    Parameters
    ----------
    b : Binaural
        Binaural signal to calculate the sound quality indices for.
    metric : {"loudness_zwtv", "sharpness_din_from_loudness", "sharpness_din_perseg",
    "sharpness_din_tv", "roughness_dw"}
        Metric to calculate.
    statistics : tuple or list, optional
        List of level statistics to calculate (e.g. L_5, L_90, etc.).
    label : str, optional
        Label to use for the metric in the results dictionary.
        If None, will pull from default label for that metric given in DEFAULT_LABELS.
    channel_names : tuple of str, optional
        Custom names for the channels, by default ("Left", "Right").
    as_df : bool, optional
        Whether to return a pandas DataFrame, by default False.
        If True, returns a MultiIndex DataFrame with ("Recording", "Channel") as the index.
    return_time_series : bool, optional
        Whether to return the time series of the metric, by default False.
        Only works for metrics that return a time series array.
        Cannot be returned in a dataframe.
    parallel : bool, optional
        Whether to process channels in parallel, by default True.
    func_args : dict, optional
        Additional arguments to pass to the metric function, by default {}.

    Returns
    -------
    dict or pd.DataFrame
        Dictionary of results if as_df is False, otherwise a pandas DataFrame.

    Raises
    ------
    ValueError
        If the input signal is not 2-channel.

    """
    logger.debug(f"Calculating MoSQITo metric for 2 channels: {metric}")

    if b.channels != 2:
        logger.error("Must be 2 channel signal. Use `mosqito_metric_1ch` instead.")
        raise ValueError("Must be 2 channel signal. Use `mosqito_metric_1ch` instead.")

    if metric == "sharpness_din_from_loudness":
        logger.debug(
            "Calculating MoSQITo metrics: `sharpness_din` from `loudness_zwtv`"
        )
    else:
        logger.debug(f"Calculating MoSQITo metric: {metric}")

    try:
        if parallel:
            with concurrent.futures.ThreadPoolExecutor(max_workers=2) as executor:
                future_l = executor.submit(
                    mosqito_metric_1ch,
                    b[0],
                    metric,
                    statistics,
                    label,
                    as_df=False,
                    return_time_series=return_time_series,
                    **func_args,
                )
                future_r = executor.submit(
                    mosqito_metric_1ch,
                    b[1],
                    metric,
                    statistics,
                    label,
                    as_df=False,
                    return_time_series=return_time_series,
                    **func_args,
                )
                res_l = future_l.result()
                res_r = future_r.result()
        else:
            res_l = mosqito_metric_1ch(
                b[0],
                metric,
                statistics,
                label,
                as_df=False,
                return_time_series=return_time_series,
                **func_args,
            )
            res_r = mosqito_metric_1ch(
                b[1],
                metric,
                statistics,
                label,
                as_df=False,
                return_time_series=return_time_series,
                **func_args,
            )

        res = {channel_names[0]: res_l, channel_names[1]: res_r}
    except Exception as e:
        logger.error(f"Error calculating MoSQITo metric {metric} for 2 channels: {e!s}")
        raise

    if not as_df:
        return res
    try:
        rec = b.recording
    except AttributeError:
        rec = 0
    df = pd.DataFrame.from_dict(res, orient="index")
    df["Recording"] = rec
    df["Channel"] = df.index
    df.set_index(["Recording", "Channel"], inplace=True)
    return df
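The parallel branch above fans the two channels out to a thread pool and joins on the futures. A stripped-down, runnable sketch of that pattern, with a dummy stand-in for `mosqito_metric_1ch`:

```python
import concurrent.futures

def metric_1ch(channel, metric):
    # Stand-in for mosqito_metric_1ch: returns a per-channel result dict
    return {metric: sum(channel) / len(channel)}

def metric_2ch(channels, metric, channel_names=("Left", "Right")):
    # Submit both channels at once; .result() blocks until each finishes
    with concurrent.futures.ThreadPoolExecutor(max_workers=2) as executor:
        futures = [executor.submit(metric_1ch, ch, metric) for ch in channels]
        res_l, res_r = (f.result() for f in futures)
    return {channel_names[0]: res_l, channel_names[1]: res_r}

out = metric_2ch(([1.0, 3.0], [2.0, 6.0]), "loudness")
```

Threads suit this workload because the heavy lifting inside MoSQITo happens in NumPy routines that release the GIL; the serial branch is identical but calls the single-channel function twice in sequence.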

prep_multiindex_df

prep_multiindex_df(dictionary, label='Leq', incl_metric=True)

Prepare a MultiIndex dataframe from a dictionary of results.

PARAMETER DESCRIPTION
dictionary

Dict of results with recording name as key, channels {"Left", "Right"} as second key, and Leq metric as value.

TYPE: dict

label

Name of metric included, by default "Leq".

TYPE: str DEFAULT: 'Leq'

incl_metric

Whether to include the metric value in the resulting dataframe, by default True. If False, will only set up the DataFrame with the proper MultiIndex.

TYPE: bool DEFAULT: True

RETURNS DESCRIPTION
DataFrame

Index includes "Recording" and "Channel" with a column for each index if incl_metric.

RAISES DESCRIPTION
ValueError

If the input dictionary is not in the expected format.

Source code in soundscapy/audio/metrics.py
def prep_multiindex_df(dictionary: dict, label: str = "Leq", incl_metric: bool = True):
    """
    Prepare a MultiIndex dataframe from a dictionary of results.

    Parameters
    ----------
    dictionary : dict
        Dict of results with recording name as key, channels {"Left", "Right"} as second key,
        and Leq metric as value.
    label : str, optional
        Name of metric included, by default "Leq".
    incl_metric : bool, optional
        Whether to include the metric value in the resulting dataframe, by default True.
        If False, will only set up the DataFrame with the proper MultiIndex.

    Returns
    -------
    pd.DataFrame
        Index includes "Recording" and "Channel" with a column for each index if `incl_metric`.

    Raises
    ------
    ValueError
        If the input dictionary is not in the expected format.

    """
    logger.info("Preparing MultiIndex DataFrame")
    try:
        new_dict = {}
        for outerKey, innerDict in dictionary.items():
            for innerKey, values in innerDict.items():
                new_dict[(outerKey, innerKey)] = values
        idx = pd.MultiIndex.from_tuples(new_dict.keys())
        df = pd.DataFrame(new_dict.values(), index=idx, columns=[label])
        df.index.names = ["Recording", "Channel"]
        if not incl_metric:
            df = df.drop(columns=[label])
        logger.debug("MultiIndex DataFrame prepared successfully")
        return df
    except Exception as e:
        logger.error(f"Error preparing MultiIndex DataFrame: {e!s}")
        raise ValueError("Invalid input dictionary format") from e

process_all_metrics

process_all_metrics(b, analysis_settings, parallel=True)

Process all metrics specified in the analysis settings for a binaural signal.

This function runs through all enabled metrics in the provided analysis settings, computes them for the given binaural signal, and compiles the results into a single DataFrame.

PARAMETER DESCRIPTION
b

Binaural signal object to process.

TYPE: Binaural

analysis_settings

Configuration object specifying which metrics to run and their parameters.

TYPE: AnalysisSettings

parallel

If True, run applicable calculations in parallel. Defaults to True.

TYPE: bool DEFAULT: True

RETURNS DESCRIPTION
DataFrame

A MultiIndex DataFrame containing results from all processed metrics. The index includes "Recording" and "Channel" levels.

RAISES DESCRIPTION
ValueError

If there's an error processing any of the metrics.

Notes

The parallel option primarily affects the MoSQITo metrics. Other metrics may not benefit from parallelization.

Examples:

>>> # xdoctest: +SKIP
>>> from soundscapy.audio import Binaural
>>> from soundscapy import AnalysisSettings
>>> signal = Binaural.from_wav("audio.wav", resample=48000)
>>> settings = AnalysisSettings.from_yaml("settings.yaml")
>>> results = process_all_metrics(signal, settings)
Source code in soundscapy/audio/metrics.py
def process_all_metrics(
    b, analysis_settings: AnalysisSettings, parallel: bool = True
) -> pd.DataFrame:
    """
    Process all metrics specified in the analysis settings for a binaural signal.

    This function runs through all enabled metrics in the provided analysis settings,
    computes them for the given binaural signal, and compiles the results into a single DataFrame.

    Parameters
    ----------
    b : Binaural
        Binaural signal object to process.
    analysis_settings : AnalysisSettings
        Configuration object specifying which metrics to run and their parameters.
    parallel : bool, optional
        If True, run applicable calculations in parallel. Defaults to True.

    Returns
    -------
    pd.DataFrame
        A MultiIndex DataFrame containing results from all processed metrics.
        The index includes "Recording" and "Channel" levels.

    Raises
    ------
    ValueError
        If there's an error processing any of the metrics.

    Notes
    -----
    The parallel option primarily affects the MoSQITo metrics. Other metrics may not
    benefit from parallelization.

    Examples
    --------
    >>> # xdoctest: +SKIP
    >>> from soundscapy.audio import Binaural
    >>> from soundscapy import AnalysisSettings
    >>> signal = Binaural.from_wav("audio.wav", resample=48000)
    >>> settings = AnalysisSettings.from_yaml("settings.yaml")
    >>> results = process_all_metrics(signal, settings)

    """
    logger.info(f"Processing all metrics for {b.recording}")
    logger.debug(f"Parallel processing: {parallel}")

    idx = pd.MultiIndex.from_tuples(((b.recording, "Left"), (b.recording, "Right")))
    results_df = pd.DataFrame(index=idx)
    results_df.index.names = ["Recording", "Channel"]

    try:
        for (
            library,
            metrics_settings,
        ) in analysis_settings.get_enabled_metrics().items():
            for metric in metrics_settings.keys():
                logger.debug(f"Processing {library} metric: {metric}")
                if library == "AcousticToolbox":
                    results_df = pd.concat(
                        (
                            results_df,
                            b.acoustics_metric(
                                metric, metric_settings=metrics_settings[metric]
                            ),
                        ),
                        axis=1,
                    )
                elif library == "MoSQITo":
                    results_df = pd.concat(
                        (
                            results_df,
                            b.mosqito_metric(
                                metric,
                                parallel=parallel,
                                metric_settings=metrics_settings[metric],
                            ),
                        ),
                        axis=1,
                    )
                elif library == "scikit-maad" or library == "scikit_maad":
                    results_df = pd.concat(
                        (
                            results_df,
                            b.maad_metric(
                                metric, metric_settings=metrics_settings[metric]
                            ),
                        ),
                        axis=1,
                    )
        logger.info("All metrics processed successfully")
        return results_df
    except Exception as e:
        logger.error(f"Error processing metrics: {e!s}")
        raise ValueError("Error processing metrics") from e


Analysis Settings

Module for managing audio analysis settings using Pydantic models.

This module defines Pydantic models for configuring analysis settings for different audio processing libraries (AcousticToolbox, MoSQITo, scikit-maad). It includes classes for individual metric settings, library settings, and overall analysis settings. It also provides a ConfigManager class for loading, saving, merging, and managing configurations from YAML files or dictionaries.

CLASS DESCRIPTION
AnalysisSettings

Settings for audio analysis methods.

ConfigManager

Manage configuration settings for audio analysis.

LibrarySettings

Settings for a library of metrics.

MetricSettings

Settings for an individual metric.

AnalysisSettings

Bases: BaseModel

Settings for audio analysis methods.

PARAMETER DESCRIPTION
version

Version of the configuration.

TYPE: str

AcousticToolbox

Settings for AcousticToolbox metrics.

TYPE: LibrarySettings | None

MoSQITo

Settings for MoSQITo metrics.

TYPE: LibrarySettings | None

scikit_maad

Settings for scikit-maad metrics.

TYPE: LibrarySettings | None

METHOD DESCRIPTION
default

Create a default AnalysisSettings using the package default configuration file.

from_dict

Create an AnalysisSettings object from a dictionary.

from_yaml

Create an AnalysisSettings object from a YAML file.

get_enabled_metrics

Get a dictionary of enabled metrics.

get_metric_settings

Get the settings for a specific metric.

to_yaml

Save the current settings to a YAML file.

update_setting

Update the settings for a specific metric.

validate_library_settings

Validate library settings.

default classmethod

default()

Create a default AnalysisSettings using the package default configuration file.

RETURNS DESCRIPTION
AnalysisSettings

An instance of AnalysisSettings with default settings.

Source code in soundscapy/audio/analysis_settings.py
@classmethod
def default(cls) -> AnalysisSettings:
    """
    Create a default AnalysisSettings using the package default configuration file.

    Returns
    -------
    AnalysisSettings
        An instance of AnalysisSettings with default settings.

    """
    config_resource = resources.files("soundscapy.data").joinpath(
        "default_settings.yaml"
    )
    with resources.as_file(config_resource) as f:
        logger.info(f"Loading default configuration from {f}")
        return cls.from_yaml(f)

from_dict classmethod

from_dict(d)

Create an AnalysisSettings object from a dictionary.

PARAMETER DESCRIPTION
d

Dictionary containing the configuration settings.

TYPE: dict

RETURNS DESCRIPTION
AnalysisSettings

An instance of AnalysisSettings.

Source code in soundscapy/audio/analysis_settings.py
@classmethod
def from_dict(cls, d: dict) -> AnalysisSettings:
    """
    Create an AnalysisSettings object from a dictionary.

    Parameters
    ----------
    d : dict
        Dictionary containing the configuration settings.

    Returns
    -------
    AnalysisSettings
        An instance of AnalysisSettings.

    """
    return cls(**d)

from_yaml classmethod

from_yaml(filepath)

Create an AnalysisSettings object from a YAML file.

PARAMETER DESCRIPTION
filepath

Path to the YAML configuration file.

TYPE: str | Path

RETURNS DESCRIPTION
AnalysisSettings

An instance of AnalysisSettings.

Source code in soundscapy/audio/analysis_settings.py
@classmethod
def from_yaml(cls, filepath: str | Path) -> AnalysisSettings:
    """
    Create an AnalysisSettings object from a YAML file.

    Parameters
    ----------
    filepath : str | Path
        Path to the YAML configuration file.

    Returns
    -------
    AnalysisSettings
        An instance of AnalysisSettings.

    """
    filepath = _ensure_path(filepath)
    logger.info(f"Loading configuration from {filepath}")
    with Path.open(filepath) as f:
        config_dict = yaml.safe_load(f)
    return cls(**config_dict)

get_enabled_metrics

get_enabled_metrics()

Get a dictionary of enabled metrics.

RETURNS DESCRIPTION
dict[str, dict[str, MetricSettings]]

A dictionary of enabled metrics grouped by library.

Source code in soundscapy/audio/analysis_settings.py
def get_enabled_metrics(self) -> dict[str, dict[str, MetricSettings]]:
    """
    Get a dictionary of enabled metrics.

    Returns
    -------
    dict[str, dict[str, MetricSettings]]
        A dictionary of enabled metrics grouped by library.

    """
    enabled_metrics = {}
    for library in ["AcousticToolbox", "MoSQITo", "scikit_maad"]:
        library_settings = getattr(self, library)
        if library_settings:
            enabled_metrics[library] = {
                metric: settings
                for metric, settings in library_settings.root.items()
                if settings.run
            }
    logger.debug(f"Enabled metrics: {enabled_metrics}")
    return enabled_metrics
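The filtering above keeps, per library, only the metrics whose `run` flag is set. A self-contained sketch of that pattern, using a simplified stand-in for `MetricSettings` (metric names here are illustrative):

```python
from dataclasses import dataclass

@dataclass
class MetricSettings:
    run: bool
    label: str = ""

libraries = {
    "MoSQITo": {
        "loudness_zwtv": MetricSettings(run=True, label="N"),
        "roughness_dw": MetricSettings(run=False, label="R"),
    },
    "scikit_maad": {"all_temporal_alpha_indices": MetricSettings(run=True)},
}

# Keep only metrics with run=True, preserving the library grouping
enabled = {
    lib: {m: s for m, s in metrics.items() if s.run}
    for lib, metrics in libraries.items()
}
```

`process_all_metrics` iterates over exactly this nested structure, dispatching each (library, metric) pair to the matching method on `Binaural`.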

get_metric_settings

get_metric_settings(library, metric)

Get the settings for a specific metric.

PARAMETER DESCRIPTION
library

The name of the library.

TYPE: str

metric

The name of the metric.

TYPE: str

RETURNS DESCRIPTION
MetricSettings

The settings for the specified metric.

RAISES DESCRIPTION
KeyError

If the specified library or metric is not found.

Source code in soundscapy/audio/analysis_settings.py
def get_metric_settings(self, library: str, metric: str) -> MetricSettings:
    """
    Get the settings for a specific metric.

    Parameters
    ----------
    library : str
        The name of the library.
    metric : str
        The name of the metric.

    Returns
    -------
    MetricSettings
        The settings for the specified metric.

    Raises
    ------
    KeyError
        If the specified library or metric is not found.

    """
    library_settings = getattr(self, library)
    if library_settings and metric in library_settings.root:
        return library_settings.root[metric]
    logger.error(f"Metric '{metric}' not found in library '{library}'")
    msg = f"Metric '{metric}' not found in library '{library}'"
    raise KeyError(msg)

to_yaml

to_yaml(filepath)

Save the current settings to a YAML file.

PARAMETER DESCRIPTION
filepath

Path to save the YAML file.

TYPE: str | Path

Source code in soundscapy/audio/analysis_settings.py
def to_yaml(self, filepath: str | Path) -> None:
    """
    Save the current settings to a YAML file.

    Parameters
    ----------
    filepath : str | Path
        Path to save the YAML file.

    """
    filepath = _ensure_path(filepath)
    logger.info(f"Saving configuration to {filepath}")
    with Path.open(filepath, "w") as f:
        yaml.dump(self.model_dump(by_alias=True), f)

update_setting

update_setting(library, metric, **kwargs)

Update the settings for a specific metric.

PARAMETER DESCRIPTION
library

The name of the library.

TYPE: str

metric

The name of the metric.

TYPE: str

**kwargs

Keyword arguments to update the metric settings.

TYPE: dict DEFAULT: {}

RAISES DESCRIPTION
KeyError

If the specified library or metric is not found.

Source code in soundscapy/audio/analysis_settings.py
def update_setting(self, library: str, metric: str, **kwargs: dict) -> None:
    """
    Update the settings for a specific metric.

    Parameters
    ----------
    library : str
        The name of the library.
    metric : str
        The name of the metric.
    **kwargs
        Keyword arguments to update the metric settings.

    Raises
    ------
    KeyError
        If the specified library or metric is not found.

    """
    library_settings = getattr(self, library)
    if library_settings and metric in library_settings.root:
        metric_settings = library_settings.root[metric]
        for key, value in kwargs.items():
            if hasattr(metric_settings, key):
                setattr(metric_settings, key, value)
            else:
                logger.error(f"Invalid setting '{key}' for metric '{metric}'")
    else:
        logger.error(f"Metric '{metric}' not found in library '{library}'")
        msg = f"Metric '{metric}' not found in library '{library}'"
        raise KeyError(msg)

validate_library_settings classmethod

validate_library_settings(v)

Validate library settings.

Source code in soundscapy/audio/analysis_settings.py
@field_validator("*", mode="before")
@classmethod
def validate_library_settings(cls, v: dict | LibrarySettings) -> LibrarySettings:
    """Validate library settings."""
    if isinstance(v, dict):
        return LibrarySettings(root=v)
    return v

ConfigManager

ConfigManager(config_path=None)

Manage configuration settings for audio analysis.

PARAMETER DESCRIPTION
config_path

Path to the default configuration file.

TYPE: str | Path | None

METHOD DESCRIPTION
generate_minimal_config

Generate a minimal configuration containing only changes from the default.

load_config

Load a configuration file or use the default configuration.

merge_configs

Merge the current config with override values and update the current_config.

save_config

Save the current configuration to a file.

Source code in soundscapy/audio/analysis_settings.py
def __init__(self, config_path: str | Path | None = None) -> None:  # noqa: D107
    self.config_path = _ensure_path(config_path) if config_path else None
    self.current_config: AnalysisSettings | None = None

generate_minimal_config

generate_minimal_config()

Generate a minimal configuration containing only changes from the default.

RETURNS DESCRIPTION
dict

A dictionary containing the minimal configuration.

RAISES DESCRIPTION
ValueError

If no current configuration is loaded.

Source code in soundscapy/audio/analysis_settings.py
def generate_minimal_config(self) -> dict:
    """
    Generate a minimal configuration containing only changes from the default.

    Returns
    -------
    dict
        A dictionary containing the minimal configuration.

    Raises
    ------
    ValueError
        If no current configuration is loaded.

    """
    if not self.current_config:
        msg = "No current configuration loaded."
        raise ValueError(msg)
    default_config = AnalysisSettings.default()
    current_dict = self.current_config.model_dump()
    default_dict = default_config.model_dump()
    return self._get_diff(current_dict, default_dict)
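The private `_get_diff` referenced above walks the two dumped dicts in parallel and keeps only keys whose values differ from the default. Its implementation is not shown here, so the following is an assumed sketch of that recursive diff:

```python
def get_diff(current, default):
    """Return only the entries in current that differ from default."""
    diff = {}
    for key, value in current.items():
        if isinstance(value, dict) and isinstance(default.get(key), dict):
            # Recurse into nested settings; keep the branch only if non-empty
            sub = get_diff(value, default[key])
            if sub:
                diff[key] = sub
        elif default.get(key) != value:
            diff[key] = value
    return diff

current = {"version": "1.0", "MoSQITo": {"loudness_zwtv": {"run": False}}}
default = {"version": "1.0", "MoSQITo": {"loudness_zwtv": {"run": True}}}
minimal = get_diff(current, default)
```

The resulting minimal dict is suitable for saving as a compact override file that `merge_configs` can later apply on top of the defaults.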

load_config

load_config(config_path=None)

Load a configuration file or use the default configuration.

PARAMETER DESCRIPTION
config_path

Path to the configuration file. If None, uses the default configuration.

TYPE: str | Path | None DEFAULT: None

RETURNS DESCRIPTION
AnalysisSettings

The loaded configuration.

Source code in soundscapy/audio/analysis_settings.py
def load_config(self, config_path: str | Path | None = None) -> AnalysisSettings:
    """
    Load a configuration file or use the default configuration.

    Parameters
    ----------
    config_path : str | Path | None, optional
        Path to the configuration file. If None, uses the default configuration.

    Returns
    -------
    AnalysisSettings
        The loaded configuration.

    """
    if config_path:
        logger.info(f"Loading configuration from {config_path}")
        self.current_config = AnalysisSettings.from_yaml(config_path)
    elif self.config_path:
        logger.info(f"Loading configuration from {self.config_path}")
        self.current_config = AnalysisSettings.from_yaml(self.config_path)
    else:
        logger.info("Loading default configuration")
        self.current_config = AnalysisSettings.default()
    return self.current_config

merge_configs

merge_configs(override_config)

Merge the current config with override values and update the current_config.

PARAMETER DESCRIPTION
override_config

Dictionary containing override configuration values.

TYPE: dict

RETURNS DESCRIPTION
AnalysisSettings

The merged configuration.

RAISES DESCRIPTION
ValueError

If no base configuration is loaded.

Source code in soundscapy/audio/analysis_settings.py
def merge_configs(self, override_config: dict) -> AnalysisSettings:
    """
    Merge the current config with override values and update the current_config.

    Parameters
    ----------
    override_config : dict
        Dictionary containing override configuration values.

    Returns
    -------
    AnalysisSettings
        The merged configuration.

    Raises
    ------
    ValueError
        If no base configuration is loaded.

    """
    if not self.current_config:
        logger.error("No base configuration loaded")
        msg = "No base configuration loaded."
        raise ValueError(msg)
    logger.info("Merging configurations")
    merged_dict = self.current_config.model_dump()
    self._deep_update(merged_dict, override_config)
    merged_config = AnalysisSettings(**merged_dict)
    self.current_config = merged_config  # Update the current_config
    return merged_config
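`merge_configs` performs a recursive (deep) merge through the private `_deep_update` helper, so an override dict only needs to name the keys it changes; sibling keys in nested sections survive. The standalone sketch below illustrates those merge semantics under that assumption — `deep_update` and the metric names shown are illustrative, not part of the soundscapy API.

```python
from copy import deepcopy


def deep_update(base: dict, override: dict) -> dict:
    """Recursively merge `override` into a copy of `base`.

    Nested dicts are merged key by key; any other value in `override`
    replaces the corresponding base value outright.
    """
    merged = deepcopy(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_update(merged[key], value)
        else:
            merged[key] = value
    return merged


base = {"mosqito": {"loudness_zwtv": {"run": True, "statistics": ["avg", "max"]}}}
override = {"mosqito": {"loudness_zwtv": {"run": False}}}
merged = deep_update(base, override)
# "run" is overridden, while "statistics" from the base config is preserved
```

Note how a shallow `dict.update` would instead have replaced the whole `loudness_zwtv` section, dropping `statistics`.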

save_config

save_config(filepath)

Save the current configuration to a file.

PARAMETER DESCRIPTION
filepath

Path to save the configuration file.

TYPE: str | Path

RAISES DESCRIPTION
ValueError

If no current configuration is loaded.

Source code in soundscapy/audio/analysis_settings.py
def save_config(self, filepath: str | Path) -> None:
    """
    Save the current configuration to a file.

    Parameters
    ----------
    filepath : str | Path
        Path to save the configuration file.

    Raises
    ------
    ValueError
        If no current configuration is loaded.

    """
    if self.current_config:
        logger.info(f"Saving configuration to {filepath}")
        self.current_config.to_yaml(filepath)
    else:
        logger.error("No current configuration to save")
        msg = "No current configuration to save."
        raise ValueError(msg)

LibrarySettings

Bases: RootModel

Settings for a library of metrics.

METHOD DESCRIPTION
get_metric_settings

Get the settings for a specific metric.

get_metric_settings

get_metric_settings(metric)

Get the settings for a specific metric.

PARAMETER DESCRIPTION
metric

The name of the metric.

TYPE: str

RETURNS DESCRIPTION
MetricSettings

The settings for the specified metric.

RAISES DESCRIPTION
KeyError

If the specified metric is not found.

Source code in soundscapy/audio/analysis_settings.py
def get_metric_settings(self, metric: str) -> MetricSettings:
    """
    Get the settings for a specific metric.

    Parameters
    ----------
    metric : str
        The name of the metric.

    Returns
    -------
    MetricSettings
        The settings for the specified metric.

    Raises
    ------
    KeyError
        If the specified metric is not found.

    """
    if metric in self.root:
        return self.root[metric]
    logger.error(f"Metric '{metric}' not found in library")
    msg = f"Metric '{metric}' not found in library"
    raise KeyError(msg)

MetricSettings

Bases: BaseModel

Settings for an individual metric.

PARAMETER DESCRIPTION
run

Whether to run this metric.

TYPE: bool

main

The main statistic to calculate.

TYPE: str | int | None

statistics

List of statistics to calculate.

TYPE: list[str | int] | None

channel

List of channels to analyze.

TYPE: list[str]

label

Label for the metric.

TYPE: str

parallel

Whether to run the metric in parallel.

TYPE: bool

func_args

Additional arguments for the metric function.

TYPE: dict[str, Any]

METHOD DESCRIPTION
check_main_in_statistics

Check that the main statistic is in the statistics list.

check_main_in_statistics classmethod

check_main_in_statistics(values)

Check that the main statistic is in the statistics list.

Source code in soundscapy/audio/analysis_settings.py
@model_validator(mode="before")
@classmethod
def check_main_in_statistics(cls, values: dict[str, Any]) -> dict[str, Any]:
    """Check that the main statistic is in the statistics list."""
    main = values.get("main")
    statistics = values.get("statistics") or []  # guard against an explicit None
    if main and main not in statistics:
        statistics.append(main)
        values["statistics"] = statistics
    return values
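The validator guarantees that the `main` statistic always appears in `statistics`, so downstream code can iterate over `statistics` alone. A plain-Python mirror of that normalisation (the function name here is illustrative, not part of the API):

```python
def normalise_statistics(main, statistics):
    """Ensure `main` appears in `statistics`, mirroring the validator's logic."""
    statistics = list(statistics or [])  # tolerate None
    if main and main not in statistics:
        statistics.append(main)
    return statistics


normalise_statistics("avg", ["max", "min"])  # → ["max", "min", "avg"]
normalise_statistics("avg", ["avg", "max"])  # already present, unchanged
```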


Parallel Processing

Functions for parallel processing of binaural audio files.

It includes functions to load and analyze binaural files, as well as to process multiple files in parallel using concurrent.futures.

Functions:

- load_analyse_binaural: Load and analyze a single binaural file.
- parallel_process: Process multiple binaural files in parallel.

Note: This module requires the tqdm library for progress bars and concurrent.futures for parallel processing. It uses loguru for logging.

FUNCTION DESCRIPTION
load_analyse_binaural

Load and analyze a single binaural audio file.

parallel_process

Process multiple binaural files in parallel.

tqdm_write_sink

Custom sink for loguru that writes messages using tqdm.write().

load_analyse_binaural

load_analyse_binaural(wav_file, levels, analysis_settings, resample=None, *, parallel_mosqito=True)

Load and analyze a single binaural audio file.

PARAMETER DESCRIPTION
resample

Target sampling frequency in Hz. If None, the original sampling rate is kept.

TYPE: int | None DEFAULT: None

wav_file

Path to the WAV file.

TYPE: Path

levels

Mapping from recording name to per-channel calibration levels, or a [left, right] pair applied directly. If None, no calibration is performed.

TYPE: dict | list | None

analysis_settings

Analysis settings object.

TYPE: AnalysisSettings

parallel_mosqito

Whether to process MoSQITo metrics in parallel. Defaults to True.

TYPE: bool DEFAULT: True

RETURNS DESCRIPTION
DataFrame

DataFrame with analysis results.

Source code in soundscapy/audio/parallel_processing.py
def load_analyse_binaural(
    wav_file: Path,
    levels: dict[str, float] | list[float] | None,
    analysis_settings: AnalysisSettings,
    resample: int | None = None,
    *,
    parallel_mosqito: bool = True,
) -> pd.DataFrame:
    """
    Load and analyze a single binaural audio file.

    Parameters
    ----------
    resample : int | None, optional
        Target sampling frequency in Hz. If None, the original rate is kept.
    wav_file : Path
        Path to the WAV file.
    levels : dict | list | None
        Mapping from recording name to per-channel calibration levels
        (e.g. {"Left": 65.0, "Right": 64.2}), or a [left, right] pair
        applied directly. If None, no calibration is performed.
    analysis_settings : AnalysisSettings
        Analysis settings object.
    parallel_mosqito : bool, optional
        Whether to process MoSQITo metrics in parallel. Defaults to True.

    Returns
    -------
    pd.DataFrame
        DataFrame with analysis results.

    """
    logger.info(f"Processing {wav_file}")
    try:
        b = Binaural.from_wav(wav_file, resample=resample)
        if levels is not None:
            if isinstance(levels, dict) and b.recording in levels:
                decibel = (levels[b.recording]["Left"], levels[b.recording]["Right"])
                b.calibrate_to(decibel, inplace=True)
            elif isinstance(levels, list | tuple):
                logger.debug(f"Calibrating {wav_file} to {levels} dB")
                b.calibrate_to(levels, inplace=True)
            else:
                logger.warning(f"No calibration levels found for {wav_file}")
        else:
            logger.warning(f"No calibration levels found for {wav_file}")
        return process_all_metrics(b, analysis_settings, parallel=parallel_mosqito)
    except Exception as e:
        logger.error(f"Error processing {wav_file}: {e!s}")
        raise
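The calibration branch above looks up `levels[b.recording]["Left"]` and `["Right"]`, so a per-recording `levels` dict must map recording names to per-channel dB levels. A sketch of both accepted shapes — the file names and level values are hypothetical, and the assumption that `recording` matches the WAV file's stem is hedged, not guaranteed by this page:

```python
from pathlib import Path

# Per-recording calibration levels, keyed by the recording name
# (assumed here to match the WAV file's stem).
levels = {
    "rec_01": {"Left": 65.0, "Right": 64.2},
    "rec_02": {"Left": 63.5, "Right": 63.8},
}

# Alternatively, a single [left, right] pair applied to every file:
levels_pair = [65.0, 65.0]

wav_file = Path("recordings/rec_01.wav")
# load_analyse_binaural(wav_file, levels, analysis_settings)  # would run the full analysis
```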

parallel_process

parallel_process(wav_files, results_df, levels, analysis_settings, max_workers=None, resample=None, *, parallel_mosqito=True)

Process multiple binaural files in parallel.

PARAMETER DESCRIPTION
resample

Target sampling frequency in Hz, passed through to each file load. If None, the original sampling rate is kept.

TYPE: int | None DEFAULT: None

wav_files

List of WAV files to process.

TYPE: List[Path]

results_df

Initial results DataFrame to update.

TYPE: DataFrame

levels

Dictionary with calibration levels for each file.

TYPE: Dict

analysis_settings

Analysis settings object.

TYPE: AnalysisSettings

max_workers

Maximum number of worker processes. If None, it will default to the number of processors on the machine.

TYPE: int DEFAULT: None

parallel_mosqito

Whether to process MoSQITo metrics in parallel within each file. Defaults to True.

TYPE: bool DEFAULT: True

RETURNS DESCRIPTION
DataFrame

Updated results DataFrame with analysis results for all files.

Source code in soundscapy/audio/parallel_processing.py
def parallel_process(
    wav_files: list[Path],
    results_df: pd.DataFrame,
    levels: dict,
    analysis_settings: AnalysisSettings,
    max_workers: int | None = None,
    resample: int | None = None,
    *,
    parallel_mosqito: bool = True,
) -> pd.DataFrame:
    """
    Process multiple binaural files in parallel.

    Parameters
    ----------
    resample : int | None, optional
        Target sampling frequency in Hz, passed through to each file load.
        If None, the original rate is kept.
    wav_files : List[Path]
        List of WAV files to process.
    results_df : pd.DataFrame
        Initial results DataFrame to update.
    levels : dict
        Dictionary with calibration levels for each file.
    analysis_settings : AnalysisSettings
        Analysis settings object.
    max_workers : int, optional
        Maximum number of worker processes. If None, it will default to the number of processors on the machine.
    parallel_mosqito : bool, optional
        Whether to process MoSQITo metrics in parallel within each file. Defaults to True.

    Returns
    -------
    pd.DataFrame
        Updated results DataFrame with analysis results for all files.

    """
    logger.info(f"Starting parallel processing of {len(wav_files)} files")

    # Add a handler that uses tqdm.write for output
    tqdm_handler_id = logger.add(tqdm_write_sink, format="{message}")

    with concurrent.futures.ProcessPoolExecutor(max_workers=max_workers) as executor:
        futures = []
        for wav_file in wav_files:
            future = executor.submit(
                load_analyse_binaural,
                wav_file,
                levels,
                analysis_settings,
                resample,
                parallel_mosqito=parallel_mosqito,
            )
            futures.append(future)

        with tqdm(total=len(futures), desc="Processing files") as pbar:
            for future in concurrent.futures.as_completed(futures):
                try:
                    result = future.result()
                    results_df = add_results(results_df, result)
                except Exception as e:
                    logger.error(f"Error processing file: {e!s}")
                finally:
                    pbar.update(1)

    # Remove the tqdm-compatible handler
    logger.remove(tqdm_handler_id)

    logger.info("Parallel processing completed")
    return results_df
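`parallel_process` follows the standard submit/`as_completed` pattern: submit every file up front, then collect results in completion order, catching per-file failures so one bad file does not abort the batch. A self-contained sketch of that pattern, using a `ThreadPoolExecutor` and a toy `analyse` stand-in (the library itself uses `ProcessPoolExecutor` and `load_analyse_binaural`):

```python
import concurrent.futures


def analyse(name: str) -> str:
    # Stand-in for load_analyse_binaural: pretend to analyse one file.
    return f"{name}: done"


files = ["rec_01.wav", "rec_02.wav", "rec_03.wav"]
results = []

with concurrent.futures.ThreadPoolExecutor(max_workers=2) as executor:
    # Map each future back to its file so errors can be attributed.
    futures = {executor.submit(analyse, f): f for f in files}
    for future in concurrent.futures.as_completed(futures):
        try:
            results.append(future.result())
        except Exception as exc:  # one failure should not abort the batch
            print(f"Error processing {futures[future]}: {exc}")

# Completion order is nondeterministic; sort for a stable view.
results.sort()
```

With processes instead of threads, the submitted callable must be picklable (a module-level function), which is why `load_analyse_binaural` is defined at module scope.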

tqdm_write_sink

tqdm_write_sink(message)

Custom sink for loguru that writes messages using tqdm.write().

This ensures that log messages don't interfere with tqdm progress bars.

Source code in soundscapy/audio/parallel_processing.py
def tqdm_write_sink(message: str) -> None:
    """
    Custom sink for loguru that writes messages using tqdm.write().

    This ensures that log messages don't interfere with tqdm progress bars.
    """  # noqa: D401
    tqdm.write(message, end="")
