Tokotron: Tokenized TTS for the SpeechBrain benchmark (single speaker) #37

Merged 144 commits on Jul 25, 2024

Commits
d76a3b9  Tokotron: Initial import for the Benchmark (flexthink, May 7, 2024)
1dcc20c  Tokotron: Update hyperparameter defaults (flexthink, May 7, 2024)
7e1ae9b  Tokotron: Add a workaround for concat_padded_features, which cannot b… (flexthink, May 8, 2024)
75509fc  Tokotron: Update the default EOS mode (flexthink, May 8, 2024)
d55a23d  Tokotron: Add multispeaker support with LibriTTS (flexthink, May 9, 2024)
9dcaf55  Tokotron: Fix defaults (flexthink, May 9, 2024)
c23f6ce  Tokotron: LibriTTS fixes (flexthink, May 9, 2024)
1e187e1  Tokotron: Fixes for feature extraction (flexthink, May 10, 2024)
6d1326b  Tokotron: Add support for Encodec (flexthink, May 10, 2024)
65e0754  Tokotron: Fix for Encodec (flexthink, May 10, 2024)
4afb395  Tokotron: Add normalization to embeddings, change the injection strategy (flexthink, May 18, 2024)
0e84349  DASB: Implement SSL Tacotron (continuous baseline) (flexthink, May 22, 2024)
bc29dfa  DASB: Tacotron: Update ljspeech parameters (flexthink, May 22, 2024)
3f9d4bf  DASB: Tacotron: Add a "freezer" (flexthink, May 22, 2024)
3f19048  DASB: Tacotron: Fixes (flexthink, May 22, 2024)
ad5991b  Tokotron: Add Encodec support (flexthink, May 22, 2024)
4fe3a10  DASB: Tacotron: Fix splits (flexthink, May 22, 2024)
6ac1a13  DASB: Tacotron device fix (flexthink, May 23, 2024)
31b6080  Tokotron: Fix audio_num_tokens for encodec (flexthink, May 23, 2024)
ea0afb1  DASB: Add a MSTacotron2 recipe (flexthink, May 24, 2024)
4229ef9  Tokotron: Update LJSpeech to use the new vocoder (flexthink, May 24, 2024)
b456a79  Tokotron: Update for the latest vocoders (flexthink, May 24, 2024)
c4a30a2  DASB: MSTacotron2: Update for all possible model types (flexthink, May 25, 2024)
2142488  MSTacotron2: Fixes (flexthink, May 25, 2024)
bbdd9dc  DASB: MSTacotron2: Update a bad reference (flexthink, May 26, 2024)
4ec2af8  DASB: MSTacotron2: Fixes (flexthink, May 26, 2024)
dce0563  DASB: MSTacotron: Undo a temporary debugging change (flexthink, May 27, 2024)
7d65efe  MSTacotron2: Remove the test set (flexthink, May 27, 2024)
d296345  DASB: Tokotron: Update hparams (flexthink, May 27, 2024)
fad4c1c  DASB: Tokotron: Update multispeaker for the latest vocoder (flexthink, May 27, 2024)
2002a29  DASB: Tokotron: Fixes (flexthink, May 27, 2024)
aa809a0  MSTacotron2: Implement layer squishing (flexthink, May 28, 2024)
45e97c9  DASB: Tokotron: Fixes (flexthink, May 28, 2024)
bdb305b  DASB: Encodec: Add vocoder choice and bandwidth customization (flexthink, May 28, 2024)
3bd5c05  DASB: Tokotron: Encodec fixes (flexthink, May 28, 2024)
d099003  DASB: Tokotron: Encodec/DAC fixes (flexthink, May 28, 2024)
707c4bf  DASB: Tokotron: Fix the character list (flexthink, May 29, 2024)
2966e7d  DASB: Fix DAC OOM (flexthink, May 29, 2024)
7915f12  DASB: Tokotron: Add SpeechTokenizer (flexthink, May 29, 2024)
659e822  DASB: Tokotron: Add standalone evaluation (flexthink, May 29, 2024)
a74df26  DASB: Tokotron: Refactoring for standalone evaluation (flexthink, May 29, 2024)
89fbe64  DASB: Tokotron: Remove the reference to the custom MOS model, update … (flexthink, May 29, 2024)
631c6f3  DASB: Tacotron: Update the continuous baseline Tacotron for the new v… (flexthink, May 29, 2024)
f358f1d  DASB: TTS: Add unbatching for Whisper ASR (to evaluate faster) (flexthink, May 30, 2024)
410247e  DASB: Add layer "squishification" for Tacotron LJSpeech (flexthink, May 30, 2024)
f15a35e  DASB: Tacotron: Add dynamic layer selection (flexthink, May 30, 2024)
fbe0740  DASB: Preparation: Add optional offloading of saving to a thread exec… (flexthink, May 30, 2024)
de841f7  DASB: Tacotron: fixes (flexthink, May 30, 2024)
bbfd1fb  DASB: Feature Extraction: Fix a performance bottleneck with undo_batch (flexthink, May 30, 2024)
44dca0a  DASB: Preparation: add support for gzipping (because continuous repre… (flexthink, May 30, 2024)
2a42aff  LJSpeech: Tokotron: Fixes (flexthink, May 30, 2024)
aee749e  DASB: Preparation: Workarounds for slow clusters (flexthink, May 30, 2024)
0af7c3e  DASB: Tokotron: Evaluation improvements (alternative tmp folder, samp… (flexthink, May 31, 2024)
1629d90  DASB: Tokotron: Fixes (flexthink, May 31, 2024)
fd456f6  DASB: Tokotron: Evaluation: Fix for vocoders (flexthink, May 31, 2024)
4bd733c  DASB: Tokotron: Implement layer selection (flexthink, Jun 3, 2024)
475219c  DASB: Tacotron: Add layer selection (flexthink, Jun 3, 2024)
c689568  DASB: FIxed an extraction bug (flexthink, Jun 3, 2024)
b266eac  DASB: Tacotron (continuous baseline): Add standalone evaluation script (flexthink, Jun 4, 2024)
a017edb  DASB: Tacotron: Evaluation: Fixes (flexthink, Jun 4, 2024)
0f7bfff  DASB: Tacotron: Fixes (flexthink, Jun 4, 2024)
e3434c4  DASB: Tokotron: Add logic for frozen splits (flexthink, Jun 4, 2024)
0028e10  DASB: Evaluation: Small fix for older Pythons (flexthink, Jun 5, 2024)
4986524  DASB: Evaluation fixes (flexthink, Jun 5, 2024)
119c254  DASB: Vocoder fixes (flexthink, Jun 5, 2024)
8ea47c0  DASB: Fix checkpoint recovery (flexthink, Jun 5, 2024)
ad0ed48  DASB: Add the ability to skip unused datasets during evaluation when … (flexthink, Jun 5, 2024)
39bf46c  DASB: Tokotron: Fixes (flexthink, Jun 5, 2024)
4e5e7d1  DASB: Evaluation: Fix the save directory (flexthink, Jun 6, 2024)
3659775  DASB: Add an evaluation script for Multispeaker (tokenized) (flexthink, Jun 6, 2024)
2ba5af1  DASB: Add an evaluation script for multispaker (tokenized) (flexthink, Jun 6, 2024)
1926462  DASB: Fix the evaluation script (flexthink, Jun 6, 2024)
deae0f9  DASB: Evaluation: Add speaker embeddings the extraction of audio toke… (flexthink, Jun 6, 2024)
0c498a2  DASB: Evaluation: Vocoder fixes (flexthink, Jun 7, 2024)
e3e5143  DASB: Evaluation: Vocoder fixes (flexthink, Jun 7, 2024)
d623055  DASB: MSTacotron2: Add overfit test (flexthink, Jun 7, 2024)
d28cd57  DASB: MSTacotron2: Add overfit test (flexthink, Jun 7, 2024)
2411fff  DASB: Evaluation: Implement datset filtering (for standardized subsets) (flexthink, Jun 10, 2024)
ed355f3  Merge branch 'main' into DASB-tokotron-clean (flexthink, Jun 11, 2024)
47c602d  DASB: Remove experimental multispeaker recipes (flexthink, Jun 11, 2024)
68f2c6a  DASB: Fix linting errors (flexthink, Jun 11, 2024)
1f8f76e  DASB: Tokotron: Cosmetic changes, comments (flexthink, Jun 12, 2024)
708e1ae  DASB: Tokotron: Adjust default embedding sizes and freezing (flexthink, Jun 12, 2024)
3fc8a15  DASB: Clean-up, minor fixes (flexthink, Jun 12, 2024)
d3735fa  Merge branch 'DASB-tokotron-clean' into DASB-tokotron (flexthink, Jun 12, 2024)
e27e4f2  DASB: Tokotron: Integrate continuous representations (flexthink, Jun 12, 2024)
8420c1d  DASB: Tokotron Continuous: Fixes (flexthink, Jun 13, 2024)
62dcb9b  DASB: Tokotron: fixes (flexthink, Jun 13, 2024)
30828cf  DASB: Tokotron/Tacotron: Update epoch counts (flexthink, Jun 13, 2024)
8e67c69  DASB: Tokotron: Add clipping (flexthink, Jun 14, 2024)
0ef2577  DASB: Tokotron: Continuous: Fixes (flexthink, Jun 17, 2024)
e32d7df  DASB: Embedding size fix (flexthink, Jun 19, 2024)
84df573  Merge branch 'DASB-tokotron-continuous' of https://github.com/flexthi… (flexthink, Jun 19, 2024)
b3614c6  DASB: Tokotron: Continuous: Set up parameter groups for audio embeddings (flexthink, Jun 19, 2024)
2eecd0c  DASB: Tokotron: Add length reports (flexthink, Jun 20, 2024)
f0c8dd5  DASB: Tokotron: Add a fix making it possible to change the size of po… (flexthink, Jun 25, 2024)
eef4167  DASB: Tokotron: Fixes (flexthink, Jun 25, 2024)
f258c63  DASB: Tokotron: Fixes (flexthink, Jun 25, 2024)
94c3c6e  DASB: Tokotron: Add the ability to restore a checkpoint without an op… (flexthink, Jun 25, 2024)
828af19  DASB: Tokotron: Fix checkpoint restoration (flexthink, Jun 25, 2024)
0c450fb  TTS Stats: remove tesnor concatenation (flexthink, Jun 26, 2024)
382482b  DASB: Add the ability to pretrain models with "dummy" speaker embeddi… (flexthink, Jun 26, 2024)
241eea7  DASB: LJSpeech: Fix ljspeech_prepare (flexthink, Jun 27, 2024)
a05a679  DASB: Tokotron: Add the ability to reset LR annealing at a specific e… (flexthink, Jul 3, 2024)
a53349f  Merge branch 'DASB-tokotron-continuous' of https://github.com/flexthi… (flexthink, Jul 3, 2024)
ecbdad4  DASB: Tokotron: Impement resumable evals (flexthink, Jul 5, 2024)
86c1f34  Merge branch 'DASB-tokotron-tracker' into DASB-tokotron-continuous (flexthink, Jul 5, 2024)
91325e3  DASB: Tokotron: Add custom optimizer initialization (flexthink, Jul 7, 2024)
0a2c0ef  Merge branch 'DASB-tokotron-continuous' of https://github.com/flexthi… (flexthink, Jul 7, 2024)
34cf73c  DASB: Tokotron: Fixes (flexthink, Jul 7, 2024)
fb7ae07  DASB: Tokotron Continuous: Fixes (flexthink, Jul 8, 2024)
c0c06ca  DASB: Tokotron: Additional fixes (flexthink, Jul 8, 2024)
617875c  Merge branch 'DASB-tokotron-continuous' of https://github.com/flexthi… (flexthink, Jul 8, 2024)
3482c0e  Merge branch 'DASB-tokotron-continuous' into DASB-tokotron-refactor (flexthink, Jul 10, 2024)
7314c05  DASB: Tokotron: Remove LibriTTS multi-speaker (out of scope for this PR) (flexthink, Jul 10, 2024)
d672915  DASB: Refactor: Remove Tacotron, align naming conventions (flexthink, Jul 10, 2024)
657f7c1  DASB: Tokotron: Naming updates (flexthink, Jul 10, 2024)
f2ee2ac  DASB: Tokotron: Integrate evaluation with training (flexthink, Jul 14, 2024)
88aebfb  DASB: Tokotron: Additional fixes (flexthink, Jul 16, 2024)
add34ba  DASB: Tokotron: Fix default experiment names (flexthink, Jul 16, 2024)
e2f4b51  DASB: Tokotron: Device fix (flexthink, Jul 16, 2024)
c32c709  DASB: Tokotron: Fix a typo (flexthink, Jul 17, 2024)
c4389bd  DASB: Tokotron: Remove ST duplication (flexthink, Jul 17, 2024)
a80c1b6  Merge remote-tracking branch 'upstream/DASB' into DASB-tokotron-clean (poonehmousavi, Jul 22, 2024)
5fc43bc  DASB: Tokotron: Fix flake8 errors (flexthink, Jul 23, 2024)
d50c0df  DASB: Tokotron: Minor refactoring (flexthink, Jul 23, 2024)
29e3bab  DASB: Tokotron: Update generative script (flexthink, Jul 23, 2024)
e1f6690  DASB: Tokotron: Update script, fix a missing import (flexthink, Jul 23, 2024)
ea28166  DASB: Update the script to allow arbitrary overrides (flexthink, Jul 23, 2024)
a306d3c  fix discriminative script (poonehmousavi, Jul 23, 2024)
1ba7afd  DASB: Tokotron: Refactoring (flexthink, Jul 24, 2024)
586a7a5  Merge branch 'DASB-tokotron-clean' of https://github.com/flexthink/be… (flexthink, Jul 24, 2024)
e767da5  DASB: Tokotron: Fixes (flexthink, Jul 24, 2024)
6ae2d02  modify bash (poonehmousavi, Jul 24, 2024)
a1493a8  fix bash scripts (poonehmousavi, Jul 24, 2024)
7f92e6f  add print to script (poonehmousavi, Jul 24, 2024)
dc1daf3  DASB: Tokotron: Add the UTMOS path to the shell script (flexthink, Jul 24, 2024)
f74c40e  Merge branch 'DASB-tokotron-clean' of https://github.com/flexthink/be… (flexthink, Jul 24, 2024)
4522953  DASB: Tokotron: Update README for UTMOS (flexthink, Jul 24, 2024)
b11c1d2  FIX INDENT IN README (poonehmousavi, Jul 24, 2024)
dc0bf29  DASB: Tokotron: Update TTS args (flexthink, Jul 24, 2024)
1318aee  remove comments (poonehmousavi, Jul 25, 2024)
e89de0e  fix precommit (poonehmousavi, Jul 25, 2024)
2939ae4  fix main readme with main branch (poonehmousavi, Jul 25, 2024)
Files changed

1 change: 1 addition & 0 deletions .pre-commit-config.yaml

@@ -4,6 +4,7 @@ repos:
     hooks:
       - id: trailing-whitespace
         types: [file, text]
+        exclude: ".*char_[a-z]{2}.txt"
       - id: end-of-file-fixer
         types: [python]
       - id: requirements-txt-fixer
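The added `exclude` is a Python regular expression matched against file paths; it keeps the trailing-whitespace hook away from the benchmark's per-language character lists (presumably because whitespace entries in those token lists are significant). A quick sketch of what the pattern matches; the file names here are illustrative, not taken from the PR:

```python
import re

# the pattern added above; pre-commit excludes are Python regexes
pattern = re.compile(r".*char_[a-z]{2}.txt")

assert pattern.search("benchmarks/DASB/char_en.txt")     # skipped by the hook
assert not pattern.search("benchmarks/DASB/readme.txt")  # still checked
```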
366 changes: 366 additions & 0 deletions benchmarks/DASB/LJSpeech/TTS/tokotron/evaluate.py

@@ -0,0 +1,366 @@
"""Evaluates a checkpoint using an MOS estimation tool

Authors
* Artem Ploujnikov 2024
"""

import speechbrain as sb
import json
import logging
import csv
import torch
import re
from pathlib import Path
from types import SimpleNamespace
from torch.nn import ModuleDict
from benchmarks.DASB.utils.data import undo_batch
from benchmarks.DASB.utils.eval import vocoder_to_device

logger = logging.getLogger(__name__)


class TokotronEvaluator:
    """An evaluator class for the TTS model

    Arguments
    ---------
    hparams : dict
        hyperparameters (as a dictionary)
    create_waveform_fn : callable
        the function that will be used to create
        waveforms (not unified across all implementations)
    device : str | torch.device
        the device
    """

    def __init__(self, hparams, create_waveform_fn, device):
        self.hparams = SimpleNamespace(**hparams)
        self.create_waveform_fn = create_waveform_fn
        self.device = device
        modules = self.hparams.modules
        self.modules = ModuleDict(modules).to(self.device)
        self.modules.model.vocoder = None
        self.enabled_evaluators = set(self.hparams.evaluations.split(","))
        evaluators = hparams.get("evaluators", {})
        if evaluators:
            self.evaluators = {
                key: evaluator_f(run_opts={"device": device})
                for key, evaluator_f in evaluators.items()
                if key in self.enabled_evaluators
            }
        else:
            self.evaluators = {}

        bulk_evaluators = getattr(self.hparams, "bulk_evaluators", {})
        if bulk_evaluators:
            self.bulk_evaluators = {
                key: evaluator_f()
                for key, evaluator_f in bulk_evaluators.items()
                if key in self.enabled_evaluators
            }
        else:
            self.bulk_evaluators = {}

        if not self.evaluators and not self.bulk_evaluators:
            logger.warning(
                "No evaluators were defined - this run will produce samples only"
            )

        self.attention = []
        self.compression = getattr(self.hparams, "compression", False)
        if self.compression:
            self.compression_model = self.hparams.compression_model(
                run_opts={"device": self.device}
            )
            self.modules.model.compression_model = self.compression_model

    def on_evaluate_start(self, stage, epoch):
        """Invoked when evaluation starts

        Arguments
        ---------
        stage : sb.Stage
            One of sb.Stage.TRAIN, sb.Stage.VALID, or sb.Stage.TEST.
        epoch : int
            The currently-starting epoch. This is passed
            `None` during the test stage.
        """
        self.stage = stage
        self.epoch = epoch
        self.output_folder = self.get_output_folder(stage, epoch)
        self.samples_folder = self.output_folder / "samples"
        self.samples_folder.mkdir(parents=True, exist_ok=True)
        logger.info(
            "Starting evaluation, results will be saved in %s",
            self.output_folder,
        )
        self.create_reports()
        self.modules.model.show_inference_progress = False
        self.item_ids = []
        details_keys = list(self.evaluators.keys()) + list(
            self.bulk_evaluators.keys()
        )
        self.details = {evaluator_key: [] for evaluator_key in details_keys}
        self.sample_text = []
        self.sample_file_names = []
        self.ref_file_names = []
        if hasattr(self.modules, "vocoder"):
            vocoder_to_device(self.modules.vocoder, self.device)

    def get_output_folder(self, stage, epoch):
        """Computes the output folder of evaluation results
        for the specified stage and epoch.

        If the folder does not exist, it will be created.

        Arguments
        ---------
        stage : sb.Stage
            One of sb.Stage.TRAIN, sb.Stage.VALID, or sb.Stage.TEST.
        epoch : int
            The currently-starting epoch. This is passed
            `None` during the test stage.

        Returns
        -------
        output_folder : pathlib.Path
            the output folder for the stage (and, if given, the epoch)
        """
        output_folder = (
            Path(self.hparams.output_folder) / "eval" / stage.name.lower()
        )
        if epoch is not None:
            output_folder = output_folder / str(epoch)
        output_folder.mkdir(parents=True, exist_ok=True)
        return output_folder

    def on_evaluate_end(self):
        """Invoked when evaluation ends: runs the bulk evaluators
        and writes the summary report"""
        self.evaluate_bulk()
        self.write_summary()
        logger.info("Evaluation done")

    def create_reports(self):
        """Creates report files and report writers"""
        self.report_files = {}
        self.report_writers = {}
        for evaluator_key in self.enabled_evaluators:
            columns = self.get_report_columns(evaluator_key)
            file_name = self.output_folder / f"{evaluator_key}.csv"
            report_file = open(file_name, "w")
            self.report_files[evaluator_key] = report_file
            writer = csv.DictWriter(report_file, columns)
            writer.writeheader()
            self.report_writers[evaluator_key] = writer

    def get_report_columns(self, evaluator_key):
        """Returns the columns for the specified evaluator

        Arguments
        ---------
        evaluator_key : str
            the identifier of the evaluator

        Returns
        -------
        columns : list[str]
            a list of column headers
        """
        bogus_wavs = torch.randn(2, 10000, device=self.device)
        bogus_length = torch.tensor([1.0, 1.0], device=self.device)
        if evaluator_key in self.evaluators:
            evaluator = self.evaluators[evaluator_key]
            result = evaluator.evaluate(
                wavs=bogus_wavs,
                length=bogus_length,
                text=["BOGUS"] * len(bogus_wavs),
                wavs_ref=bogus_wavs,
                length_ref=bogus_length,
            )
        else:
            bogus_file_name = self.output_folder / "bogus.wav"
            evaluator = self.bulk_evaluators[evaluator_key]
            sb.dataio.dataio.write_audio(
                str(bogus_file_name),
                bogus_wavs[0].cpu(),
                samplerate=self.hparams.model_sample_rate,
            )
            result = evaluator.evaluate_files(
                file_names=[bogus_file_name],
                text=["BOGUS"],
                file_names_ref=[bogus_file_name],
            )

        return ["uttid"] + list(result.details.keys())

    def evaluate_batch(self, batch):
        """Runs evaluation on a single batch of speech

        Arguments
        ---------
        batch : speechbrain.dataio.batch.PaddedBatch
            the batch to be evaluated"""
        with torch.no_grad():
            batch = batch.to(self.device)
            tokens, tokens_length = batch.tokens
            infer_out = self.modules.model.infer(
                input_tokens=tokens, input_length=tokens_length
            )
            wav = self.create_waveform_fn(infer_out.audio, infer_out.length)
            self.save_samples(batch, wav, infer_out.length)
            self.item_ids.extend(batch.uttid)
            for evaluator_key, evaluator in self.evaluators.items():
                result = evaluator.evaluate(
                    wavs=wav,
                    length=infer_out.length,
                    text=batch.label_norm_eval,
                    wavs_ref=batch.sig.data,
                    length_ref=batch.sig.lengths,
                    sample_rate_ref=self.hparams.sample_rate,
                    sample_rate=self.hparams.model_sample_rate,
                )
                details = undo_batch(result.details)
                self.write_result(evaluator_key, batch.uttid, details)
                self.details[evaluator_key].extend(details)

    def evaluate_bulk(self):
        """Runs all configured bulk evaluators, which evaluate a directory
        of files - rather than one file at a time"""
        for evaluator_key, evaluator in self.bulk_evaluators.items():
            result = evaluator.evaluate_files(
                file_names=self.sample_file_names,
                text=self.sample_text,
                file_names_ref=self.ref_file_names,
            )
            # unbatch first so self.details holds one dictionary per item,
            # matching the per-batch path in evaluate_batch
            details = undo_batch(result.details)
            self.details[evaluator_key].extend(details)
            self.write_result(evaluator_key, self.item_ids, details)

    def write_result(self, evaluator_key, uttid, details):
        """Outputs the result details to the report for the specified evaluator

        Arguments
        ---------
        evaluator_key : str
            The evaluator key
        uttid : list
            The list of utterance IDs
        details : list
            a list of evaluation details, one dictionary per item
        """
        writer = self.report_writers[evaluator_key]
        for item_id, details_item in zip(uttid, details):
            report_details = {
                "uttid": item_id,
                **details_item,
            }
            writer.writerow(ascii_only(flatten(report_details)))
        self.report_files[evaluator_key].flush()

    def save_samples(self, batch, wav, length):
        """Saves the samples generated by the TTS system

        Arguments
        ---------
        batch : speechbrain.dataio.batch.PaddedBatch
            the batch being evaluated
        wav : torch.Tensor
            the waveform
        length : torch.Tensor
            relative lengths
        """
        wav_length_abs = (length * wav.size(1)).int()
        for item_id, infer_wav, wav_length in zip(
            batch.uttid, wav, wav_length_abs
        ):
            file_name = str(self.samples_folder / f"{item_id}_pred.wav")
            infer_wav_cut = infer_wav[: wav_length.item()].cpu()
            sb.dataio.dataio.write_audio(
                file_name,
                infer_wav_cut,
                samplerate=self.hparams.model_sample_rate,
            )
            self.sample_file_names.append(file_name)

    def write_summary(self):
        """Outputs summarized statistics"""
        summary = self.compute_summary()
        file_name = self.output_folder / "summary.json"
        with open(file_name, "w") as output_file:
            json.dump(summary, output_file, indent=4)

    def compute_summary(self):
        """Computes the summarized statistics"""
        return {
            f"{evaluator_key}_{stat_key}": value
            for evaluator_key in self.enabled_evaluators
            if evaluator_key in self.details
            for metric_key in self.hparams.eval_summary[evaluator_key][
                "descriptive"
            ]
            for stat_key, value in descriptive_statistics(
                items=self.details[evaluator_key], key=metric_key,
            ).items()
        }


def flatten(value):
    """Converts tensor values in a dictionary to Python scalars

    Arguments
    ---------
    value : dict
        the dictionary to flatten

    Returns
    -------
    result : dict
        a flattened dictionary
    """
    return {
        key: item_value.item() if torch.is_tensor(item_value) else item_value
        for key, item_value in value.items()
    }


RE_NON_ASCII = re.compile(r"[^\x00-\x7F]+")


def ascii_only(values):
    """Removes non-ASCII characters"""
    return {
        key: RE_NON_ASCII.sub("", value) if isinstance(value, str) else value
        for key, value in values.items()
    }


def descriptive_statistics(items, key):
    """Computes descriptive statistics for the summary

    Arguments
    ---------
    items : list
        a list of dictionaries with metric values for each item
    key : str
        the metric key for which statistics will be computed

    Returns
    -------
    stats : dict
        the mean, standard deviation, min, max, median, Q1, Q3 and IQR
        of the metric, keyed as "{key}_{stat}"
    """
    values = torch.tensor([item[key] for item in items])
    quantiles = torch.tensor([0.25, 0.5, 0.75])
    q1, median, q3 = values.quantile(quantiles)
    stats = {
        "mean": values.mean(),
        "std": values.std(),
        "min": values.min(),
        "max": values.max(),
        "median": median,
        "q1": q1,
        "q3": q3,
        "iqr": q3 - q1,
    }
    return {
        f"{key}_{stat_key}": value.item() for stat_key, value in stats.items()
    }
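For reference, a small self-contained sketch of how the helper functions above compose in the report pipeline; the import path, the metric name `utmos`, and the values are assumptions for illustration, not part of the recipe:

```python
import torch

# hypothetical import path; assumes evaluate.py is importable as a module
from evaluate import ascii_only, descriptive_statistics, flatten

# per-item details, shaped like an evaluator's output after undo_batch
items = [
    {"utmos": torch.tensor(3.8), "txt": "héllo"},
    {"utmos": torch.tensor(4.1), "txt": "world"},
]

# what write_result emits as one CSV row per item
rows = [
    ascii_only(flatten({"uttid": f"utt{i}", **item}))
    for i, item in enumerate(items)
]
print(rows)  # [{'uttid': 'utt0', 'utmos': 3.79..., 'txt': 'hllo'}, ...]

# what compute_summary aggregates into summary.json, per metric
flat = [flatten(item) for item in items]
print(descriptive_statistics(flat, key="utmos"))
# {'utmos_mean': 3.95..., 'utmos_std': ..., 'utmos_median': ..., 'utmos_iqr': ...}

# the evaluator itself is driven by the recipe roughly as:
#   evaluator.on_evaluate_start(stage=sb.Stage.TEST, epoch=None)
#   for batch in dataloader:
#       evaluator.evaluate_batch(batch)
#   evaluator.on_evaluate_end()
```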