New benchmarking study for taxonomic profiling methods

Hello,
I wanted to highlight a new preprint we’ve made available, which features MEGAN-LR among other methods:

It evaluates the performance of several methods for long read shotgun metagenomic datasets (including PacBio HiFi and ONT data).

The methods include:

  • Kraken2
  • Bracken
  • Centrifuge
  • MetaPhlAn3
  • MEGAN-LR using DIAMOND to NCBI nr
  • MEGAN-LR using minimap2 to NCBI nt
  • MetaMaps
  • MMseqs2
  • BugSeq

Relevant to this user base: MEGAN-LR using DIAMOND to NCBI nr was among the top-performing methods (along with BugSeq), with extremely high precision and solid recall.

One important point: we found the default value of the minSupportPercent parameter (0.05) is too conservative. It yields high precision (no false positives) but lower recall (several false negatives; low-abundance species are missed). After tuning, we found the optimal value to be 0.01, which increases recall (particularly at lower abundances) without reducing precision. We strongly recommend using this setting, which provides high-confidence detection down to the 0.04% relative abundance level. It is now the default setting for the PacBio taxonomic profiling pipeline available at: https://github.com/PacificBiosciences/pb-metagenomics-tools.

And just as a reminder, setting minSupportPercent to 0 will report ALL read assignments, which can contain thousands of false positives at ultra-low abundances (<0.01%), much like the outputs of the short-read methods (Kraken2, Bracken, Centrifuge). Depending on your use case, this can also be useful.
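To make the effect of this parameter concrete, here is a small conceptual sketch (hypothetical helper names, not MEGAN's actual implementation; MEGAN in fact pushes low-support reads up the taxonomy rather than simply discarding them) of how a minSupportPercent-style threshold trades recall against false positives:

```python
def filter_by_min_support_percent(read_counts, min_support_percent):
    # Conceptual sketch: drop taxa whose assigned-read count falls below
    # min_support_percent of the total assigned reads.
    total = sum(read_counts.values())
    threshold = total * min_support_percent / 100.0
    return {taxon: n for taxon, n in read_counts.items() if n >= threshold}

# Hypothetical profile with 100,000 assigned reads in total.
counts = {"species_A": 90000, "species_B": 9955,
          "species_low": 40, "species_noise": 5}

kept_default = filter_by_min_support_percent(counts, 0.05)  # threshold = 50 reads
kept_tuned = filter_by_min_support_percent(counts, 0.01)    # threshold = 10 reads
# species_low (0.04% relative abundance) is dropped at 0.05 but kept at 0.01,
# while setting the parameter to 0 would keep everything, including species_noise.
```

Under these made-up counts, only the tuned 0.01 setting retains the 0.04%-abundance species while still excluding the noise taxon, which mirrors the recall gain described above.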

The abstract is copied below:

Long-read shotgun metagenomic sequencing is gaining in popularity and offers many advantages over short-read sequencing. The higher information content in long reads is useful for a variety of metagenomics analyses, including taxonomic profiling. The development of long-read-specific tools for taxonomic profiling is accelerating, yet there is a lack of consensus regarding their relative performance. Here, we perform a critical benchmarking study using five long-read methods and four popular short-read methods. We applied these tools to several mock community datasets generated using Pacific Biosciences (PacBio) HiFi or Oxford Nanopore Technology (ONT) sequencing, and evaluated their performance based on read utilization, detection metrics, and relative abundance estimates. Our results show that long-read methods generally outperformed short-read methods. Short-read methods (including Kraken2, Bracken, Centrifuge, and MetaPhlAn3) produced many false positives (particularly at lower abundances), required heavy filtering to achieve acceptable precision (at the cost of reduced recall), and produced inaccurate abundance estimates. By contrast, several long-read methods displayed very high precision and acceptable recall without any filtering required, including BugSeq and MEGAN-LR using either translation alignments (DIAMOND to NCBI nr) or nucleotide alignments (minimap2 to NCBI nt). Furthermore, in the PacBio HiFi datasets these long-read methods detected all species down to the 0.1% abundance level with high precision. Other long-read methods, such as MetaMaps and MMseqs2, required moderate filtering to reduce false positives and achieve a suitable balance between precision and recall. We found read quality affected performance for methods relying on protein prediction or exact k-mer matching, and these methods performed better with PacBio HiFi datasets. We also found that long-read datasets with a large proportion of shorter reads (<2 kb length) resulted in lower precision and worse abundance estimates, relative to length-filtered datasets. Finally, for a given mock community we found that the long-read datasets produced significantly better results than short-read datasets, demonstrating clear advantages for long-read metagenomic sequencing. Our critical assessment of available methods provides recommendations for current research using long reads and establishes a baseline for future benchmarking studies.
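The detection metrics mentioned in the abstract (precision and recall against a known mock community) can be sketched as follows; this is an illustrative example with made-up species names, not the paper's actual evaluation code:

```python
def detection_metrics(expected, detected):
    # Species-level precision and recall for a profiler's calls
    # against the known composition of a mock community.
    expected, detected = set(expected), set(detected)
    tp = len(expected & detected)   # true positives: correctly detected species
    fp = len(detected - expected)   # false positives: spurious calls
    fn = len(expected - detected)   # false negatives: missed species
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

mock_community = {"sp1", "sp2", "sp3", "sp4"}
profiler_calls = {"sp1", "sp2", "sp3", "spX"}  # one missed species, one spurious call
precision, recall = detection_metrics(mock_community, profiler_calls)
# precision = 0.75, recall = 0.75
```

Heavy filtering of a short-read profile removes spurious calls (raising precision) but can also remove genuine low-abundance species (lowering recall), which is exactly the trade-off the benchmarking evaluates.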


That is very good to hear, thank you!
I will change the default minSupportPercent to 0.01…