Exception in thread "main" java.lang.OutOfMemoryError: Java heap space

I'm getting the following error when trying to run MEGANizer. Do you have any suggestions?

Version MEGAN Community Edition (version 6.21.7, built 23 Jun 2021)
Author(s) Daniel H. Huson
Copyright © 2021 Daniel H. Huson. This program comes with ABSOLUTELY NO WARRANTY.
Java version: 14.0.1
Functional classifications to use: EC, EGGNOG, GTDB, INTERPRO2GO, SEED
Loading ncbi.map: 2,302,807
Loading ncbi.tre: 2,302,811
Loading ec.map: 8,081
Loading ec.tre: 8,085
Loading eggnog.map: 30,875
Loading eggnog.tre: 30,986
Loading gtdb.map: 240,103
Loading gtdb.tre: 240,107
Loading interpro2go.map: 13,894
Loading interpro2go.tre: 28,869
Loading seed.map: 979
Loading seed.tre: 980
Meganizing: C7O_metat_diamond.daa
Meganizing init
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at megan/megan.daa.DAAReferencesAnnotator.apply(DAAReferencesAnnotator.java:71)
at megan/megan.daa.Meganize.apply(Meganize.java:73)
at megan/megan.tools.DAAMeganizer.run(DAAMeganizer.java:269)
at megan/megan.tools.DAAMeganizer.main(DAAMeganizer.java:65)
slurmstepd-cpu004: error: *** JOB 2389136 ON cpu004 CANCELLED AT 2022-04-23T13:05:23 DUE TO TIME LIMIT ***

These are the parameters:
27849936649 Apr 22 08:45 C7O_metat_diamond.daa

#!/bin/bash
#SBATCH -J megan_sbatch
#SBATCH -o filename_%j.txt
#SBATCH -e filename_%j.err
#SBATCH -c 30 # Number of Cores per Task
#SBATCH --time=24:00:00
#SBATCH --mem=128G

With Slurm jobs, your ~/.bashrc is not sourced, so the shell is not initialized for Conda environments and you will get an error message.

Add this line before activating the environment:

eval "$(conda shell.bash hook)"

Activate the conda environment

conda activate mybioconda

Run megan

daa-meganizer -i C7O_metat_diamond.daa -mdb megan-map-Feb2022.db
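For reference, here is the full job script with those pieces in place (a sketch using the names and paths above):

```shell
#!/bin/bash
#SBATCH -J megan_sbatch
#SBATCH -o filename_%j.txt
#SBATCH -e filename_%j.err
#SBATCH -c 30            # number of cores per task
#SBATCH --time=24:00:00
#SBATCH --mem=128G

# ~/.bashrc is not sourced in Slurm jobs, so initialize Conda by hand
eval "$(conda shell.bash hook)"

# activate the conda environment
conda activate mybioconda

# run megan
daa-meganizer -i C7O_metat_diamond.daa -mdb megan-map-Feb2022.db
```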

Thank you!
Brooke

Heya,

I think I've seen this before. You tell Slurm how much memory you would like to reserve by adding the line
#SBATCH --mem=128G

But you probably don't tell your software how much memory to use.
Since these tools are often Java based, I'll just assume this one is the same.
You might be able to get it to work if you add an -Xmx flag that tells the software how much memory it has access to.

java -Xmx115G -jar daa-meganizer -i C7O_metat_diamond.daa -mdb megan-map-Feb2022.db

Or sometimes it also works to just add the -Xmx part on the command line:

daa-meganizer -Xmx115G -i C7O_metat_diamond.daa -mdb megan-map-Feb2022.db

Give that a shot, and if it still fails we can try something else haha

Edit the file megan.vmoptions to set the amount of memory that MEGAN can use.
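The vmoptions file is a plain list of JVM options, one per line (its exact name and location depend on how MEGAN was installed). Assuming you want to leave the JVM most of the 128 GB Slurm reservation, the relevant line might look like:

```
-Xmx115G
```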

Usually, this error is thrown when the Java Virtual Machine cannot allocate an object because it is out of memory, and no more memory could be made available by the garbage collector.

Therefore you pretty much have two options:

  • Increase the amount of memory your program is allowed to use with the -Xmx option (for instance, -Xmx1024m for 1024 MB)

  • Modify your program so that it needs less memory, using smaller data structures and releasing objects once they are no longer needed
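To confirm that an -Xmx setting actually took effect, you can ask the JVM how much heap it is allowed to use. A minimal sketch (not part of MEGAN, just a standalone check):

```java
// Prints the maximum heap the JVM was granted, so you can verify that an
// -Xmx flag (e.g. -Xmx115G) was actually picked up by the launcher.
public class HeapCheck {
    public static void main(String[] args) {
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.printf("Max heap: %d MB%n", maxBytes / (1024 * 1024));
    }
}
```

Run it with the same flags you pass to the real job, e.g. `java -Xmx115G HeapCheck`.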

Increasing the heap size is a temporary fix at best; the program will eventually crash again somewhere else. To avoid these issues, write memory-efficient code.

  • Use local variables wherever possible.
  • Make sure you select the correct object type (e.g. choosing between String, StringBuffer and StringBuilder)
  • Use a good code structure for your program (e.g. static vs. non-static variables)
  • Apply any other optimizations that fit your code
  • Try moving to multithreading
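The String vs. StringBuilder point above can be seen in a small sketch: repeated `+` on a String copies the whole buffer on every iteration, while a StringBuilder grows in place and produces far less garbage on the heap.

```java
// Both methods build the same text; only the allocation behaviour differs.
public class ConcatDemo {
    static String withString(int n) {
        String s = "";
        for (int i = 0; i < n; i++) s += i;        // copies the buffer each time: O(n^2)
        return s;
    }

    static String withBuilder(int n) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < n; i++) sb.append(i);  // grows in place: amortized O(n)
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(withString(10).equals(withBuilder(10)));  // true
    }
}
```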