
processing large data

Posted: Wed Feb 03, 2021 11:57 am
by bsilva
How can I optimize the processing time?
For an array of 996 x 224, processing takes ca. 30 min on my notebook.
Could you give me some hints on how to reduce that processing time?
- For instance, does the number of wavelengths used as input affect the processing time?
What else could I try on the input side, or what should I change in my computer's configuration (more RAM, more swap)?
Kind regards,
Brenner

Re: processing large data

Posted: Thu Feb 04, 2021 10:10 pm
by fsteinmetz
Hi Brenner,

Indeed, processing is slower with hyperspectral data. The number of wavelengths provided as output (bands_rw) can make a difference. The bands used for atmospheric correction (bands_corr and bands_oc) will make the most significant difference, but changing these bands will also affect the results.
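For reference, a minimal sketch of how these parameters might be passed, assuming the entry points (run_atm_corr, Level1, Level2) and keyword parameters from the Polymer README; the file names and band lists here are hypothetical, so check them against your installed version:

```python
# Hedged sketch: entry points and keywords assumed from the Polymer README;
# input/output paths and band lists below are hypothetical examples.
from polymer.main import run_atm_corr, Level1, Level2

run_atm_corr(
    Level1('input_product'),            # hypothetical input product path
    Level2(filename='output.nc'),       # hypothetical output file
    bands_rw=[443, 490, 560, 665],      # fewer output bands -> less work
    bands_corr=[443, 490, 560, 665],    # bands used to fit the correction:
    bands_oc=[443, 490, 560, 665],      # changing these alters the results
)
```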

But the easiest way to make the processing faster is to parallelize it across the local CPU cores (multiprocessing=-1). Given the size of your array, the blocksize should then be on the order of 100 x 100.
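To illustrate why this helps, here is a generic sketch (not Polymer's internals) of block-wise processing: the array is cut into roughly 100 x 100 tiles that can be handed to worker processes independently. The function names and the per-block computation are made up for the example:

```python
import numpy as np

def process_block(args):
    """Placeholder per-block computation (hypothetical: doubles the values)."""
    i, j, block = args
    return i, j, block * 2.0

def iter_blocks(arr, bs=100):
    """Yield (row, col, tile) pieces of roughly bs x bs from a 2D array."""
    rows, cols = arr.shape
    for i in range(0, rows, bs):
        for j in range(0, cols, bs):
            yield i, j, arr[i:i + bs, j:j + bs]

def process_blockwise(arr, bs=100, map_func=map):
    """Process arr tile by tile and reassemble the result.

    Pass a parallel map (e.g. multiprocessing.Pool().map) as map_func
    to spread the tiles over the local CPU cores.
    """
    out = np.empty_like(arr)
    for i, j, res in map_func(process_block, list(iter_blocks(arr, bs))):
        out[i:i + res.shape[0], j:j + res.shape[1]] = res
    return out
```

With `map_func=multiprocessing.Pool().map` the tiles are distributed across all local CPU cores (guard that call with `if __name__ == "__main__":` on platforms that spawn processes). A 996 x 224 array at bs=100 yields 10 x 3 = 30 tiles, so each core stays busy.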

I hope this will help,
Kind regards,
François