I installed SeaTable at my company a few months ago, and we’ve been slowly migrating everything away from Google Sheets and getting it structured. The problems started after importing 40.000 emails into a table: even though big data mode was enabled, this made the entire base unusable (opening it resulted in a 504 error). Splitting the data up and loading only the first quarter also degraded performance drastically. While the base still opens, it now takes ~30 seconds to load, and PDF creation is so slow that you have to wait for it to throw an error and then link the document manually (it does still get created).
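For context, the rows were written into the table in batches, roughly along these lines. This is a simplified sketch, not the actual script: the helper names (`chunked`, `import_rows`) and the batch size are illustrative, and `base` stands for an authenticated client from the seatable-api Python package, whose `batch_append_rows` call is assumed here.

```python
def chunked(rows, size=1000):
    """Yield successive batches of at most `size` rows."""
    for i in range(0, len(rows), size):
        yield rows[i:i + size]

def import_rows(base, table_name, rows, size=1000):
    """Append rows batch by batch instead of in one huge request.

    `base` is assumed to be an authenticated seatable-api Base object;
    the batch size of 1000 is just an illustrative choice.
    """
    for batch in chunked(rows, size):
        base.batch_append_rows(table_name, batch)
```

Even with batching like this, the base became unusable once the row count grew, which is what prompts my question below.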
I’ve read that the minimum requirements are 4 cores and 8 GB RAM, so I figured a server with 8 cores and 32 GB should be plenty, but SeaTable doesn’t seem to use more than 8 GB on my Hetzner VPS anyway.
My question is: is it even possible to keep that much data (40.000 entries with about 50 columns each, including the raw HTML, i.e. a large amount of data per row) inside a base without it either loading endlessly or timing out once there are too many entries, and if so, how can I achieve that?
Big data mode is active inside the base, and the table containing the 11.000 entries only has big data views, but this still didn’t improve anything.
I’m happy to provide logs if needed, but the “slow_logs” folder is entirely empty, and glancing through the other logs I didn’t find anything suspicious. Maybe you can point me in the right direction here.