Maximum size reached (200 MB), deleting doesn't help but copy works fine

So yesterday I was trying to fix an error in a base that has about 20,000 columns without any files or pictures. I tried to fix the error by copying a table within the base with around 1,000 entries, then changing things to see if it worked and deleting the copied table afterwards. I did that several times until SeaTable told me that I had reached the maximum size of 200 MB (even though I had the same base as at the beginning of the day).

I then deleted some data and cleaned the whole base of whatever I could spare, but I still get the error, even though the base should now be considerably smaller than before (when I did not get that error). Also, when I copy the base, the copy works fine (so no 200 MB there, I guess).

So my question is: is there some kind of internal memory or cache that I overloaded with my behaviour? Can it be cleaned up? And where can I see the size of my base so this won’t happen again?

I am using SeaTable Cloud in a Chrome browser; I hope I posted all the information needed!


Please check the file management (“Dateiverwaltung”), including the recycle bin (“Papierkorb”).


By the way, 200 MB is far too little. We are currently setting up a picture base, and it could be used just fine if the base could be bigger.

I don’t see anything in file management (Dateiverwaltung) or the recycle bin (Papierkorb), it’s all empty :frowning:

When you “delete” files from cells, you do NOT delete the files. You only delete references to the files. The files are still saved in the base.

To remove files from the base, open file management (German: Dateiverwaltung) and remove the images there. When you delete the files in file management, you actually delete the files.

By the way, you can find all of this info in the SeaTable Docs at Wie man Dateien dauerhaft entfernt - SeaTable (“How to permanently remove files”).
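The distinction above (cell references vs. the files themselves) can be sketched with a small model. This is purely illustrative and assumed behaviour, not SeaTable's actual implementation; all class and method names here are hypothetical:

```python
# Hypothetical model: clearing a cell removes only the *reference* to a
# file; the file itself stays in the base's storage until it is deleted
# in file management.

class BaseModel:
    def __init__(self):
        self.file_storage = {}   # filename -> bytes, counts toward base assets
        self.cells = {}          # (row, column) -> filename reference

    def attach_file(self, row, column, name, data):
        self.file_storage[name] = data
        self.cells[(row, column)] = name

    def clear_cell(self, row, column):
        # Removes only the reference; the file is still stored.
        self.cells.pop((row, column), None)

    def delete_in_file_management(self, name):
        # Only this actually frees the storage.
        self.file_storage.pop(name, None)

base = BaseModel()
base.attach_file(0, "Attachment", "photo.jpg", b"\xff\xd8")
base.clear_cell(0, "Attachment")
print("photo.jpg" in base.file_storage)   # still stored: True
base.delete_in_file_management("photo.jpg")
print("photo.jpg" in base.file_storage)   # now gone: False
```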

This is a misunderstanding: I don’t use any files in my base/table, as I wrote in my first post, sorry if that was unclear. The problem appeared just by copying and deleting tables with ordinary data fields (string/int/functions) within a base.

20.000 columns? I guess you mean rows.

The first thing you should do is take a look at the base status. It will tell you how many rows your base contains.

This said: I am pretty sure that you have files in your base. Without files, it is quite hard to hit the 200 MB limit. At the risk of repeating myself: have a look in file management to see whether there are any files.

The 200 MB limit does not include attachment storage. It is the size of the pure row data in your base. It is a mechanism to protect the system from overload.

Currently there is no good way to remove the cached information quickly, because detecting whether a base exceeds the 200 MB limit requires a lot of resources.

The only way to remove the cache is to not visit the base for 24 hours; then the cache will be cleaned.
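The eviction rule described here can be sketched as a simple idle-timeout cache. This is an assumed model of the behaviour, not SeaTable's server code; note that, as the later posts in this thread suggest, automation runs also count as an access and reset the timer:

```python
# Illustrative sketch: a base is dropped from the in-memory cache once it
# has not been accessed for a fixed idle period (24 hours in the real
# system, per the post above).
import time

TTL_SECONDS = 24 * 60 * 60  # 24 hours

class BaseCache:
    def __init__(self, ttl=TTL_SECONDS):
        self.ttl = ttl
        self.last_access = {}  # base_id -> timestamp of last access

    def touch(self, base_id, now=None):
        # Any access (a user opening the base, or an automation run)
        # resets the idle timer.
        self.last_access[base_id] = time.time() if now is None else now

    def evict_idle(self, now=None):
        now = time.time() if now is None else now
        for base_id, ts in list(self.last_access.items()):
            if now - ts >= self.ttl:
                del self.last_access[base_id]

cache = BaseCache(ttl=10)        # short TTL for demonstration
cache.touch("base-1", now=0)
cache.evict_idle(now=5)          # accessed 5 s ago -> kept
print("base-1" in cache.last_access)   # True
cache.evict_idle(now=20)         # idle for 20 s -> evicted
print("base-1" in cache.last_access)   # False
```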

Dear @daniel.pan, so the 200 MB is only for the data in the rows? Can we upload “unlimited” pictures and documents to the base?
This would be great, as I am currently trying to create an Intranet page for company information with SeaTable and use the picture gallery widely for optimal visibility.

Pictures and documents are files that are stored outside of the base. Only the links to them are recorded in the base. A base itself is a file (like an Excel file) when stored on disk.

The base can’t exceed 200 MB when stored as a file; otherwise, saving it to disk and loading it into memory would take too much time.

To answer your question: yes, you can save “unlimited” pictures and documents in a base.

On the other hand, if the base itself is too large, you can also use the big data storage feature and archive some rows to the big data storage.


@daniel.pan thank you very much for the clarification. This information is very helpful for my “Intranet” project.

@rdb As I mentioned before, I don’t have any files in the base. Also no apps, statistics or anything else.

@daniel.pan I did as you said and did not log in for 4 days, but the problem is still the same. I do have daily automations running though, I guess I have to turn them off as well and wait again for a day or two?

Yes, you have to turn them off. We will think of a better way to handle this later.

Unfortunately, waiting for two days did not work / change anything at all. Any other ideas what I could do? I mean, I could just build everything around the copied version, as it seems to be fine, but I’d rather find a way to fix the “old” one …

Cheers Tim

I don’t know why it does not work. The server should remove the document from its memory when it has not been accessed for 24 hours.

You can work on the copied version for now. We will improve the mechanism later.