I’m facing a performance issue when running a JavaScript script in SeaTable that processes a large number of rows (up to 10,000 or more). The script performs multiple operations, such as filtering, adding, and deleting rows, based on certain conditions. However, when handling this volume, the browser often freezes, and I need to restart it. Sometimes the script eventually completes, but it takes around 15 minutes or longer, likely due to memory or resource constraints.
Here’s a brief overview of what the script does (a simplified sketch follows the list):
1. Filters rows with more than one value in a specific column.
2. Splits each value into separate rows.
3. Adds the new rows and deletes the original rows.
4. Marks processed rows to avoid re-processing.
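Roughly, the per-row logic looks like this (simplified; 'Table1', 'Tags' and 'Processed' are placeholder names, and the real conditions are more involved):

```javascript
// Simplified sketch of the current per-row approach.
// 'Table1', 'Tags' and 'Processed' are placeholder names.
const table = base.getTableByName('Table1');
const rows = base.getRows(table, 'Default View');

for (const row of rows) {
  if (row['Processed']) continue;                    // skip rows already handled
  const values = String(row['Tags'] || '')
    .split(',')
    .map(v => v.trim())
    .filter(Boolean);
  if (values.length <= 1) continue;                  // only rows with multiple values

  // One request per new row ...
  for (const value of values) {
    await base.appendRow(table, { 'Tags': value, 'Processed': true });
  }
  // ... plus one more request to delete the original row.
  await base.deleteRow(table, row);
}
```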
I suspect the issue could be due to the high volume of async operations or browser resource limitations. Here are my questions:
• Are there known strategies or best practices to optimize scripts for handling large data sets in SeaTable?
• Would using batch processing or limiting async operations help? If so, how might that be implemented within SeaTable’s JavaScript API?
• Has anyone else encountered this browser memory issue with large SeaTable scripts?
The best way to optimize this is to reduce the number of requests you send to SeaTable. I see two ways to do that:
Multiple row deletions at once
Currently you delete every row with a separate request (await base.deleteRow). Instead, you could collect the row._id values in an array and then delete all of those rows with a single query command:
```javascript
await base.query('DELETE FROM table_name [WhereClause]')
```
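A sketch of that idea (assuming a SeaTable version where base.query accepts DELETE statements and the built-in _id column can be used in the WHERE clause; the table name and filter condition are placeholders):

```javascript
const table = base.getTableByName('Table1');          // placeholder table name
const rows = base.getRows(table, 'Default View');

// Collect the IDs of all rows that should go away ...
const idsToDelete = rows
  .filter(row => String(row['Tags'] || '').includes(','))   // placeholder condition
  .map(row => row._id);

// ... and delete them with one single request instead of one request per row.
if (idsToDelete.length > 0) {
  const idList = idsToDelete.map(id => `'${id}'`).join(', ');
  await base.query(`DELETE FROM Table1 WHERE _id IN (${idList})`);
}
```

With many thousands of IDs it can make sense to split the list into chunks of a few hundred per statement, but even then you end up with a handful of requests instead of thousands.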
Update multiple rows at once
The same logic applies to row updates. Instead of updating row by row, collect the changes and then update multiple rows in a single request.
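Again a sketch, assuming UPDATE statements work through base.query in your version ('Processed' is a placeholder checkbox column and the filter condition is a placeholder):

```javascript
const table = base.getTableByName('Table1');           // placeholder table name
const rows = base.getRows(table, 'Default View');

// Collect the IDs of all rows that were handled ...
const processedIds = rows
  .filter(row => String(row['Tags'] || '').includes(','))   // placeholder condition
  .map(row => row._id);

// ... and mark them all as processed with a single request.
if (processedIds.length > 0) {
  const idList = processedIds.map(id => `'${id}'`).join(', ');
  await base.query(`UPDATE Table1 SET Processed = true WHERE _id IN (${idList})`);
}
```

This works whenever many rows receive the same value. If every row needs a different value, group the rows by target value and send one UPDATE per group, which is still far fewer requests than one per row.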