Bases broken after domain change

Hello everyone!

I’m writing this because I’m in a bit of a pickle and would love some suggestions or advice.

Yesterday I changed my domain name, which went very well except that my bases in SeaTable still use the old domain in their URL. This causes CORS errors when trying to load a base, and it fails entirely. I can access the login and main pages fine, but when I select a base to load, even a newly created one, it still uses the old domain name.

I changed the SEATABLE_SERVER_HOSTNAME variable in my docker-compose (using the Community Edition Docker setup) to the new domain, but it somehow didn’t take effect.

Do I need to manually change a config somewhere?

Hello, welcome to the SeaTable Forum!

Changing your domain is always a little tricky! Since you can open the interface, the domain change itself was successful. The bases cannot be opened because the dtable-server address is wrong.

Did you modify the dtable-server address accordingly in the dtable_web_settings.py?
If your domain is “www.example.com” then the dtable-server URL should be “www.example.com/dtable-server”.

Read this article for details:
https://manual.seatable.io/config/dtable_web_settings/
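For reference, the relevant line in dtable_web_settings.py looks something like this (the domain is a placeholder; adapt it to your own setup):

```python
# dtable_web_settings.py — illustrative excerpt, not your exact config.
# The dtable-server URL must use the new domain, with the /dtable-server path.
DTABLE_SERVER_URL = 'https://www.example.com/dtable-server/'
```

After changing it, restart the containers so the setting is picked up.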


That worked wonderfully, I wasn’t aware of those options!
Thank you so much for the ultra-speedy help! :)

Also, just to avoid opening another topic: the images inside the bases (and their thumbnails) are inaccessible/broken. Should I just wait for some sort of automated maintenance script from SeaTable, or should I change something manually?

You are welcome!

SeaTable stores the images in the base’s assets and indexes their URLs in the tables. In your case, the URLs are no longer valid because the domain has changed. These URLs will not be regenerated automatically.
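To illustrate why they break (the exact path layout here is an assumption, not taken from your instance): an image cell holds a list of absolute URLs, so the old domain is baked into every one of them. Only the host part differs after a domain change:

```python
# Hypothetical image cell value as indexed in a table; the path
# structure is illustrative only.
old_cell = ["https://old.domain/workspace/1/asset/abc123/images/2021-05/photo.png"]

# The same cell with only the host rewritten to the new domain:
new_cell = [url.replace("https://old.domain", "https://new.domain")
            for url in old_cell]
print(new_cell[0])
```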

I’m afraid you’ll have to re-upload those images. Here are my suggestions:

  1. Go to Attachment Management of your base (from the base menu)
  2. Delete the broken images (from version 2.5.4, you can multi-select attachments to delete them in batch)
  3. In your base, upload those images again.

Sorry for the inconvenience, but like I said, changing the domain is tricky! I hope you don’t have too many images already in the base!

Yeah, changing domains is a pain! Hopefully I’ll never have to do it again.

Though the idea of manually re-entering all the images is kind of daunting… Do you reckon some sort of script could do it through the API (or a direct SQL edit of the table)?

You may try the “Update a row” or “Batch update rows” API requests (SeaTable API Reference) and see if it works to change the image URLs.

I would recommend changing one row first to see if it works; if it does, then change it for all the rows. Let me know if this works!
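A minimal sketch of that one-row test, assuming hypothetical names throughout (“InsectDB” table, “Photos” image column, made-up row id and token). The request body is only built and printed here; the actual call is left commented out:

```python
import json

# Hypothetical payload for the "Update a row" API request.
payload = {
    "table_name": "InsectDB",   # assumed table name
    "row_id": "a1b2c3",         # made-up row id
    "row": {
        # Only the image column needs new values after the domain change:
        "Photos": ["https://new.domain/workspace/1/asset/abc123/images/photo.png"],
    },
}

# To actually send it (needs the `requests` package, a real token and base UUID):
# import requests
# requests.put(
#     "https://new.domain/dtable-server/api/v1/dtables/<base-uuid>/rows/",
#     headers={"Authorization": "Token myverysecrettoken"},
#     json=payload,
# )
print(json.dumps(payload))
```

If the image shows up correctly after this one update, the same payload shape extends to the batch endpoint.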

Will try that, thank you so much for your patience!

You are welcome! One further tip: use the “List rows” API request to study what the image URLs look like first…

Just to let you know that I successfully managed to do it (with some help from a friend who knows Python)!
The workflow was the following, in case someone else finds themselves in the same dire straits:

  1. Get API and Base access credentials
  2. Use a GET request to export the list of rows to a file (table.json):

curl --location --request GET 'https://new.domain/dtable-server/api/v1/dtables/42c3cc5c-35db-4a26-861f-703fe35484ea/rows/?table_name=InsectDB' \
  -H 'Authorization: Token myverysecrettoken' -o table.json

  3. Transform table.json into table_updated.json using the following Python script (run as python script.py table.json table_updated.json TableName):

import sys
import json

# Load the exported rows and replace the old domain with the new one everywhere
with open(sys.argv[1]) as raw_file:
    json_obj = json.loads(
        json.dumps(json.load(raw_file)).replace("old.domain", "new.domain")
    )

# Build the payload expected by the batch-update-rows endpoint
json_result = {"table_name": sys.argv[3], "updates": []}

for item in json_obj["rows"]:
    entry = {"row": {}}
    for key, value in item.items():
        if key == "_id":
            # The row id moves to the top level of each update entry
            entry["row_id"] = value
        else:
            entry["row"][key] = value
    json_result["updates"].append(entry)

with open(sys.argv[2], "w") as final_result:
    json.dump(json_result, final_result)

  4. Use a PUT request to batch-update the rows:

    curl --location --request PUT 'https://new.domain/dtable-server/api/v1/dtables/f74e81ef-cfa0-4a12-a2e5-b1f6d087eb11/batch-update-rows/' \
      --header 'Authorization: Token myverysecrettoken' \
      --header 'Accept: application/json' \
      --header 'Content-type: application/json' \
      -d @table_updated.json

Thanks for sharing your code! Glad to hear that it worked for you.


My only concern is that the Python script simply rewrites all column values, although the only thing that needs to be changed is the image URLs.

A better solution would be to rewrite only the image URLs: remove all the other column values from table.json before transforming it into table_updated.json. The API will then change only the columns present in the updated JSON and leave all other values in the table intact.
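A small sketch of that pruning step, assuming the image column is called “Photos” (a hypothetical name; adapt the set to your table):

```python
# Hypothetical: the columns that actually hold image URLs.
IMAGE_COLUMNS = {"Photos"}

def prune(update):
    """Keep only row_id plus the image columns, so other values stay untouched."""
    return {
        "row_id": update["row_id"],
        "row": {k: v for k, v in update["row"].items() if k in IMAGE_COLUMNS},
    }

# Example update entry as produced by the script above (made-up data):
updates = [
    {"row_id": "r1", "row": {"Name": "Bee", "Photos": ["https://new.domain/a.png"]}},
]
pruned = [prune(u) for u in updates]
print(pruned)
```

Running the pruned list through the batch-update request then touches nothing but the image columns.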

This could also save you time and avoid mistakes in the table: automatic column types such as formula, link formula, and created time will not actually be rewritten, but simply ignored when you update them.

In any case, the fact that this method works at all is certainly one advantage of SeaTable over Airtable! On the one hand, Airtable cannot be self-hosted (so there is no such problem as a “domain change” anyway), and on the other hand, the Airtable API doesn’t offer as much possibility and flexibility.

Yeah, I fully agree with your comments, and that’s great information to keep in mind, but for my use case there wasn’t much need to prune the other fields, as they were static.

Thanks again for all the support and communication!

This topic was automatically closed 2 days after the last reply. New replies are no longer allowed.