Hi, I successfully run two small SeaTable server instances at two distinct locations where I regularly work. I would like them to be/stay perfectly in sync: e.g., when I have worked at one location and updated my tables, I would like to be able to initiate a sync to the other server, so that when I continue working at that second location I am up to date, and vice versa.
I found out how I can export a table and re-import it, but it then gets assigned a different internal ID, my scripts stop working, etc.
So I would really like a full synchronization to have both installs be identical.
Is there a way to do this?
Do I need to stop the source or the target server (or both) to do this properly?
Is there any existing procedure or documentation?
NB: I set up an automatic backup with restic, if that is of any help.
Thank you for your reply. I didn’t realize this was so complex. I have been doing this for many other data/service types between these two locations and it is working very well.
Just to be clear: I am not looking for an automated solution where instantly both servers are synchronized. It is fine that this is an action I initiate when I want to transition from one site to the other.
In my mind, the radical basic approach I imagined would be to stop both servers, fully synchronize all files from source to target, and restart them. This is a bit heavy, and I was hoping for a simpler approach: e.g., tell the source server to dump all data not yet written to disk, sync the files to the target (maybe somehow put the target on hold while doing this), then tell the target server to update.
I mean, in some sense the restic backup must already do the first half of that, right? It dumps what is on the source server to the backup location. Maybe I can use that as a starting point, but then I need to know how to "restore"/update/sync the backup onto the target server. I couldn't find this clearly in the documentation. E.g., does the target server need to be taken down for this? What's the command sequence? …
I hope my approach/intention is fully clear now. I welcome any comments on it (be frank, it's ok to tell me this is naive/stupid/unfeasible).
Thank you for these pointers. After some trial and error, I think I found a working solution. Attached are two scripts for anybody with a similar issue to start from. The first script, seatable-push-sync.sh, pushes from one server to its duplicate, using a local directory "seatable-sync" that is synced between both servers in either direction on each machine.
Once the synchronization is pushed from machine 1 to machine 2, on machine 2, I execute seatable-restore-sync.sh which loads the new data into seatable.
I noticed that this is not sufficient, because SeaTable keeps data in memory, so to be sure of a clean state on machine 2, I restart the containers immediately after the restore sync:
sudo docker compose down
sudo docker compose up -d
The whole process is shown below, in both directions:
This seems to work fine as a process. I have/had a few minor issues:
I wanted to use --ignore-table, as suggested by the documentation, to avoid unnecessary synchronization, but it doesn't work (I haven't followed up)
I had issues because there are different URLs/IPs (as mentioned in the documentation), but also ports that need to be specified. In particular, these need to be edited in seatable/conf/dtable_web_settings.py, and including that file in the sync messed things up. So I simply delete it from the sync, because locally on each server the settings are already correct
I followed the instructions for adjusting the URLs, but it was unclear what the proper syntax is if one needs to specify ports. See the last line of the restore script. It seems to work, but this could also be because my tables are pretty simple so far (no files or images, which I think are most affected by URLs). In particular, I wondered whether the <ip>:<port> needs to be quoted or not
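A note on the --ignore-table issue from the list above: with mariadb-dump (as with mysqldump), the option must be written as --ignore-table=<db>.<table>, i.e. with an equals sign and the database-qualified table name; the bare table names in the commented-out line of the push script are a plausible cause of the broken dumps. A small sketch of how the variable could be built (table names taken from that commented-out line; whether these exact tables are safe to skip is an assumption to verify against the SeaTable docs):

```shell
#!/bin/bash
# Build db-qualified --ignore-table flags for mariadb-dump.
# mariadb-dump requires the form --ignore-table=db_name.tbl_name.
DB="dtable_db"
TABLES="operation_log delete_operation_log session_log activities"
IGNORES=""
for t in $TABLES; do
  IGNORES="$IGNORES --ignore-table=${DB}.${t}"
done
echo "$IGNORES"
```

The resulting string can then be passed to the dtable_db dump exactly where ${IGNORES} already appears in the script.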
I hope you may find this useful.
Script 1 seatable-push-sync.sh:
#!/bin/bash
printf "\nASKING YOUR PASSWORD FOR SUDO ACCESS\n"
sudo ls &> /dev/null
# Determine which server we are on
ip_address=$(hostname -i)
ip_home="<ip1>"
ip_lab="<ip2>"
if [[ "$ip_address" == "$ip_home" ]]; then
printf "\nWe are at home\n"
BDIR="/volume1/data/opt"
RDIR="/volume1/Data/opt"
SSHCMD="ssh -J proxy-jump@somemachi.ne"
elif [[ "$ip_address" == "$ip_lab" ]]; then
printf "\nWe are at the lab\n"
BDIR="/volume1/Data/opt"
RDIR="/volume1/data/opt"
SSHCMD="ssh"  # no proxy jump needed from the lab -- adjust if your setup differs
else
printf "\nNo usable IP address found -- ABORTING!\n"
exit 1
fi
# you can copy these commands to a shell script and execute this via a cronjob.
# Beware that this method will expose your mysql password in the process list
# and shell history of the docker host.
printf "\nSTARTING SEATABLE SYNC TO LOCAL DIRECTORY\n"
#Does not work .. generates empty dump files
#IGNORES="--ignore-table operation_log --ignore-table delete_operation_log --ignore-table session_log --ignore-table activities"
IGNORES=""
SYNCNAM="seatable-sync"
LOGFILE=${BDIR}"/${SYNCNAM}.log"
source ${BDIR}/seatable-compose/.env
printf "\n mariadb dumps\n"
mkdir -p ${BDIR}/${SYNCNAM} && cd ${BDIR}/${SYNCNAM}
# use -i (not -it): a TTY would add carriage returns to the redirected dumps
sudo docker exec -i mariadb mariadb-dump -u root -p${SEATABLE_MYSQL_ROOT_PASSWORD} --opt ccnet_db ${IGNORES} > ./ccnet_db.sql
printf "\n ccnet done\n"
sudo docker exec -i mariadb mariadb-dump -u root -p${SEATABLE_MYSQL_ROOT_PASSWORD} --opt seafile_db ${IGNORES} > ./seafile_db.sql
printf "\n seafile done\n"
sudo docker exec -i mariadb mariadb-dump -u root -p${SEATABLE_MYSQL_ROOT_PASSWORD} --opt dtable_db ${IGNORES} > ./dtable_db.sql
printf "\n dtable done\n"
#NO BIG DATA YET, SO THIS PART HAS NOT BEEN INCLUDED
# force dump of big data to storage-data folder
#sudo docker exec -it seatable-server /opt/seatable/scripts/seatable.sh backup-all
printf "\n data (r)synchronization\n"
# backup files (exclude unnecessary directories)
rsync -az --exclude 'ccnet' --exclude 'logs' --exclude 'db-data' --exclude 'pids' --exclude 'scripts' \
${BDIR}/seatable-server/seatable ${BDIR}/${SYNCNAM} > ${LOGFILE}
rsync -az ${BDIR}/seatable-compose ${BDIR}/${SYNCNAM}/ >> ${LOGFILE}
# SYNCHRONIZE DIRECTORY SYNCNAM
printf "\n synchronizing to remote site\n"
rsync -avz -e "${SSHCMD}" ${BDIR}/${SYNCNAM}/ user@remote-host:${RDIR}/${SYNCNAM}/
printf "\n >>> I AM DONE !! <<<\n\n"
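Given the empty-dump problem mentioned in the issues list, one safeguard worth adding before the final rsync to the remote site is to refuse to push if any dump file is missing or empty. A minimal, self-contained sketch — the file names match the dumps the script produces; the demo directory at the bottom is only there to show the behavior and would be replaced by ${BDIR}/${SYNCNAM} in the real script:

```shell
#!/bin/bash
# Abort the push if any SQL dump is missing or empty (zero bytes).
check_dumps() {
  local dir="$1" f
  for f in ccnet_db.sql seafile_db.sql dtable_db.sql; do
    if [ ! -s "${dir}/${f}" ]; then
      echo "EMPTY OR MISSING: ${f}"
      return 1
    fi
  done
  echo "dumps look OK"
}

# quick self-test with a temporary directory
DUMPDIR=$(mktemp -d)
printf 'CREATE TABLE t (id INT);\n' > "${DUMPDIR}/ccnet_db.sql"
printf 'CREATE TABLE t (id INT);\n' > "${DUMPDIR}/seafile_db.sql"
: > "${DUMPDIR}/dtable_db.sql"   # deliberately empty
check_dumps "${DUMPDIR}" && echo "would push" || echo "would abort"
rm -rf "${DUMPDIR}"
```

In the push script itself, `check_dumps ${BDIR}/${SYNCNAM} || exit 1` just before the remote rsync would do.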
Script 2, seatable-restore-sync.sh:
#!/bin/bash
printf "\nASKING YOUR PASSWORD FOR SUDO ACCESS\n"
sudo ls &> /dev/null
# Determine which server we are on
ip_address=$(hostname -i)
ip_home="<ip1>"
ip_lab="<ip2>"
if [[ "$ip_address" == "$ip_home" ]]; then
printf "\nWe are at home\n"
BDIR="/volume1/data/opt"
elif [[ "$ip_address" == "$ip_lab" ]]; then
printf "\nWe are at the lab\n"
BDIR="/volume1/Data/opt"
else
printf "\nNo usable IP address found -- ABORTING!\n"
exit 1
fi
# beware that this method will expose your mysql password in the process list and shell history of the docker host
source ${BDIR}/seatable-compose/.env
SYNCNAM="seatable-sync"
cd ${BDIR}
printf "\nRestore ccnet\n"
sudo docker exec -i "mariadb" "/usr/bin/mariadb" -u"root" -p${SEATABLE_MYSQL_ROOT_PASSWORD} ccnet_db < ${BDIR}/${SYNCNAM}/ccnet_db.sql
printf "\nRestore seafile\n"
sudo docker exec -i "mariadb" "/usr/bin/mariadb" -u"root" -p${SEATABLE_MYSQL_ROOT_PASSWORD} seafile_db < ${BDIR}/${SYNCNAM}/seafile_db.sql
printf "\nRestore dtable\n"
sudo docker exec -i "mariadb" "/usr/bin/mariadb" -u"root" -p${SEATABLE_MYSQL_ROOT_PASSWORD} dtable_db < ${BDIR}/${SYNCNAM}/dtable_db.sql
printf "\nRestore server data\n"
rm -f ${BDIR}/${SYNCNAM}/seatable/conf/dtable_web_settings.py
rsync -az ${BDIR}/${SYNCNAM}/seatable ${BDIR}/seatable-server
# NO BIG DATA REQUIRED YET; MAYBE TO BE IMPLEMENTED IN THE FUTURE
#sudo docker exec -it seatable-server /opt/seatable/scripts/seatable.sh restore-all
# domain change
printf "\nDomain change\n"
sudo docker exec -it seatable-server /bin/bash -c "/opt/seatable/scripts/seatable.sh python-env /opt/seatable/seatable-server-latest/dtable-web/manage.py domain_transfer -all -od http://<ip1>:50080 -nd http://<ip2>:50080"
printf "\n >>> I AM DONE !! <<<\n\n"
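On the quoting question from the issues list: in bash, double quotes around an argument that contains only letters, digits, dots, colons and slashes make no difference — the shell strips the quotes before the program sees the argument, so -nd http://<ip>:50080 and -nd "http://<ip>:50080" are passed identically to domain_transfer. A quick demonstration (192.0.2.1 is a placeholder documentation IP):

```shell
#!/bin/bash
# The shell removes quotes before passing arguments; both forms are identical.
show_arg() { printf '%s\n' "$1"; }
unquoted=$(show_arg http://192.0.2.1:50080)
quoted=$(show_arg "http://192.0.2.1:50080")
[ "$unquoted" = "$quoted" ] && echo "identical: $unquoted"
```

Quotes only become necessary if the value contains spaces or shell-special characters, which an <ip>:<port> URL does not.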