Startup Crash psycopg2.errors.DuplicateTable: relation "background_updates" already exists #16286
Comments
This code should only run if Synapse thinks you don't have a database set up, see here. What does
@clokep is empty:
Are the other expected tables there? I'm not sure what could have blanked the table.
@clokep any idea how I can fix that?
Yes, events exists:
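For anyone hitting the same state, the situation described above can be inspected directly in psql. This is only a sketch; it assumes you are connected to the Synapse database and that the schema matches the table names mentioned in this thread (`schema_version`, `events`):

```
-- Does schema_version still hold its single row?
-- (On a healthy v1.91.x install this should return one row, e.g. 80 | t.)
SELECT version, upgraded FROM schema_version;

-- Which Synapse tables exist at all? If most of them are present but
-- schema_version is empty, only that table was blanked.
SELECT tablename FROM pg_tables
 WHERE schemaname = 'public'
 ORDER BY tablename;
```

If the first query returns zero rows while the rest of the schema is intact, that matches the symptom in this issue.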
Hello, I have the same problem with a Synapse and PostgreSQL deployment on Docker Swarm, with storage on GlusterFS.
@aukfood do you have a backup? Maybe try to insert the data from schema_version manually from your backup into the table. That fixed this issue for me, but other issues spawned that I could not fix, so I had to start over.
I looked a bit into this, and that table is updated in a transaction, so I can't really see how it would get cleared out. Is it possible that multiple Synapse (main) containers started at the same time? Note that deleting the table manually is likely just to break the database further. If you previously had v1.91.2 installed, then the proper data for that table should be: INSERT INTO schema_version (version, upgraded) VALUES (80, true);
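A slightly more defensive version of that restore, sketched here as an illustration rather than an official fix: it wraps the INSERT in a transaction and only writes the row if the table is genuinely empty, so running it twice (or against a healthy database) does nothing. The version number 80 is taken from the comment above and applies to v1.91.x only; stop Synapse before running it.

```
BEGIN;
-- Re-insert the schema_version row only if it is missing.
INSERT INTO schema_version (version, upgraded)
SELECT 80, true
 WHERE NOT EXISTS (SELECT 1 FROM schema_version);
COMMIT;
```

If your previous Synapse version was different, look up the matching schema version rather than reusing 80.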
Same with a new deploy :(
This doesn't sound like quite the same issue then -- I'd really double-check that only a single main Synapse process is running at once, though. Running multiple is a recipe for database corruption and will not work.
Hmm, very strange: it works with local storage but not with GlusterFS storage ...
@aukfood are you using some kind of replication? |
@Y0ngg4n yes, GlusterFS storage replicated across 3 servers
@Y0ngg4n Are you also using some sort of storage replication? I can't quite figure out how kubegres works. If so, it sounds like there might be some sort of misconfiguration / incompatibility when using storage replication. |
@Y0ngg4n yes, GlusterFS is replicated storage
@aukfood It looks like GlusterFS is known to have incompatibilities with PostgreSQL: gluster/glusterfs#2056 @Y0ngg4n I'm guessing something weird happened with your cephfs; it is probably better to use PostgreSQL's application-level replication instead of file-system-level replication in this case. Regardless, neither of these feels like a bug in Synapse, but rather something broken in your file system.
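To make the "application-level replication" suggestion concrete, here is a minimal sketch of PostgreSQL streaming replication configuration. The host name and user are placeholders, not values from this thread; the settings themselves (`wal_level`, `max_wal_senders`, `primary_conninfo`) are standard PostgreSQL parameters:

```
# postgresql.conf on the primary:
wal_level = replica
max_wal_senders = 3

# postgresql.conf on each standby (PostgreSQL 12+), pointing at the
# primary over the network instead of sharing its data directory on
# GlusterFS/CephFS. 'pg-primary' and 'replicator' are placeholders.
primary_conninfo = 'host=pg-primary port=5432 user=replicator'
```

The key point is that each PostgreSQL instance keeps its own local data directory, and replication happens over the wire via WAL shipping, which avoids the file-locking and consistency assumptions that network file systems can violate.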
@clokep my PostgreSQL has Ceph replication and streaming PostgreSQL replication.
Description
So I restarted my Synapse container and it now throws this error.
If I delete the table and restart, it recreates the table but throws the error again. I did not change anything on the configuration side. There was also a discussion on Matrix:
https://matrix.to/#/!ehXvUhWNASUkSLvAGP:matrix.org/$cCeo3dJyaLOVDruxDQ85KtLf8Pf5HWl6Y9XC6O8MDUs?via=matrix.org&via=libera.chat&via=matrix.breakpointingbad.com
Steps to reproduce
Hard to reproduce.
Restart container.
Homeserver
https://hs.obco.pro
Synapse Version
v1.91.2
Installation Method
Other (please mention below)
Database
postgresql with Kubegres, single database, not ported, not restored
Workers
Single process
Platform
Kubernetes with Kubegres, the deployment is mentioned in the chat and below.
Configuration
Relevant log output
Anything else that would be useful to know?
No response