Issue with DB transactions in dev:console due to getmypid used in db connection key #1110
Comments
@maaarghk We did a rework of the Magento init process in the develop branch of the project.
oh wait, is that not an unstable version? If so, the upgrade --unstable command didn't work on the latest version haha
That's also optimized in #1089 😅, but the self-update runs in the current existing phar.
@maaarghk You can always manually download the development release here: https://files.magerun.net/n98-magerun2-dev.phar
Ok, sorry for the confusion, I am pretty sure I ran self update yesterday and got the unstable version (I remember ignoring the "do not use in production" warning). Today I confirmed this issue accurately describes the exact behaviour I see when running 7.0.0-dev (commit: 89ed0b2), so I'm pretty sure I just copied the wrong output into the initial post :)
@maaarghk ok. Then we have to figure out how we can solve the issue. I did some investigations in the past. What's going on in the bootstrap process is very complex.
@maaarghk I was able to reproduce the behavior with a saved customer model.
First save is good and can be seen instantly from another DB connection. The second save fails with a lock wait timeout. All subsequent saves appear to succeed but cannot be seen from other DB connections. I cannot work out for certain (because I'm running it on a prod site and can't mess around too much) whether:
- the save is using some mystery DB connection that I am unable to find a reference to using reflection; or
- the save is not actually succeeding and returns early for unknown reasons (custom logic in beforeSave / _save for orders, or plugins maybe?); or
- more likely, some third option that I'm miles away from guessing :)

(The reason I write off the second option is because
I guess we have to dive into
In your repro above, no data is changed between the first and second save - the issue of uncommitted or silently rolled back data can't be tested by opening another DB connection and attempting to read from there. It may be that the issue only occurs when there is an exception during save (i.e. one of the connections opens and rolls back a transaction), or when a transaction is opened at all?
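One way to poke at the "mystery connection" theory from the console would be to check whether a given handle has an open transaction, which PDO exposes directly via `inTransaction()`. A minimal, MySQL-free sketch follows; SQLite stands in for MySQL here, and `txOpen()` is an illustrative helper, not anything from Magento or magerun:

```php
<?php
// Illustrative helper (not Magento API): report whether THIS handle has an
// open transaction. Transaction state is per-connection, which is why a
// handle reachable via reflection may honestly report "no transaction" while
// some other, unreachable connection holds one.
function txOpen(\PDO $pdo): bool
{
    return $pdo->inTransaction();
}

$pdo = new \PDO('sqlite::memory:');
var_dump(txOpen($pdo)); // bool(false): no transaction yet

$pdo->beginTransaction();
var_dump(txOpen($pdo)); // bool(true): open on this handle only

$pdo->rollBack();
```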
@maaarghk Which OS are you using?
In my case the customer was successfully saved.
I'm on Linux (looks like 4.9.96 with custom patches added by the hosting company). If the issue only occurs when an exception is thrown during save, then this "savegame" fork seems relevant - one copy of the execution loop keeps a copy of the memory as it was pre-exception - https://github.com/bobthecow/psysh/blob/main/src/ExecutionLoop/ProcessForker.php#L220 - it would also mean that for me it would be the 3rd+ saves which fail, not the second. It makes sense that transactions would be lost in that situation, but I'm not sure how it recovers lost data like the entity IDs of unsaved comments on the order. Additionally, full php version info:
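The savegame-fork theory can be sketched without Magento at all: psysh's ProcessForker forks the REPL, the child inherits the parent's memory but gets a new PID, so any connection cache keyed on `getmypid()` (as in the `default_process_<pid>` key quoted later in this thread) stops matching. `connKey()` below is a hypothetical stand-in for that keying scheme, not Magento's actual code, and the snippet needs ext-pcntl (CLI):

```php
<?php
// connKey() is a hypothetical stand-in for a connection cache keyed by PID,
// mimicking the "default_process_<pid>" key mentioned in this issue.
function connKey(int $pid): string
{
    return 'default_process_' . $pid;
}

$parentKey = connKey(getmypid());

$pid = pcntl_fork(); // requires ext-pcntl; psysh's ProcessForker forks the same way
if ($pid === 0) {
    // Child: copy-on-write clone of the parent's memory, but a fresh PID.
    // A cache keyed on getmypid() misses here, so a brand-new MySQL
    // connection would be opened - one that knows nothing about any
    // transaction still open on the parent's connection.
    var_dump($parentKey === connKey(getmypid())); // bool(false): cache miss
    exit(0);
}
pcntl_waitpid($pid, $status);
```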
@maaarghk not very easy to figure out ;-)
n98 Version: 7.0.0-dev (commit: 89ed0b2)
mage version: 2.4.3-p3 Community
php version: 7.4.30 nts
In a dev:console session, orders can only be saved once. On subsequent saves, a lock wait timeout is thrown. I then find that e.g. `$dh->debugOrderById()` returns updated information which is not in sync with the database (including `entity_id`s for comment rows that are not visible from other connections). #502 mentions multiple transactions, but I think the issue could be multiple connections, as in #667, without context being carried from one to the next:
i.e. whenever a new instruction is called and a new runner process is forked, a new MySQL connection is opened, which would lose the context of any open transactions. Even so, I cannot find any way similar to the above which returns a PDO object with `inTransaction()` set to true, or which returns the missing rows when `query()->fetchAll()` is run. Possibly transactions are rolled back by the forked process on exit - not sure. I don't know enough about how forking works in php / psysh to square that with my previous statement about `debugOrderById` having `entity_id`s in it that were never committed -
`default_process_30098` from above references the PID of the REPL before it forks, so that may explain why the first save works but following ones do not.

Can repro this by using `$dh->getOrderRepository()` to load an order, updating e.g. "state", saving with `$repository->save($order)`, then updating "status" and saving again, which gets a lock wait timeout exception. At that point it is not possible to retry - calling `save()` multiple times will look like it has worked, but the database is never updated. (Have to use valid values of "status" as it gets validated in the resource model.)
I wasn't able to check whether this is a side effect of `\Zend_Db_Adapter_Exception` being thrown generally, or specific to `LockWaitException` being thrown during a transaction.
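For completeness, the "saved but invisible" symptom is exactly what per-connection transactions produce, and it can be reproduced without MySQL. In the sketch below SQLite stands in for MySQL (table and column names are just illustrative): connection A writes inside a transaction and sees its own row; connection B, like the forked child's fresh connection, does not.

```php
<?php
// Two handles to the same database file: a write inside A's transaction is
// visible to A but not to B until A commits. MySQL behaves the same way,
// except that a competing writer gets a lock wait timeout instead.
$dbFile = tempnam(sys_get_temp_dir(), 'txdemo');

$a = new \PDO('sqlite:' . $dbFile);
$a->setAttribute(\PDO::ATTR_ERRMODE, \PDO::ERRMODE_EXCEPTION);
$a->exec('CREATE TABLE sales_order (entity_id INTEGER PRIMARY KEY, status TEXT)');

$a->beginTransaction();
$a->exec("INSERT INTO sales_order (status) VALUES ('processing')");

// A sees its own uncommitted row...
$seenByA = (int) $a->query('SELECT COUNT(*) FROM sales_order')->fetchColumn();

// ...but B, a separate connection, does not.
$b = new \PDO('sqlite:' . $dbFile);
$seenByB = (int) $b->query('SELECT COUNT(*) FROM sales_order')->fetchColumn();

var_dump($seenByA, $seenByB); // int(1), int(0)

$a->rollBack();
unlink($dbFile);
```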