Expected behavior
When you hit "Execute" on a large datahub job, the operation that queues the items completes without error, even if it is slow.
Since queueing a large import can sometimes take longer than 30 seconds, the Execute web request should either return immediately and then poll the backend periodically for success or failure, or keep a long-lived connection (e.g. a socket) open that reports whether the queueing succeeded or failed. Making the UI wait on a single ordinary POST request leaves the request open long enough for the timeout to expire with an error.
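As a rough sketch of the first option (poll after an immediate acknowledgement), here is what the client side could look like. The endpoint paths, query parameter, and status payload below are hypothetical illustrations, not the actual datahub API:

```ts
// Sketch of the poll-after-accept pattern described above. The endpoints
// and the JSON status shape are hypothetical, not the real datahub API.
async function executeAndPoll(jobId: string): Promise<string> {
  // Kick off queueing; the hypothetical endpoint acknowledges immediately
  // instead of blocking until every item has been queued.
  await fetch(`/admin/datahub/import/execute?job=${jobId}`, { method: "POST" });

  // Poll a hypothetical status endpoint until queueing succeeds or fails.
  for (;;) {
    const res = await fetch(`/admin/datahub/import/status?job=${jobId}`);
    const { state } = (await res.json()) as { state: string };
    if (state !== "queueing") {
      return state; // e.g. "queued" or "failed"
    }
    await new Promise((resolve) => setTimeout(resolve, 2000)); // 2 s between polls
  }
}
```

With this shape, no single HTTP request has to outlive the queueing itself, so neither a client-side request timeout nor a gateway timeout can fire while the server is still working.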
Actual behavior
When you queue a large datahub job that takes more than 30 seconds to queue, Pimcore's hard-coded 30-second Ext.js request timeout expires, and a 503 Gateway Timeout error is shown on the datahub Execution screen. The queueing usually continues in the background on the server, and once it completes the progress bar appears with the queue items staged. For the user this is confusing: the error message suggests that filling the queue failed, yet the queue later shows up filled.
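As a possible stopgap until a proper fix lands, raising the client-side timeout might avoid the premature error. This is a sketch only, and it assumes the Execute request goes through the shared Ext.Ajax singleton with its 30 000 ms default, which is an assumption about Pimcore's internals rather than a verified fact:

```ts
// Stopgap sketch, not a fix: raise the default Ext JS request timeout so
// slow queueing requests are not cut off client-side after 30 seconds.
// Assumes the Execute request uses the shared Ext.Ajax singleton.
declare const Ext: any; // Ext JS global, assumed available in the Pimcore admin UI

Ext.Ajax.setTimeout(300_000); // five minutes, in milliseconds
```

Note that if the 503 actually originates from a proxy or gateway in front of PHP-FPM, that component's own timeout would still have to be raised separately.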
Steps to reproduce
Create a datahub data-importer job, e.g. of the CSV type.
Upload a large CSV file with millions of rows (a generator sketch follows these steps).
Go to the Execution tab.
Hit Execute.
If the processing to fill the queue takes longer than 30 seconds, a 503 Gateway timeout error will be thrown.
Later, the queue will appear with staged items and progress as if there had been no error.
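For the second step, a throwaway Node script is enough to produce a suitably large file; the column names and row count here are arbitrary, not taken from any real import configuration:

```ts
// Hypothetical repro helper: write a multi-million-row CSV for the import.
import { createWriteStream } from "node:fs";
import { once } from "node:events";

async function main(): Promise<void> {
  const out = createWriteStream("large-import.csv");
  out.write("id,name,price\n"); // arbitrary example columns
  for (let i = 0; i < 5_000_000; i++) {
    // Respect backpressure so millions of rows don't pile up in memory.
    if (!out.write(`${i},Product ${i},${(i % 100) + 0.99}\n`)) {
      await once(out, "drain");
    }
  }
  out.end();
}

main();
```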
Thanks a lot for reporting the issue. We did not consider the issue as "Pimcore:Priority", "Pimcore:ToDo" or "Pimcore:Backlog", so we're not going to work on that anytime soon. Please create a pull request to fix the issue if this is a bug report. We'll then review it as quickly as possible. If you're interested in contributing a feature, please contact us first here before creating a pull request. We'll then decide whether we'd accept it or not. Thanks for your understanding.