Trying out 3dfier on a dataset of approximately 6,000 polygon features and LAS data of around 400 million points ends in a silent exit of 3dfier with only a `std::bad_alloc` message. Tracing this down leads to the statically packaged JSON library https://github.com/tudelft3d/3dfier/blob/master/thirdparty/nlohmann-json/json.hpp
As far as I can interpret the output, all processing up to the file creation step completes correctly.
I tried this on a machine with 16 GB of RAM, so memory is clearly the bottleneck. Still, I think there is room for improvement: maybe write the file in steps rather than in one dump, or at least catch the exception to give some hint about why it fails (see the sketch below).
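For illustration, a minimal sketch of the "catch it and give a hint" idea, assuming the output step is wrapped in a helper; `serialize_cityjson()` is a hypothetical placeholder, not 3dfier's actual function:

```cpp
#include <iostream>
#include <new>     // std::bad_alloc
#include <string>

// Hypothetical wrapper: report an out-of-memory condition with a useful
// message instead of letting the program exit with a bare std::bad_alloc.
bool write_cityjson_guarded(const std::string& path) {
  try {
    // serialize_cityjson(path);  // placeholder for the real output routine
    return true;
  } catch (const std::bad_alloc&) {
    std::cerr << "ERROR: ran out of memory while serialising the CityJSON output to "
              << path << "\n"
              << "       Try a machine with more RAM, a smaller extent, or CityGML output.\n";
    return false;
  }
}
```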
Running the same job on a machine with 48 GB of RAM completes all processing and produces a proper output file.
I haven't run into this issue yet. I have created datasets with more than 6,000 polygons as CityJSON output before, but not many. What was the size of the CityJSON file produced on the 48 GB RAM machine?
Large file output with CityGML is fine since we implemented it ourselves by writing everything to a file stream. I'm not sure how the nlohmann json package handles this; I imagine it keeps the complete object in memory and writes it to disk once at the end.
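For what it's worth, nlohmann::json can serialise directly into an output stream, which at least avoids building the entire JSON text as a second in-memory string on top of the object itself. A minimal sketch, not 3dfier's actual output code:

```cpp
#include <fstream>
#include "json.hpp"  // bundled header, thirdparty/nlohmann-json/json.hpp

int main() {
  nlohmann::json j;
  j["type"] = "CityJSON";
  j["CityObjects"] = nlohmann::json::object();  // placeholder content

  std::ofstream out("output.json");
  out << j;            // streams the serialisation into the file
  // out << j.dump();  // by contrast, this materialises the full string first
}
```

The json object itself still has to fit in memory, so this only saves the serialised-string copy; truly incremental writing would need the hand-rolled streaming approach used for CityGML.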