Is there a way to reduce the size of output pdf files? #42
Comments
I second this request.
pdftoppm is producing PNGs. Maybe if it could be swapped over to JPEG, perhaps as an option, it would shrink the PDF.
Having just tested the plugin for the first time, I really feel this needs to be prioritised. Here are my file differences for comparison: original file size 59.1 MiB, output 438.7 MiB.

It seems that not only the file format but also the resolution, and possibly the colour space, of the image files could use some tweaking. I doubt that most scanned or otherwise rasterised PDFs come in as high as 300 dpi, so exporting PNGs at that resolution will definitely inflate the file size. Assuming these files are only used to generate OCR information -- i.e., colour elements from the original PDF remain intact in the OCR'ed file -- a compromise could be to export the page images as greyscale, which would shrink the file size by half and might also reduce image noise. Exporting pages as JPGs can also contribute to smaller files: if I save the same greyscale image as PNG and as JPG (90% quality), the latter is only a third of the PNG file size. But lowering JPG quality might also hurt the readability of the text.

Issue #23 suggests making image resolution configurable by the user, which could really help reduce the interim image file sizes, but it also makes the process more fidgety -- I know I would end up trying different resolutions to balance OCR quality against file size. It may be necessary instead to post-process the generated PDF; I can't tell from the Poppler documentation whether it offers any help with file compression. I resorted to an online PDF service, which reduced the 438.7 MiB PDF to 46.9 MiB with the OCR intact, but it would be nice to save the bandwidth and process the file locally -- especially since I have close to a hundred PDFs in my Zotero library that need the OCR treatment.
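For what it's worth, the interim page images could already be exported smaller using pdftoppm's own flags (this is just a sketch assuming poppler-utils is installed; the file names are placeholders, not what the plugin actually uses):

```shell
# Render pages as 150 dpi greyscale JPEGs instead of 300 dpi colour PNGs.
# -r sets the resolution, -gray drops the colour space,
# -jpeg/-jpegopt switch the output format and quality.
pdftoppm -r 150 -gray -jpeg -jpegopt quality=85 input.pdf page
```

Halving the resolution and switching to greyscale JPEG attacks all three size factors mentioned above at once, at the cost of some OCR accuracy on small print.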
I'm using https://github.com/ocrmypdf/OCRmyPDF as a manual workaround. Maybe it will be sufficient for your case.
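For anyone trying this workaround, a typical invocation might look like the following (file names and language code are examples, not from this thread):

```shell
# OCR a PDF and optimise it in one pass.
# --optimize 3 enables the most aggressive (lossy) image recompression;
# --jpeg-quality trades image fidelity for size.
ocrmypdf -l eng --optimize 3 --jpeg-quality 75 input.pdf output.pdf
```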
Back in June when I last tried this, I also resorted to using OCRmyPDF after trying zotero-ocr. |
Yeah, the size can become quite large. Tesseract itself creates the PDF from the input we give it. Tesseract would also run on JPG images, but the quality of the OCR output also depends on the input images and the colours.
Reducing the resolution like in pull request #41 would reduce the size a lot. Using JPEG 2000 files with lossy compression would allow really small PDF files. Ideally that should be implemented in Tesseract. |
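Until something like that lands in Tesseract, the generated PDF can also be shrunk after the fact with Ghostscript, which downsamples and recompresses embedded images while leaving the OCR text layer intact (a sketch, assuming Ghostscript is installed; file names are placeholders):

```shell
# /ebook downsamples images to roughly 150 dpi; /screen is smaller still
# but noticeably rougher on scanned text.
gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dPDFSETTINGS=/ebook \
   -dNOPAUSE -dBATCH -dQUIET -sOutputFile=slim.pdf big.pdf
```

This is essentially a local version of the online compression service mentioned above, so it also avoids uploading the files anywhere.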
I tested this OCR tool on some PDFs I downloaded from Academia.edu and the results were great. However, there's a problem: it increased the file size by A LOT (e.g., an 11.8 MB file turned into a 107 MB PDF).
I was hoping to use this tool to create searchable and conveniently highlightable PDFs using scans from physical books I have, but scanned files are normally huge on their own. When I ran zotero-ocr on one of my scans (257 MB) I ended up with a file that's over 2GB in size (it won't even open). :(
Is there something I can do to decrease the file sizes?
(I use Zotero 6.0.9 on Windows and have installed the latest version of zotero-ocr)