Issue -
When uploading files during a backup, if the size of the compressed data generated by SnappyOutputStream is a multiple of 512, the upload fails with the following exception:
```
ERROR [2017-07-12 12:26:25,087] com.mesosphere.dcos.cassandra.executor.tasks.UploadSnapshot: Upload snapshot failed
! java.io.IOException: Stream is already closed.
! at com.microsoft.azure.storage.blob.BlobOutputStreamInternal.close(BlobOutputStreamInternal.java:313) ~[azure-storage-4.2.0.jar:na]
! at java.io.FilterOutputStream.close(FilterOutputStream.java:159) ~[na:1.8.0_121]
! at com.mesosphere.dcos.cassandra.executor.backup.azure.PageBlobOutputStream.close(PageBlobOutputStream.java:77) ~[cassandra-executor.jar:na]
! at java.io.FilterOutputStream.close(FilterOutputStream.java:159) ~[na:1.8.0_121]
! at org.apache.commons.io.IOUtils.closeQuietly(IOUtils.java:339) ~[commons-io-2.5.jar:2.5]
! at org.apache.commons.io.IOUtils.closeQuietly(IOUtils.java:298) ~[commons-io-2.5.jar:2.5]
! at com.mesosphere.dcos.cassandra.executor.backup.AzureStorageDriver.uploadFile(AzureStorageDriver.java:147) ~[cassandra-executor.jar:na]
! ... 19 common frames omitted
! Causing: java.lang.IllegalArgumentException: Self-suppression not permitted
! at java.lang.Throwable.addSuppressed(Throwable.java:1043) ~[na:1.8.0_121]
! at java.io.FilterOutputStream.close(FilterOutputStream.java:159) ~[na:1.8.0_121]
! at com.mesosphere.dcos.cassandra.executor.backup.azure.PageBlobOutputStream.close(PageBlobOutputStream.java:77) ~[cassandra-executor.jar:na]
! at org.apache.commons.io.IOUtils.closeQuietly(IOUtils.java:339) ~[commons-io-2.5.jar:2.5]
! at org.apache.commons.io.IOUtils.closeQuietly(IOUtils.java:298) ~[commons-io-2.5.jar:2.5]
! at com.mesosphere.dcos.cassandra.executor.backup.AzureStorageDriver.uploadFile(AzureStorageDriver.java:148) ~[cassandra-executor.jar:na]
! at com.mesosphere.dcos.cassandra.executor.backup.AzureStorageDriver.lambda$uploadDirectory$0(AzureStorageDriver.java:120) ~[cassandra-executor.jar:na]
! at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184) ~[na:1.8.0_121]
! at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) ~[na:1.8.0_121]
! at java.util.Iterator.forEachRemaining(Iterator.java:116) ~[na:1.8.0_121]
! at java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801) ~[na:1.8.0_121]
! at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) ~[na:1.8.0_121]
! at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471) ~[na:1.8.0_121]
! at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151) ~[na:1.8.0_121]
! at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174) ~[na:1.8.0_121]
! at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) ~[na:1.8.0_121]
! at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418) ~[na:1.8.0_121]
! at com.mesosphere.dcos.cassandra.executor.backup.AzureStorageDriver.uploadDirectory(AzureStorageDriver.java:116) ~[cassandra-executor.jar:na]
! at com.mesosphere.dcos.cassandra.executor.backup.AzureStorageDriver.upload(AzureStorageDriver.java:91) ~[cassandra-executor.jar:na]
! at com.mesosphere.dcos.cassandra.executor.tasks.UploadSnapshot.run(UploadSnapshot.java:82) ~[cassandra-executor.jar:na]
! at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_121]
! at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_121]
! at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_121]
! at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_121]
! at java.lang.Thread.run(Thread.java:745) [na:1.8.0_121]
```
Reason -
PageBlobOutputStream does not write to the underlying stream until it is closed, at which point it pads the data to a multiple of 512 bytes. However, if a flush is called on this stream, the buffered data is flushed regardless of whether its size is a multiple of 512, and an exception is thrown from the underlying BlobOutputStreamInternal.
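Roughly, the behaviour described above can be pictured with the stand-in below. This is an illustrative sketch only, not the real PageBlobOutputStream (the class name and structure are made up): writes are buffered, close() zero-pads to a 512-byte page boundary before writing through, and flush() pushes the buffer down unpadded, which a page blob target rejects.

```java
import java.io.ByteArrayOutputStream;
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Illustrative stand-in only -- NOT the real PageBlobOutputStream.
class PaddingOnCloseOutputStream extends FilterOutputStream {
  private static final int PAGE_SIZE = 512;
  private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();

  PaddingOnCloseOutputStream(OutputStream out) {
    super(out);
  }

  @Override
  public void write(int b) {
    buffer.write(b);  // defer everything; nothing reaches 'out' yet
  }

  @Override
  public void flush() throws IOException {
    // The problematic path: the buffer goes down as-is. A page blob only
    // accepts writes aligned to 512-byte pages, so an unaligned size fails
    // in the underlying stream (BlobOutputStreamInternal in the real driver).
    buffer.writeTo(out);
    buffer.reset();
    out.flush();
  }

  @Override
  public void close() throws IOException {
    // The intended path: zero-pad to the next 512-byte boundary, then write once.
    int remainder = buffer.size() % PAGE_SIZE;
    if (remainder != 0) {
      byte[] padding = new byte[PAGE_SIZE - remainder];
      buffer.write(padding, 0, padding.length);
    }
    buffer.writeTo(out);
    buffer.reset();
    out.close();
  }
}
```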
In `uploadFile` in `AzureStorageDriver.java`:
```java
finally {
  IOUtils.closeQuietly(compress);  // super important that the compress close is called first in order to flush
  IOUtils.closeQuietly(bufferedOutputStream);
  IOUtils.closeQuietly(pageBlobOutputStream);
}
```
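For context, the three streams closed above are presumably chained roughly as sketched below (assuming the org.xerial.snappy SnappyOutputStream, and with a local file standing in for the Azure page blob, since the real PageBlobOutputStream constructor is not shown here). Closing `compress` first pushes the final Snappy block down through the buffer towards the page blob stream, which is why the comment stresses the ordering.

```java
import java.io.BufferedOutputStream;
import java.io.FileOutputStream;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;
import org.xerial.snappy.SnappyOutputStream;

public class StreamChainSketch {
  public static void main(String[] args) throws Exception {
    // Stand-in for the page-blob stream; the real driver wraps an Azure
    // blob stream in PageBlobOutputStream instead of writing to a file.
    OutputStream pageBlobOutputStream = new FileOutputStream("backup.part");
    OutputStream bufferedOutputStream = new BufferedOutputStream(pageBlobOutputStream);
    OutputStream compress = new SnappyOutputStream(bufferedOutputStream);

    compress.write("sstable bytes".getBytes(StandardCharsets.UTF_8));

    // Mirrors the finally block above: outermost stream first, so the last
    // Snappy block is flushed down through the buffer to the target.
    compress.close();
    bufferedOutputStream.close();
    pageBlobOutputStream.close();
  }
}
```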
In most cases the data size (generated from the Snappy output) is not a multiple of 512 when `compress` is closed, so the flush fails, the underlying PageBlobOutputStream is not closed, and the exception is recorded in the stream. All subsequent close calls simply ignore that exception.
In the special case where the data size from Snappy is a multiple of 512, the close on `compress` succeeds and the underlying page blob stream is closed. The subsequent close on `bufferedOutputStream` then fails inside the try-with-resources block of FilterOutputStream.close(), which is where the "Self-suppression not permitted" error above comes from.
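The "Self-suppression not permitted" part of the trace can be reproduced in isolation. The sketch below is illustrative and assumes the failure mode implied by the trace: an already-closed blob stream that rethrows the same cached IOException instance from both flush() and close(). On Java 8 (the runtime in the trace, 1.8.0_121), FilterOutputStream.close() runs flush() inside a try-with-resources over the wrapped stream, so when both calls throw the same instance, addSuppressed(self) blows up.

```java
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Minimal reproduction of "Self-suppression not permitted" (Java 8).
public class SelfSuppressionDemo {

  // Stand-in for an already-closed page blob stream that caches one error
  // instance and rethrows it on every subsequent call.
  static class AlreadyClosedStream extends OutputStream {
    private final IOException lastError = new IOException("Stream is already closed.");

    @Override
    public void write(int b) throws IOException { throw lastError; }

    @Override
    public void flush() throws IOException { throw lastError; }

    @Override
    public void close() throws IOException { throw lastError; }
  }

  public static void main(String[] args) {
    OutputStream wrapper = new FilterOutputStream(new AlreadyClosedStream());
    try {
      // On Java 8, FilterOutputStream.close() does:
      //   try (OutputStream ostream = out) { flush(); }
      // flush() and the implicit ostream.close() throw the same IOException
      // instance, so the try-with-resources calls e.addSuppressed(e), which
      // throws IllegalArgumentException: Self-suppression not permitted.
      wrapper.close();
    } catch (Exception e) {
      e.printStackTrace();
    }
  }
}
```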