I have seen this solution for writing in ELWC format, but it does not scale to huge data. Is there a way to do parallel writes with `tf.io.TFRecordWriter`, or any other solution that scales to big data?
Hi, I'm not the maintainer of this package, but since I have some experience with it, I'll try to answer this question.
The way I've done this is to use spark-tensorflow-connector (or spark-tfrecord if you need `partitionBy` support) and write the DataFrame out in SequenceExample format. Then, when you build the dataset, use SEQ instead of ELWC in the dataset builder. (Note that for the plugin to correctly recognize a feature as an example feature, you need to wrap it in an array of arrays of a primitive type.)
But if you do want ELWC format from Spark, what I've done instead is to construct the ELWC protos with an apply function, then for each partition write them into one TFRecord file and upload it to your cloud storage.