Chunk a Dataframe to Parquet
I have a large pandas dataframe in memory. How can I chunk it into Parquet files using Metaflow?
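One approach is to slice the dataframe into row chunks, serialize each chunk to Parquet with pyarrow, and upload the files with Metaflow's built-in `S3` client. The sketch below makes a few illustrative assumptions not in the question: the flow name `ChunkDataframeFlow`, the example dataframe, and the chunk size of 100,000 rows.

```python
import math

import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq
from metaflow import FlowSpec, S3, step


class ChunkDataframeFlow(FlowSpec):
    """Hypothetical flow that writes a dataframe to S3 as Parquet chunks."""

    @step
    def start(self):
        # Stand-in for your large in-memory dataframe.
        self.df = pd.DataFrame({"value": range(1_000_000)})
        self.next(self.write_chunks)

    @step
    def write_chunks(self):
        chunk_size = 100_000  # rows per Parquet file; tune to your memory budget
        n_chunks = math.ceil(len(self.df) / chunk_size)
        # S3(run=self) stores objects under this run's own S3 location,
        # so the chunks are versioned alongside the run's other artifacts.
        with S3(run=self) as s3:
            for i in range(n_chunks):
                chunk = self.df.iloc[i * chunk_size : (i + 1) * chunk_size]
                # Serialize the chunk to an in-memory Parquet buffer.
                buf = pa.BufferOutputStream()
                pq.write_table(pa.Table.from_pandas(chunk), buf)
                s3.put("chunk_%04d.parquet" % i, buf.getvalue().to_pybytes())
        self.next(self.end)

    @step
    def end(self):
        pass


if __name__ == "__main__":
    ChunkDataframeFlow()
```

Running the flow as usual (e.g. `python chunk_flow.py run`, assuming that file name) uploads one Parquet file per chunk. Serializing each chunk to an in-memory buffer avoids writing a second full copy of the dataframe to local disk; if individual chunks are still too large, lower `chunk_size`.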
Read Parquet Files from S3 into an Arrow Table
I have a Parquet dataset stored in AWS S3 and want to access it in a Metaflow flow. How can I read one or several Parquet files at once from a flow and load them into an Arrow table?
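A minimal sketch of the reading side, assuming the dataset lives under an example prefix `s3://my-bucket/my-dataset/`: Metaflow's `S3` client downloads the objects to local temporary files, and `pyarrow.parquet` reads them together as a single Arrow table. The table must be built inside the `with` block, since the temporary files are cleaned up when the client closes.

```python
import pyarrow.parquet as pq
from metaflow import FlowSpec, S3, step


class ParquetFromS3Flow(FlowSpec):
    """Hypothetical flow that loads a Parquet dataset from S3 into Arrow."""

    @step
    def start(self):
        # s3root is an assumed example prefix; point it at your dataset.
        with S3(s3root="s3://my-bucket/my-dataset/") as s3:
            # Download every object under the prefix; each returned object
            # exposes .path, the local path of the downloaded file.
            objs = s3.get_all()
            # Read all the downloaded files as one Arrow table.
            table = pq.ParquetDataset([obj.path for obj in objs]).read()
        print("rows:", table.num_rows)
        self.next(self.end)

    @step
    def end(self):
        pass


if __name__ == "__main__":
    ParquetFromS3Flow()
```

For a single file, `s3.get("part-0000.parquet")` (key name illustrative) fetches one object, and `pq.read_table(obj.path)` turns it into an Arrow table directly.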