I'm new to vaex. My scenario has three stages:
calculation: compute and save the results into a bunch of big files
preparation: select some columns and rows from each of those big files, concatenate the selections, and save the whole dataframe into one big file
loading: another process needs to load that file into memory very fast.
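For concreteness, here is a rough sketch of that pipeline (file names, column names, and the 5% fraction below are placeholders, not my real setup):

import vaex

files = ["result-0.hdf5", "result-1.hdf5"]  # placeholder input files
x_cols = ["x", "y"]                         # placeholder column subset

# preparation: sample a small fraction of each big file, concat, export once
dfs = [vaex.open(f)[x_cols].sample(frac=0.05) for f in files]
vaex.concat(dfs).export("cache.arrow")

# loading (in another process): arrow/hdf5 files are memory-mapped,
# so open() itself is cheap and data is paged in on access
df = vaex.open("cache.arrow")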
Note that each big file is large, about 1/3 of my total memory, so loading them all at once would cause an OOM. I need to sample each file at a rather small fraction, say 5% (the sampling can be done during the calculation step), ideally with precise preallocation and no copying, and then concatenate the samples into one file as big as my memory can hold. The file format must support memory mapping so I can load everything at once; reading must be super fast, and writing must not be too slow. My test results are save: arrow > feather >> parquet >>> hdf5, and load: arrow = parquet = hdf5 >> feather, which does not match the official docs.
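The format comparison can be reproduced with a small loop along these lines (using vaex.example() as stand-in data, and assuming a vaex version where export() dispatches on all four extensions; actual timings of course depend on the data and disk):

import time
import vaex

df = vaex.example()  # stand-in data; replace with a real result file

for ext in ("arrow", "feather", "parquet", "hdf5"):
    path = f"cache.{ext}"
    t0 = time.perf_counter()
    df.export(path)          # vaex picks the writer from the extension
    t1 = time.perf_counter()
    df2 = vaex.open(path)
    df2.x.sum()              # force an actual read of one column
    t2 = time.perf_counter()
    print(f"{ext}: save={t1 - t0:.2f}s load={t2 - t1:.2f}s")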
I've tried several approaches, but their performance differs dramatically.
For example, this one runs on only one core and is therefore extremely slow (note export() returns None, so there is no point assigning its result):

vaex.concat([vaex.open(f)[x_cols][i:j] for f in files]).export('cache.arrow')
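One workaround I am considering (I'm not sure it is the intended best practice): export a small sample per input file first, so each export is an independent job, and only then combine the much smaller sample files:

# Assumption: per-file exports keep each job small and independent;
# the final combine then only touches ~5% of the original data.
for k, f in enumerate(files):
    vaex.open(f)[x_cols].sample(frac=0.05).export(f"sample-{k}.arrow")

samples = [f"sample-{k}.arrow" for k in range(len(files))]
vaex.open_many(samples).export("cache.arrow")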
This one is rather slow too:

vaex.open_many(files)[x_cols].sample(frac=0.05).export('cache.arrow')
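My guess (unverified) is that sample() materializes a random take, and random row access over memory-mapped files is much slower than a sequential scan. If a contiguous slice were acceptable instead of a true random sample, something like this should be far cheaper:

df = vaex.open_many(files)[x_cols]
n = int(len(df) * 0.05)
df[:n].export('cache.arrow')  # sequential slice: no random 'take' step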
So, could someone please give me some guidance on the best practice here?