kedro.extras.datasets.pandas.ParquetDataSet

class kedro.extras.datasets.pandas.ParquetDataSet(filepath, load_args=None, save_args=None, version=None, credentials=None, fs_args=None)[source]

    Bases: kedro.io.core.AbstractVersionedDataSet

    ParquetDataSet loads/saves data from/to a Parquet file using an underlying
    filesystem (e.g. local, S3, GCS). It uses pandas to handle the Parquet file.

    Example:

        from kedro.extras.datasets.pandas import ParquetDataSet
        import pandas as pd

        data = pd.DataFrame({'col1': [1, 2], 'col2': [4, 5], 'col3': [5, 6]})

        # data_set = ParquetDataSet(filepath="gcs://bucket/test.parquet")
        data_set = ParquetDataSet(filepath="test.parquet")
        data_set.save(data)
        reloaded = data_set.load()
        assert data.equals(reloaded)
Attributes

    ParquetDataSet.DEFAULT_LOAD_ARGS
    ParquetDataSet.DEFAULT_SAVE_ARGS

Methods

    ParquetDataSet.__init__(filepath[, ...])
        Creates a new instance of ParquetDataSet pointing to a concrete
        Parquet file on a specific filesystem.
    ParquetDataSet.exists()
        Checks whether a data set's output already exists by calling the
        provided _exists() method.
    ParquetDataSet.from_config(name, config[, ...])
        Create a data set instance using the configuration provided.
    ParquetDataSet.load()
        Loads data by delegation to the provided load method.
    ParquetDataSet.release()
        Release any cached data.
    ParquetDataSet.resolve_load_version()
        Compute the version the dataset should be loaded with.
    ParquetDataSet.resolve_save_version()
        Compute the version the dataset should be saved with.
    ParquetDataSet.save(data)
        Saves data by delegation to the provided save method.
DEFAULT_LOAD_ARGS = {}

DEFAULT_SAVE_ARGS = {}
__init__(filepath, load_args=None, save_args=None, version=None, credentials=None, fs_args=None)[source]

    Creates a new instance of ParquetDataSet pointing to a concrete Parquet
    file on a specific filesystem.

    Parameters:
        filepath (str) – Filepath in POSIX format to a Parquet file prefixed
            with a protocol like s3://. If no prefix is provided, the file
            protocol (local filesystem) will be used. The prefix can be any
            protocol supported by fsspec. It can also be a path to a
            directory; a directory can be used for reading partitioned
            Parquet files. Note: http(s) doesn't support versioning.
        load_args (Optional[Dict[str, Any]]) – Additional options for
            loading Parquet file(s). All available arguments for reading a
            single file are listed at
            https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_parquet.html
            and all available arguments for reading partitioned datasets at
            https://arrow.apache.org/docs/python/generated/pyarrow.parquet.ParquetDataset.html#pyarrow.parquet.ParquetDataset.read
            All defaults are preserved.
        save_args (Optional[Dict[str, Any]]) – Additional saving options for
            pyarrow.parquet.write_table and pyarrow.Table.from_pandas. All
            available arguments for write_table() are listed at
            https://arrow.apache.org/docs/python/generated/pyarrow.parquet.write_table.html?highlight=write_table#pyarrow.parquet.write_table
            The arguments for from_pandas() should be passed through a
            nested key: from_pandas, e.g.
            save_args = {"from_pandas": {"preserve_index": False}}. All
            available arguments for from_pandas() are listed at
            https://arrow.apache.org/docs/python/generated/pyarrow.Table.html#pyarrow.Table.from_pandas
        version (Optional[Version]) – If specified, should be an instance of
            kedro.io.core.Version. If its load attribute is None, the
            latest version will be loaded. If its save attribute is None,
            the save version will be autogenerated.
        credentials (Optional[Dict[str, Any]]) – Credentials required to get
            access to the underlying filesystem, e.g. for GCSFileSystem it
            should look like {"token": None}.
        fs_args (Optional[Dict[str, Any]]) – Extra arguments to pass into
            the underlying filesystem class constructor (e.g. {"project":
            "my-project"} for GCSFileSystem), as well as to pass to the
            filesystem's open method through the nested keys open_args_load
            and open_args_save. All available arguments for open are listed
            at
            https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.spec.AbstractFileSystem.open
            All defaults are preserved.

    Return type: None
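As an illustration of the nested from_pandas key described above, a hypothetical pair of load_args/save_args dictionaries might look like this (the option values are examples chosen for illustration, not defaults):

    # Hypothetical options; top-level save_args keys go to
    # pyarrow.parquet.write_table, while the nested "from_pandas" dict
    # goes to pyarrow.Table.from_pandas. load_args keys are forwarded
    # to pandas.read_parquet.
    load_args = {"columns": ["col1", "col2"]}  # read only these columns

    save_args = {
        "compression": "snappy",                   # write_table option
        "from_pandas": {"preserve_index": False},  # from_pandas option
    }

    # data_set = ParquetDataSet("test.parquet",
    #                           load_args=load_args, save_args=save_args)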
exists()

    Checks whether a data set's output already exists by calling the
    provided _exists() method.

    Return type: bool
    Returns: Flag indicating whether the output already exists.
    Raises: DataSetError – when the underlying exists method raises an error.
classmethod from_config(name, config, load_version=None, save_version=None)

    Create a data set instance using the configuration provided.

    Parameters:
        name (str) – Data set name.
        config (Dict[str, Any]) – Data set config dictionary.
        load_version (Optional[str]) – Version string to be used for the
            load operation if the data set is versioned. Has no effect on
            the data set if versioning was not enabled.
        save_version (Optional[str]) – Version string to be used for the
            save operation if the data set is versioned. Has no effect on
            the data set if versioning was not enabled.

    Return type: AbstractDataSet
    Returns: An instance of an AbstractDataSet subclass.
    Raises: DataSetError – When the function fails to create the data set
        from its config.
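A sketch of the kind of config dictionary from_config consumes, in the catalog-style layout Kedro uses (the filepath and option values here are hypothetical, chosen for illustration):

    # Hypothetical catalog-style config; "type" selects the dataset class,
    # and the remaining keys mirror the __init__ parameters documented above.
    config = {
        "type": "pandas.ParquetDataSet",
        "filepath": "data/01_raw/example.parquet",
        "save_args": {"compression": "gzip"},
    }

    # data_set = ParquetDataSet.from_config("example_dataset", config)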
load()

    Loads data by delegation to the provided load method.

    Return type: Any
    Returns: Data returned by the provided load method.
    Raises: DataSetError – When the underlying load method raises an error.
release()

    Release any cached data.

    Return type: None
    Raises: DataSetError – when the underlying release method raises an error.
resolve_load_version()

    Compute the version the dataset should be loaded with.

    Return type: Optional[str]
resolve_save_version()

    Compute the version the dataset should be saved with.

    Return type: Optional[str]
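The version strings these two methods resolve come from the Version object passed to __init__. Per the parameter description above, Version is a pair of load/save version strings where None means "latest" (for load) or "autogenerate" (for save); it is modelled here with a local namedtuple stand-in so the sketch runs without Kedro installed:

    from collections import namedtuple

    # Local stand-in for kedro.io.core.Version, for illustration only.
    Version = namedtuple("Version", ["load", "save"])

    # load=None  -> resolve_load_version picks the latest saved version;
    # save=None  -> a save version would be autogenerated instead.
    version = Version(load=None, save="2021-01-01T00.00.00.000Z")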
save(data)

    Saves data by delegation to the provided save method.

    Parameters: data (Any) – the value to be saved by the provided save method.
    Return type: None
    Raises: DataSetError – when the underlying save method raises an error.