Feast operationalizes your offline data so it’s available for real-time predictions, without requiring you to build custom pipelines.
Ensure consistency across training and serving
Feast guarantees you’re serving the same data to models during training and inference, eliminating training-serving skew.
Reuse your current infrastructure
Feast doesn’t require the deployment and ongoing management of dedicated infrastructure.
It runs on top of managed cloud services, reusing your existing infrastructure and spinning up new resources when needed.
Standardize your data workflows across teams
Feast brings standardization and consistency to your data engineering workflows across models and teams. Many teams use Feast as the foundation of their internal ML platforms.
Is Feast a feature computation system?
No. Even though some feature stores include transformations, Feast purely manages feature retrieval. Feast is used alongside a separate system that computes feature values. Most often, these are pipelines written in SQL or a Python dataframe library and scheduled to run periodically.
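To make this concrete, here is a minimal sketch of the kind of upstream computation Feast sits alongside: a pandas job that aggregates raw events into feature values. The table and column names are illustrative, not part of Feast; in practice a scheduled job like this would write its output to the warehouse table or file that Feast's offline store reads from.

```python
import pandas as pd

# Hypothetical raw events table; in practice this would be read from a
# warehouse table or data lake path.
events = pd.DataFrame({
    "driver_id": [1, 1, 2],
    "trip_completed": [1, 0, 1],
    "event_timestamp": pd.to_datetime(
        ["2023-01-01", "2023-01-02", "2023-01-01"]
    ),
})

# The "feature computation" step: aggregate raw events into feature values.
features = (
    events.groupby("driver_id")
    .agg(
        trips_total=("trip_completed", "count"),
        completion_rate=("trip_completed", "mean"),
        event_timestamp=("event_timestamp", "max"),
    )
    .reset_index()
)

# `features` now holds one row of feature values per driver; a scheduled
# pipeline would persist it where Feast's offline store can retrieve it.
print(features)
```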
If you need a managed feature store that provides feature computation, check out Tecton.
How do I install and run Feast?
Feast is a Python library with an optional CLI. You can install it with pip.
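A typical first run might look like the following (the repository name is an illustrative choice, and the scaffolded layout can differ between Feast versions):

```shell
# Install the Feast Python library and CLI from PyPI.
pip install feast

# Scaffold an example feature repository ("my_feature_repo" is a
# hypothetical name) and register its feature definitions.
feast init my_feature_repo
cd my_feature_repo
feast apply
```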
You might want to periodically run certain Feast commands (e.g. `feast materialize-incremental`, which updates the online store). We recommend using a scheduler such as Airflow or Cloud Composer for this.
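As a minimal sketch of this scheduling setup, assuming Airflow 2.x: a DAG with a single `BashOperator` that materializes feature values up to the current time once an hour. The DAG id, schedule, and paths are illustrative assumptions, not part of Feast.

```python
# Hypothetical Airflow 2.x DAG that runs `feast materialize-incremental`
# on a schedule; the dag_id and interval below are illustrative choices.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="feast_materialize",       # hypothetical name
    start_date=datetime(2023, 1, 1),
    schedule_interval="@hourly",      # materialize once an hour
    catchup=False,
) as dag:
    # Load feature values up to "now" into the online store. Assumes the
    # command runs from (or is pointed at) your feature repository.
    materialize = BashOperator(
        task_id="materialize_incremental",
        bash_command='feast materialize-incremental $(date -u +"%Y-%m-%dT%H:%M:%S")',
    )
```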