Hazy can be installed and deployed in one of two formats:
- Single container (SC) runs as a self-contained service & UI for training, generation, and model management within a single OCI container runtime (e.g. Docker),
- Distributed architecture (DA) runs multiple containerised services (orchestrated with Kubernetes) to enable elastically scalable training and generation workloads.
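As an illustrative sketch only, a single container deployment follows the standard Docker workflow. The registry host, image path, tag, port, and volume names below are placeholders, not actual Hazy registry paths; substitute the values supplied with your licence.

```shell
# Hypothetical example — replace registry.example.com and the image
# reference with the details provided by Hazy.
docker login registry.example.com                        # authenticate to the registry
docker pull registry.example.com/hazy/multi-table:latest # fetch the container image
docker run -d \
  --name hazy \
  -p 8080:8080 \              # placeholder port for the browser-based Hub
  -v hazy-data:/data \        # placeholder volume to persist trained models
  registry.example.com/hazy/multi-table:latest
```

The same image serves training, generation, and the Hub, so no additional containers are required for a basic SC deployment.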
Both of these installation types have a few common components & concepts:
- The Synthesiser is the Hazy inference engine, providing the core capabilities for training and generation,
- Models are trained representations of your source data, used to generate synthetic data on demand,
- The Hub provides a browser-based portal to training, generation, your models, and performance metrics.
All of the above are encapsulated in the single multi-table container image, available from the Hazy container registry.
SC deployments can be run alongside the following services for enhanced functionality (without the requirement for Kubernetes):
- The Auth service provides IAM and authentication services to integrate with your enterprise security platforms.
DA deployments support all of the above and also include the following components:
- The Dispatcher is an orchestration service used to enable scalable, elastic Hazy job scheduling on Kubernetes.
While the installation methods for these options differ, both rely on pulling Hazy images from our container registry and must run on servers with suitable resources, as outlined in Requirements.
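For air-gapped environments, the pull step can be sketched as a save-and-load transfer. This is a generic Docker pattern, not Hazy-specific tooling, and the image reference is a placeholder:

```shell
# Hypothetical example — on a machine with registry access:
docker pull registry.example.com/hazy/multi-table:latest
docker save registry.example.com/hazy/multi-table:latest -o hazy-image.tar

# Transfer hazy-image.tar to the target server, then:
docker load -i hazy-image.tar   # restores the image into the local daemon
```

See Importing container images below for the supported procedure for your environment.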
For more detailed instructions about securing your Hazy installation see Security.
- Importing container images
- Single container install
- Distributed arch. install
- AWS Marketplace install
- Standalone Synth deployment
- Python SDK install
- Upgrade guide
- Identity and Access Management
- Secrets Manager