Lentiq's architecture has been designed to leverage cloud-native services to deliver cost efficiency, simplicity, effectiveness and, above all, resilience.
Lentiq is a SaaS offering with two major layers:
- The Lentiq Control Layer, a microservices-based layer that handles provisioning and stores data documentation, shared notebooks, Docker images, workflows, etc.
- One or more "data pools": fully managed, Kubernetes-based execution environments coupled with object storage buckets. A data pool cannot span multiple cloud providers, but different data pools can be deployed on different cloud providers and are able to share data and code.
Cloud services used
Lentiq offers a cloud portability layer and as such abstracts away cloud-specific services, offering a seamless user experience. Data pools can be deployed in a customer's own cloud environment (such as a VPC or a Google Project) to leverage the existing security context, encryption keys, company-wide infrastructure contracts, etc.
For each data pool, Lentiq provisions several services from a cloud provider, on behalf of the customer:
- Kubernetes clusters (Google's GKE, Amazon EKS, etc.)
- Object storage buckets (S3, GCS, etc.)
- Security keys and ACL rules (IAM, etc.)
- Firewall rules (cloud specific)
- Load balancer services (ELB, etc.)
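To make the per-data-pool footprint concrete, here is a minimal sketch of the resources listed above, expressed as a Python dict. All names and fields are illustrative assumptions, not Lentiq's actual provisioning API.

```python
# Hypothetical sketch of the cloud resources provisioned per data pool.
# Field names and values are illustrative, not Lentiq's real API.
data_pool = {
    "name": "analytics-pool",
    "cloud": "gcp",                          # provider hosting this pool
    "kubernetes_cluster": "gke-analytics",   # GKE / EKS cluster
    "object_storage_buckets": ["gs://analytics-pool-data"],
    "iam": {"service_account": "analytics-pool-sa"},
    "firewall_rules": [],                    # all access disabled by default
    "load_balancers": [],
}

# A second data pool on another provider can coexist with the first
# and still share data and code through the control layer.
second_pool = {
    **data_pool,
    "name": "ml-pool",
    "cloud": "aws",
    "kubernetes_cluster": "eks-ml",
    "object_storage_buckets": ["s3://ml-pool-data"],
}
```

Because each data pool is just this bundle of primitive services, the same shape works across providers; only the provider-specific identifiers change.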
Beyond these primitive services, Lentiq strives to use as few cloud-specific services as possible, so that on-premises portability remains achievable. It is perfectly possible to use a cloud provider's unique services, such as BigQuery or SageMaker, directly, but doing so would of course invalidate the portability objective.
Only the services used by the customer's data pool generate infrastructure costs for the user; the control layer's costs are covered by the Lentiq service fee. To increase the level of security and prevent customer data from leaving the infrastructure, Lentiq runs several small management applications in a dedicated namespace within each data pool. These provide provisioning and data management services and improve performance.
Lentiq takes care of multiple aspects with respect to data:
- Cross-cloud data portability
- Data location & security
- Data sharing & discovery
- Data documentation
- Schema sharing
- Structured data statistics
- Cross-cloud data movement
The portability layer is a library that is automatically loaded by a Spark instance and used to translate any Lentiq-specific file path of the form bdl://data-lake-name.project-name/file-path into a cloud provider-specific file path and protocol (such as an S3 or GCS path). By default, a bare /file-name is expanded to the project's path, bdl://data-lake-name.project-name/file-name. The cloud provider's access key is automatically loaded from the Vault service.
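The path translation can be sketched in a few lines of Python. This is a minimal illustration of the idea, not Lentiq's actual library: the function name, the bucket mapping, and the default lake/project names are all assumptions.

```python
# Minimal sketch of bdl:// path resolution (illustrative, not Lentiq's code).
# Maps (data lake, project) to a hypothetical provider-specific bucket prefix.
BUCKET_MAP = {
    ("my-lake", "my-project"): "gs://my-lake-my-project",
}

def resolve_bdl_path(path, default_lake="my-lake", default_project="my-project"):
    """Translate a bdl://data-lake.project/file-path into a provider path."""
    if path.startswith("/"):
        # A bare /file-name is first expanded to the project's bdl:// path.
        path = f"bdl://{default_lake}.{default_project}{path}"
    if not path.startswith("bdl://"):
        return path  # already a provider-specific path
    rest = path[len("bdl://"):]
    location, _, file_path = rest.partition("/")
    lake, _, project = location.partition(".")
    return f"{BUCKET_MAP[(lake, project)]}/{file_path}"
```

For example, `resolve_bdl_path("/data.csv")` would first expand to `bdl://my-lake.my-project/data.csv` and then resolve to `gs://my-lake-my-project/data.csv`. In the real library this happens transparently inside Spark's file access layer.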
Currently, data access needs to be done via Spark, the SFTP proxy, or the BDL CLI tool. In the future, it will be possible to access the data directly from Python.
There are two metastore services in Lentiq that work together:
- The data documentation metadata service, which is hosted in Lentiq's control services.
- The Hive metastore database. This database is used by Hive and Spark (SparkSQL) to share table schemas. We added several extra metadata tables to extend the database with our own data. When a table is created in Spark, it becomes available in the data management view, under the Table Browser tab. For performance reasons, this metastore service is part of the "Lentiq Services" layer that runs within the client's data pool.
More information on this subject can be found in the Data Management user guide.
The underlying Kubernetes cluster can be scaled up and down by the user from the Lentiq interface. While it is possible to access the kubectl interface, we discourage direct use of the Kubernetes cluster to avoid interference. For example, a separate Kubernetes cluster should be created for additional services, such as a microservices-based web application that consumes the inference API exposed by the data pool's model server.
A data pool's resources can be further split between teams using projects. A project is not only a resource budgeting mechanism but also a security context: each project has an associated namespace, object storage keys, etc.
Lentiq uses its own authentication and authorization mechanism linked to the entire application and the object storage abstraction layer. A single Lentiq API key will be automatically translated into the appropriate cloud provider's Object Storage key depending on the data being accessed. The cloud provider keys are encrypted and stored into a special Vault service.
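The key-translation flow described above can be illustrated with a small Python sketch. Everything here is hypothetical: the `storage_key_for` function, the in-memory `VAULT` and `AUTHORIZED` tables, and the key values stand in for Lentiq's real authentication service and Vault store.

```python
# Illustrative sketch (not Lentiq's real services): a single Lentiq API key
# is exchanged for the provider-specific object storage key of the data
# being accessed; provider keys are held encrypted in a Vault-like store.
from dataclasses import dataclass

@dataclass
class VaultEntry:
    provider: str
    encrypted_key: bytes  # stored encrypted, decrypted only on authorized use

VAULT = {  # keyed by (data lake, project); contents are hypothetical
    ("my-lake", "my-project"): VaultEntry("gcs", b"<encrypted GCS key>"),
}

AUTHORIZED = {  # which (lake, project) pairs each Lentiq API key may access
    "lentiq-api-key-123": {("my-lake", "my-project")},
}

def storage_key_for(api_key, lake, project):
    """Return the provider key for a bucket, after an authorization check."""
    if (lake, project) not in AUTHORIZED.get(api_key, set()):
        raise PermissionError("API key not authorized for this project")
    return VAULT[(lake, project)]
```

The point of the indirection is that users only ever handle the Lentiq API key; the provider keys never leave the Vault unencrypted except for an authorized access.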
Lentiq's access to a customer's data pool security context (Kubernetes cluster and object storage) is only possible programmatically and has to be explicitly allowed by the authentication service. Lentiq operators have no direct access to the data pool or to object storage keys.
Lentiq also manages the firewall rules for each application deployed. By default, all access is disabled and needs to be explicitly allowed.
Most services in a data pool are not designed to be exposed to the outside world. Some, however, expose diagnostic interfaces (Apache Spark) or ingestion endpoints (Apache Kafka) that need to be reachable. As such, we provision Kubernetes services or load balancer services, depending on the situation. The UI clearly differentiates between the two.
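In Kubernetes terms, the distinction comes down to the Service type. The two manifests below are a generic illustration of that distinction; the service names, selectors, and ports are assumptions, not Lentiq's actual deployments.

```yaml
# Internal-only service: reachable inside the cluster, no external exposure.
apiVersion: v1
kind: Service
metadata:
  name: spark-ui          # name is illustrative
spec:
  type: ClusterIP
  selector:
    app: spark
  ports:
    - port: 4040
---
# Externally exposed endpoint: the cloud provider provisions a load balancer.
apiVersion: v1
kind: Service
metadata:
  name: kafka-ingest      # name is illustrative
spec:
  type: LoadBalancer
  selector:
    app: kafka
  ports:
    - port: 9092
```

A `ClusterIP` service is only routable within the cluster, while a `LoadBalancer` service asks the cloud provider (ELB, GCP load balancing, etc.) for an externally reachable endpoint, which is why the two are surfaced differently in the UI.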
To find out more about why we built on Kubernetes and why we chose this architecture, check out some of our blog posts: