Jupyter as a Service On Lentiq EdgeLake

Share and reproduce your data science projects with your team in a few clicks

Collaborate on your experiments using a reproducible, scalable and flexible Jupyter environment. Deliver robust results and get insights quickly.

Get Early Access

Data Science as it should be:
efficient & collaborative

Jupyter as a Service running on Lentiq EdgeLake lets you share your projects with other team members, increasing collaboration and efficiency.

  • Bundle code, data, and results and easily share them with your team
  • Publish and share embedded models and notebooks with other team members
  • Test new hypotheses by improving an existing, already-trained model
  • Easily use curated datasets and notebooks that are documented and explainable

1. Kelly from the marketing data team shares a customer behavior analysis model and the associated data.

2. Jim, looking for something similar for his customer churn prediction use case, reviews the work, runs the experiment, spots an unintended error, and flags it to Kelly.

3. Kelly corrects the error, improves the model, and republishes it; Jim then extends it for his own use case.

Scale your Data Science and AI workloads

Scale and distribute machine learning

Accelerate model training and use distributed computing that matches the size of the dataset and the complexity of the task.

  • Scale the resources allocated to each Jupyter Notebook instance
  • Offload resource-intensive jobs to a managed Spark cluster
  • Automatically scale the Spark cluster as needed, through both the UI and the API
  • Switch to distributed data frame implementations such as Dask, Ray or Spark (see the sketch below)

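For example, a notebook that outgrows pandas can move to a distributed data frame with only minor changes. The sketch below uses Dask; the file path and column names are illustrative assumptions, not part of the product.

import dask.dataframe as dd

# Read a dataset that is too large for the notebook's memory; Dask splits it
# into partitions and processes them in parallel. The path is hypothetical.
df = dd.read_csv("s3://my-datalake/events/*.csv")

# The familiar pandas-style API still applies; work stays lazy until .compute()
per_customer = (
    df[df["event_type"] == "purchase"]
    .groupby("customer_id")["amount"]
    .sum()
    .compute()
)
print(per_customer.head())
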
Automated, NoOps infrastructure management

Channel your energy into the science rather than the ops, with out-of-the-box, automated infrastructure management.

  • Fully managed AWS and GCP infrastructure
  • Automatic provisioning and scaling of applications
  • Rolling-upgrades for applications
  • Pre-configured, auto-tuned, highly-available and resilient applications by design

Scale your notebooks with a click

Just create a Spark application, connect to it from Jupyter, and start crunching data at scale. Instantly scale clusters to match application requirements, without worrying about infrastructure.

Initialize Spark Context

from pyspark.sql import SparkSession

# Connect this notebook to the managed Spark cluster; the master URL is the
# address of the Spark application created in EdgeLake.
spark = SparkSession.builder \
    .master("spark://35.228.151.102:7077") \
    .getOrCreate()

For all Spark functions to be available, a Spark session (which wraps the Spark context) has to be initialized in the current notebook.
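
Once the session exists, transformations defined in the notebook run on the Spark workers rather than in the notebook kernel. Here is a minimal sketch; the bucket path and column names are assumptions for illustration.

# Load a dataset from object storage into a distributed DataFrame.
df = spark.read.csv("s3a://my-datalake/transactions/*.csv",
                    header=True, inferSchema=True)

# The aggregation is executed on the cluster; only the result comes back.
summary = (
    df.groupBy("country")
      .count()
      .orderBy("count", ascending=False)
)
summary.show(10)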

Convenient for data scientists and developers

Get instant access to an interactive, curated, pre-configured and extensible data science environment that offers you everything you need.

Try EdgeLake with your team for free

Get Early Access