Jupyter as a Service on Lentiq

We speak Python. Our Jupyter Notebooks speak Data Science at scale. You speak collaboration.

Data science teams will love the flexibility of Jupyter Notebook offered as a Service on Lentiq. They can write code in Python, scale as they go, and share code and notebooks with other team members. This increases collaboration and shortens the time to production. Great results come from great collaboration.

Get in touch

Data Science as it should be:
efficient & collaborative

Jupyter as a Service running on Lentiq gives you the freedom to share your projects with other team members and increase collaboration and efficiency.

Working in Jupyter is the ultimate way to train models, but what if you could share your discoveries with the team, get feedback, ideas and more insights, and improve? It's not an unrealistic expectation. It's a service on Lentiq: shareable, portable code and notebooks.

  • Bundle code, data and results and easily share them with your team
  • Publish and share embedded models and notebooks with other team members
  • Test new hypotheses by improving an already existing, trained model (see the sketch after this list)
  • Easily use curated datasets and notebooks that are documented and explainable
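
As a minimal sketch of that improve-and-republish workflow, assuming the shared model is a scikit-learn estimator saved with joblib (the paths, file names and column names below are hypothetical):

    import joblib
    import pandas as pd

    # Load a model a teammate published alongside their notebook
    # (path and file name are hypothetical).
    model = joblib.load("shared/customer_behavior_model.joblib")

    # Score it against fresh data before extending it.
    df = pd.read_csv("shared/churn_sample.csv")
    X, y = df.drop(columns=["churned"]), df["churned"]
    print("Baseline accuracy:", model.score(X, y))

    # Refit on the new data to test a hypothesis, then republish for the team.
    model.fit(X, y)
    joblib.dump(model, "shared/customer_behavior_model_v2.joblib")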

1. Kelly from the marketing data team shares a customer behavior analysis model and the associated data.

2. Jim, looking for something similar for his customer churn prediction use case, reviews the work, runs the experiment, spots an unintended error and flags it to Kelly.

3. Kelly corrects the error, improves the model and republishes it, and Jim extends it for his own use case.

Scale your Data Science and AI workloads

Data experts are part of data teams for a reason: they handle data. We handle the backstage. It's automated, anyway.

Scale and distribute
machine learning

Accelerate model training and use distributed computing that matches the size of the dataset and the complexity of the task.

  • Scale the resources allocated to each Jupyter Notebook instance
  • Offload resource-intensive jobs to a managed Spark cluster
  • Automatically scale the Spark cluster as needed, through both the UI and the API
  • Switch to distributed data frame implementations such as Dask, Ray or Spark (a sketch follows below)
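
For example, moving a pandas-style workflow onto Dask is largely a change of import and read call; a minimal sketch (the bucket path and column names are hypothetical):

    import dask.dataframe as dd

    # Read a dataset too large for one machine into a partitioned DataFrame;
    # the familiar pandas-style API now runs across the cluster's workers.
    ddf = dd.read_csv("s3://example-bucket/events-*.csv")

    # Build the aggregation lazily, then trigger distributed execution.
    daily_counts = ddf.groupby("event_date")["user_id"].count()
    print(daily_counts.compute())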

    No infrastructure worries.
    We handle the hard part.

    Channel your energy into the science part of data rather than into what happens under the hood, with out-of-the-box, automated and integrated infrastructure management.

    • Fully managed AWS and GCP infrastructure
    • Automatic provisioning and scaling of applications
    • Rolling upgrades for applications
    • Pre-configured, auto-tuned, highly available and resilient applications by design

    Scale your notebooks with a click

    Just create a Spark application, connect to it from Jupyter and start crunching data at scale. Scale clusters instantly based on application requirements and don't worry about the infrastructure.

    Initialize Spark Context

    from pyspark.sql import SparkSession

    # Connect this notebook to the managed Spark cluster by its master URL.
    spark = SparkSession.builder \
        .master("spark://35.228.151.102:7077") \
        .getOrCreate()

    For all Spark functions to be available, a Spark session has to be initialized in the current notebook.
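
    Once the session is created, standard PySpark calls are planned in the notebook but executed on the cluster; a minimal usage sketch (the dataset path and column names are hypothetical):

    # Read a dataset from shared storage (path is hypothetical).
    df = spark.read.csv("data/transactions.csv", header=True, inferSchema=True)

    # The aggregation runs on the Spark cluster, not in the notebook kernel.
    df.groupBy("customer_id").count().orderBy("count", ascending=False).show(10)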

    Convenient for data scientists and developers

    Get instant access to an interactive, curated, pre-configured and extensible data science environment that offers you everything you need.

    Do you want to try Lentiq with your team?

    Get in touch