Jupyter as a Service On Lentiq EdgeLake
Collaborate on your experiments using a reproducible, scalable and flexible Jupyter environment. Deliver robust results and get insights quickly.
Jupyter as a Service running on Lentiq EdgeLake gives you the freedom to share your projects with other team members and increase collaboration and efficiency.
Kelly from the marketing data team shares a customer behavior analysis model and associated data.
Jim, looking for something similar for his customer churn prediction use case, analyzes the work, runs the experiment, spots an unintended error and flags it to Kelly.
Kelly corrects the error, improves the model and republishes it, and Jim extends it for his own use case.
Accelerate model training with distributed computing that matches the size of the dataset and the complexity of the task.
Channel your energy into the science part of data rather than the ops, with out-of-the-box, automated infrastructure management.
Just create a Spark application, connect to it from Jupyter and start crunching data at scale. Instantly scale clusters based on application requirements and don't worry about infrastructure.
Initialize Spark Context
from pyspark.sql import SparkSession

spark = SparkSession.builder \
    .master("spark://188.8.131.52:7077") \
    .getOrCreate()
For all Spark functions to be available, a Spark session has to be initialized in the current notebook.
Get instant access to an interactive, curated, pre-configured and extensible data science environment that offers you everything you need.