WORKFLOWS AND CODE BLOCKS
Putting models into production is a top priority for every data team, but doing it easily, without needing sysops or a dedicated developer, is what data teams are really looking for. With Lentiq, teams can write portable, scalable, cross-environment code using our abstraction layer and reusable code blocks.
Have notebooks or applications that solve very specific problems? Link them together as part of a workflow and automate your insight-discovery process, from data ingestion to model development and model serving. Bring predictions closer to end applications and enable real-time decisions and actions that wow your users.
Our reusable code blocks can be built from container images or from published notebooks, then converted into executable tasks that declare dependencies and link together in a workflow.
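Lentiq's task API isn't shown in this document, so the following is only a minimal sketch of the idea described above: each code block becomes a task, tasks declare dependencies, and the workflow runs them in dependency order. The names `Task` and `run_workflow` are illustrative, not Lentiq's actual API.

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

class Task:
    """Hypothetical task: a named unit of work (a code block) plus its dependencies."""
    def __init__(self, name, action, depends_on=()):
        self.name = name
        self.action = action              # callable holding the code block's logic
        self.depends_on = tuple(depends_on)

def run_workflow(tasks):
    """Execute tasks in dependency order; returns the order that was used."""
    graph = {t.name: set(t.depends_on) for t in tasks}
    by_name = {t.name: t for t in tasks}
    order = list(TopologicalSorter(graph).static_order())
    for name in order:
        by_name[name].action()
    return order

# Example: ingest -> train -> serve, mirroring the pipeline described above.
log = []
tasks = [
    Task("serve", lambda: log.append("serve"), depends_on=["train"]),
    Task("train", lambda: log.append("train"), depends_on=["ingest"]),
    Task("ingest", lambda: log.append("ingest")),
]
run_workflow(tasks)  # log == ["ingest", "train", "serve"]
```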
Enable your team to collaborate more closely and move faster, without spending time adapting code to different environments or reinventing the wheel.
Reusable code blocks are shared across data pools and execute without adaptation across clouds. Whether your data is stored in S3 on AWS or in GCS on Google Cloud, our portability mechanisms act as an abstraction layer that lets the same code run anywhere.
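The shape such an abstraction layer can take is sketched below: dispatch on the storage URI scheme so code block logic never hard-codes a specific cloud's client. This is an assumption about the general pattern, not Lentiq's actual implementation.

```python
from urllib.parse import urlparse

# Hypothetical portability shim: the code block receives a URI and the layer
# resolves which cloud storage backend it belongs to.
def storage_backend(uri):
    scheme = urlparse(uri).scheme
    backends = {"s3": "AWS S3", "gs": "Google Cloud Storage"}
    if scheme not in backends:
        raise ValueError(f"unsupported storage scheme: {scheme!r}")
    return backends[scheme]

# The same code block works with either cloud's URI at run time:
storage_backend("s3://my-pool/raw/events.parquet")   # -> "AWS S3"
storage_backend("gs://my-pool/raw/events.parquet")   # -> "Google Cloud Storage"
```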
From notebooks to production code
Write scalable notebooks using Spark as the execution engine, and transform them into LambdaBooks that can be used as building blocks in the workflow designer.
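How a notebook stays scalable is not detailed here, but one common pattern (an assumption on our part, not a documented Lentiq requirement) is to keep transformation logic in pure functions, so the same code runs on a local list while iterating in a notebook and on a Spark RDD at scale.

```python
# Transformation logic kept as pure functions: testable locally, and the very
# same functions can be handed to Spark's map/filter when running at scale.
def parse_amount(line):
    user, amount = line.split(",")
    return user, float(amount)

def is_large(record):
    return record[1] > 100.0

lines = ["alice,250.0", "bob,40.0", "carol,130.5"]

# Local execution, e.g. while developing in a notebook:
local = list(filter(is_large, map(parse_amount, lines)))
# local == [("alice", 250.0), ("carol", 130.5)]

# The equivalent Spark execution (requires a running SparkContext `sc`):
#   sc.parallelize(lines).map(parse_amount).filter(is_large).collect()
```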
Define best practices to empower data teams
Create generic, portable, reusable code blocks and share them with the rest of the data lake to promote best practices, kickstart new projects, and increase overall collaboration and productivity.
Automate analytics and ML workflows
Combine LambdaBooks and CodeBlocks in a workflow to perform end-to-end data science and data analytics operations, and schedule them to run whenever you need.
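The workflow designer is a visual tool and no file format is documented here, but as a sketch only, a declarative definition combining the pieces above might look like the following. Every key name and the schedule syntax are assumptions for illustration, not Lentiq's actual format.

```yaml
# Hypothetical workflow definition; all names and keys are illustrative.
workflow: daily-churn-pipeline
schedule: "0 6 * * *"                    # run every day at 06:00
tasks:
  - name: ingest
    code_block: ingest-events            # reusable code block (container image)
  - name: train
    lambdabook: churn-model-notebook     # published notebook turned LambdaBook
    depends_on: [ingest]
  - name: serve
    code_block: deploy-model
    depends_on: [train]
```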