Models in Natural Language Processing are fun to train but can be difficult to deploy. The size of the models, their libraries, and the files they require can be challenging, especially in a microservice environment: where services should be built as lightweight and slim as possible, large (language) models can cause a lot of problems. Using a recent real-world use case as an example, one that has been running in production for over a year in 10 different languages, I will walk you through my experiences with deploying NLP models. What kinds of pitfalls, shortcuts, and tricks come up when bringing an NLP model to production?

In this talk, you will learn about different ways to deploy NLP services. I will speak briefly about the path leading from data to a model and a running service (without going into much detail) before focusing on the MLOps part at the end. I will take you along on my past journey of struggles and successes, so that you don't need to take these detours yourselves.

Larissa Haas

Affiliation: sovanta AG

I am a Senior Data Scientist at sovanta AG in Heidelberg. With university degrees in Political Science and Data Science, I combine ethical and business perspectives on NLP projects. My latest projects dealt with combining Machine Learning approaches with SAP technologies. Besides that, I care about AI in Science Fiction, Bullet Journaling, and bringing Roundnet to the Olympic Games.

Visit the speaker at: GitHub | Homepage

Jonathan Brandt

Affiliation: sovanta AG

Hi :) I have been working as a Data Scientist for one year, mainly on natural language processing and time-series forecasting. Before that, I studied physics in Heidelberg.