All Posts

The selection of topic-specific texts from Wikipedia

For some machine learning tasks, you want to obtain texts on specific topics, for example as the basis for a DPR or RAG training dataset. But which texts can be used, especially for non-English languages? And how can they be filtered by topic?

Read more ...


Pandas Data Format and Compression

This is a systematic comparison of the most important pandas data formats (CSV, Parquet with the PyArrow backend, and Feather) and of different compression methods and compression levels. The comparison is based on the compression ratio and on the time it takes to save and load the data. Factors such as RAM usage are not considered.

Read more ...


The importance of chat templates

A long time ago, when GPT-3.5 (without turbo) was current, LLMs were simply trained to complete texts. With the release of GPT-3.5-turbo and ChatGPT came a small but essential change: the LLM could now not only complete texts but also “read and understand” multi-turn conversations. At the same time, there was also the option of using system prompts.
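To make the idea concrete: chat-tuned models consume a list of role-tagged messages, and a chat template turns that list into the flat prompt string the model was trained on. The template below is a deliberately simplified illustration; real models use their own special tokens and role markers, which is exactly why applying the correct template matters.

```python
# A multi-turn conversation with a system prompt, in the now-common
# role/content message format.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is a chat template?"},
    {"role": "assistant", "content": "It converts a message list into the model's prompt format."},
    {"role": "user", "content": "Why does it matter?"},
]

# Minimal, illustrative template (an assumption for this sketch; each model
# defines its own special tokens, so mixing templates degrades model output).
prompt = "".join(f"<|{m['role']}|>\n{m['content']}\n" for m in messages)
print(prompt)
```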

Read more ...


Pandas apply

I often use Pandas to process NLP data. In many cases I want to create a new column from the information in an existing column, for example the number of characters or tokens of a text.
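A minimal sketch of that pattern (the column names and the whitespace-split "token" count are illustrative assumptions, not the post's exact code):

```python
import pandas as pd

df = pd.DataFrame({"text": ["Hello world", "Pandas apply demo"]})

# New column via apply: number of characters per text
df["num_chars"] = df["text"].apply(len)

# Naive token count: apply a lambda that splits on whitespace
df["num_tokens"] = df["text"].apply(lambda t: len(t.split()))

print(df)
```

For simple element-wise operations like these, the vectorized `.str` accessor (e.g. `df["text"].str.len()`) is usually faster than `apply`; the post goes into such trade-offs.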

Read more ...


Options for Date Encoding

Some data, such as strings, must be encoded to be used in machine learning models. Here we explore the different options for encoding date fields.
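One of the options (shown here as a hedged sketch, not necessarily the encoding the article recommends) is to combine plain ordinal features with a cyclical sine/cosine encoding, so that December and January end up close together in feature space:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(
    {"date": pd.to_datetime(["2023-01-15", "2023-06-30", "2023-12-24"])}
)

# Plain ordinal features
df["year"] = df["date"].dt.year
df["month"] = df["date"].dt.month
df["day_of_week"] = df["date"].dt.dayofweek

# Cyclical encoding of the month: maps month 12 and month 1 to nearby points
df["month_sin"] = np.sin(2 * np.pi * df["month"] / 12)
df["month_cos"] = np.cos(2 * np.pi * df["month"] / 12)
```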

Read more ...


Python Installation and Package Management with conda and pip

This article is about installing Python and about package management. It is subjective and reflects my own opinion and experience. It is structured as a series of recommendations.

Read more ...


Anomalies in the MLSUM Dataset

While evaluating the ml6team/mt5-small-german-finetune-mlsum summarization model, my colleague Michal Harakal and I noticed that in many cases the model simply reproduces the first sentence of the input text instead of generating an independent summary of the whole text.

Read more ...


Clean German Wikipedia Text Corpus released

Today I published a new Wikipedia-based German text corpus. It is intended for NLP machine learning tasks.

Read more ...


LightGBM with Optuna: Demo released

This week I published a project that shows how to combine LightGBM and Optuna efficiently to train good models. It is intended to be reused as a template for new projects.

Read more ...


German colossal, cleaned Common Crawl corpus (GC4) released

Philipp Reißel (ambeRoad) and I published the largest German text corpus within the German NLP Group: the German colossal, cleaned Common Crawl corpus.

Read more ...


Training and Evaluation of our German Electra Language Model Talk

Together with Philipp Reißel from ambeRoad, I gave a talk about the training and evaluation of our open-source German Electra NLP language model.

Read more ...