Team Projects
Over the years, our team has successfully completed numerous challenging projects. These projects not only reflect our technical expertise but also demonstrate our ability to tackle complex tasks across various IT domains. Below is an overview of some of our standout projects.
Construction and Operation of Specialized Hadoop Clusters
Duration: Since 2014
Technologies: HDFS, YARN, Kudu, MapReduce, Impala, Hive, Hue, Spark, Kafka, Zookeeper, Kerberos, OpenLDAP, Puppet 2, Puppet 4/5, Cloudera Manager
In 2015, the team developed its own Puppet module for installing and managing Cloudera Hadoop clusters on Debian. This module was used to install and operate ~30 separate clusters, while an additional ~10 clusters were managed with the proprietary Cloudera Manager software. At its peak, the team operated more than 40 Hadoop clusters for ~15 different clients simultaneously.
Migration of a Complex Hadoop Environment
Duration: 2018 – 2019
Technologies: HDFS, YARN, MapReduce, Impala, Hive, Kafka, Zookeeper, Puppet 2, Puppet 4
Over the course of a year, the team took over a complex Hadoop infrastructure of more than 200 nodes from another team with little to no experience in setting up and operating such infrastructure, without affecting the environment’s availability. This required extensive reconfiguration and integration into the existing team landscape. Downtime was fundamentally unacceptable and, in individual cases, was limited to short, predefined maintenance windows.
Architecture, Construction, and Operation of a Consolidated Hadoop Cluster
Duration: Since 2019
Technologies: HDFS, YARN, MapReduce, Impala, Hive, Hue, Spark, Zookeeper, Kerberos, OpenLDAP, Docker, Puppet 5
Initial efforts by the team to consolidate the Hadoop landscape began in 2015. After extensive lobbying, the relevant stakeholders were convinced by 2018 to participate in a joint solution. From 2019, this solution took shape as part of a project, resulting in a high-performance Hadoop environment with 90 nodes. In parallel, existing Hadoop clusters were migrated and new use cases were established with minimal lead time. Resource and stakeholder management proved especially challenging.
Architecture, Construction, and Operation of a Permission Store
Duration: Since 2017
Technologies: Kafka, Kafka Connect, Kafka Streams, Kafka Mirror Maker, Kafka Schema Registry, Zookeeper, Cassandra, Redis, IPSec, Puppet 5, F5 Loadbalancer
Since 2017, in close collaboration with software development, the team has built a highly available, georedundant system for GDPR-compliant storage of user consents, replacing an existing but inadequate setup from 2013. Specifically, two similar but independent landscapes were set up to cover different application areas and to provide a uniform setup for external partners.
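To give a rough sense of the event flow in such a setup, the following sketch (Python, kafka-python) publishes a consent change to Kafka; the topic name, broker addresses, and field names are purely illustrative assumptions, not the schema actually used in the project.

    import json
    from datetime import datetime, timezone

    from kafka import KafkaProducer

    # Assumed broker addresses; the real landscape is georedundant and
    # split across two independent setups.
    producer = KafkaProducer(
        bootstrap_servers=["kafka-1:9092", "kafka-2:9092"],
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )

    # Hypothetical consent event; field names are assumptions.
    consent_event = {
        "user_id": "123456",
        "purpose": "personalized_advertising",
        "granted": True,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

    # Keying by user id keeps all events for one user in a single partition,
    # so downstream consumers (e.g. a Cassandra writer) see them in order.
    producer.send("user-consents", key=b"123456", value=consent_event)
    producer.flush()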
Architecture, Construction, and Operation of Central Logging Infrastructure
Duration: Since 2016
Technologies: Logstash, Redis, Kafka, Kafka Connect, Kafka Mirror Maker, Kafka Streams, Elasticsearch, Search Guard, Open Distro, Kerberos, OpenLDAP, Kibana, Puppet 5, F5 Loadbalancer
Following the paradigm that log data fundamentally falls into three categories – “syslog”, “application logs”, and “usage data” – the team’s central logging infrastructure consists of two systems: the ApplogStore and the SyslogStore. Both receive data from all ~15,000 servers (plus thousands of containers in the Kubernetes infrastructure), process it according to predefined rules, and make it available to users for purposes ranging from troubleshooting and quality assurance to IT security analysis. Both systems are highly available, georedundant, and, importantly, decoupled from the source systems to avoid any backpressure on them.
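As a rough illustration of this decoupling, the sketch below shows one way an indexer could sit between Kafka and Elasticsearch: the servers only ever write to Kafka, and indexing happens asynchronously, so a slow or unavailable search cluster cannot back up onto the sources. Topic, consumer group, index, and host names are assumptions, not the project’s actual configuration.

    import json

    from elasticsearch import Elasticsearch, helpers
    from kafka import KafkaConsumer

    consumer = KafkaConsumer(
        "applog",                                # assumed topic name
        bootstrap_servers=["kafka-1:9092"],      # assumed broker
        group_id="applogstore-indexer",          # assumed consumer group
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
        enable_auto_commit=False,
    )
    es = Elasticsearch("https://es-1:9200")      # assumed Elasticsearch node

    while True:
        # Pull a batch of log records from Kafka.
        batch = consumer.poll(timeout_ms=1000, max_records=500)
        actions = [
            {"_index": "applog", "_source": record.value}
            for records in batch.values()
            for record in records
        ]
        if actions:
            # Bulk-index the batch, then commit offsets only after indexing
            # succeeded, so nothing is lost if Elasticsearch is briefly down.
            helpers.bulk(es, actions)
            consumer.commit()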
Construction and Operation of a Tracking / Targeting / Profiling Environment
Duration: Since 2014
Technologies: Apache Webserver, Tomcat, Kafka, Cassandra, Redis, Puppet 2, Puppet 4/5, F5 Loadbalancer
The team operates the tracking, targeting, and profiling infrastructure for delivering personalized advertising to end users; it forms the basis for all other products. This infrastructure is also offered as a service to other corporate divisions.
Construction and Operation of a Company-Wide Communication Channel
Duration: 2015 – 2016
Technologies: ejabberd, Puppet 4, F5 Loadbalancer
In the course of renewing existing communication channels, the team provided a functional, highly available communication platform for the entire company within a short time. In 2016, the system was handed over to the team responsible for company-wide operation of such infrastructure in order to reduce the operational load on the team’s own staff.
Construction and Operation of a GPU-Based Machine Learning Environment
Duration: Since 2019
Technologies: Docker, cuML, JupyterHub, Jupyter Notebook
To accelerate machine-learning-based product initiatives, the team has provided its data science colleagues with several multi-GPU systems since 2019, on which complex computations are performed using Jupyter, among other tools.
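For a sense of the kind of GPU-accelerated workload these systems are built for, here is a minimal cuML sketch in the familiar scikit-learn style; the dataset and parameters are arbitrary and purely illustrative, not one of the team’s actual models.

    from cuml.cluster import KMeans
    from cuml.datasets import make_blobs

    # Generate synthetic data directly on the GPU.
    X, _ = make_blobs(n_samples=1_000_000, n_features=32, centers=8, random_state=42)

    # Fit k-means entirely on the GPU and assign cluster labels.
    model = KMeans(n_clusters=8, random_state=42)
    labels = model.fit_predict(X)

    print(labels[:10])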
Outsourcing of Non-Core Team Products & Technologies
Duration: Since 2016
Technologies: Adition Adserver Landscape, Typo3, Inxmail
Since 2016, the team has strived to offer the best possible service to our internal customers by outsourcing products and technologies that do not fit the team’s portfolio, thus reducing complexity within the team.