Updated 13.09.2025

**** ******** ****
20% available

Big Data Engineer

Frankfurt, Germany · Master in Computer Science

Profile attachments

Lebenslauf.pdf

Skills

📊 Data warehousing, ETL & data science
  • Data integration & ETL development: design and implementation of complex ETL processes (extract, transform, load)
  • Data warehousing (DWH): experience with Oracle Data Warehouse, MS SQL Server, Azure Synapse Analytics
  • Data modeling: relational modeling (ER, UML), NoSQL databases (Cassandra), partitioning concepts, query optimization
  • Data analysis & visualization: Power BI, Tableau, Seaborn
⚙️ Big Data technologies
  • Apache Spark (development, optimizations, on-premises Java applications)
  • Apache Hadoop , Apache Flink , Apache Kafka (streaming architectures, event processing)
  • Apache Airflow for workflow orchestration and data pipelines
  • Development and operation of scalable big data applications
💻 Programming & databases
  • Languages: Python, Java, Scala, PL/SQL, T-SQL
  • Query languages & standards: SQL, XQuery
  • Relational databases: Oracle, MS SQL Server
  • Streaming databases: Oracle Streaming
☁️ Cloud & Infrastructure
  • Microsoft Azure (incl. Azure Synapse, Azure Pipelines, Azure Data Services)
  • Operating systems: Windows & Linux
🛠️ Methods & tools
  • CI/CD pipelines (including Azure Pipelines)
  • Project management methods & agile tools: Jira, Scrum/Kanban
  • Error analysis, debugging & performance tuning in distributed systems
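As a hedged illustration of the ETL skills listed above: the sketch below shows the extract-transform-load pattern in minimal, self-contained Python (stdlib `csv` and `sqlite3` only). The table name, columns, and sample rows are invented for the example; the real projects used Oracle/MS SQL Server targets and Spark-based pipelines, not SQLite.

```python
# Minimal ETL sketch (illustrative only; table and data are hypothetical).
import csv
import io
import sqlite3

def extract(raw_csv: str) -> list[dict]:
    """Extract: parse raw CSV rows into dictionaries."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows: list[dict]) -> list[tuple]:
    """Transform: cast types, drop invalid rows, derive a VAT column."""
    out = []
    for r in rows:
        try:
            amount = float(r["amount"])
        except ValueError:
            continue  # drop rows with unparseable amounts
        out.append((r["customer"], amount, amount * 0.19))
    return out

def load(rows: list[tuple], conn: sqlite3.Connection) -> int:
    """Load: bulk-insert into the target table, return row count."""
    conn.execute("CREATE TABLE IF NOT EXISTS sales (customer TEXT, net REAL, vat REAL)")
    conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", rows)
    return conn.execute("SELECT COUNT(*) FROM sales").fetchone()[0]

raw = "customer,amount\nacme,100.0\nglobex,n/a\ninitech,50.5\n"
conn = sqlite3.connect(":memory:")
loaded = load(transform(extract(raw)), conn)
print(loaded)  # 2 valid rows loaded; the "n/a" row is filtered out
```

The same three-stage split (extract, transform, load as separate, testable functions) carries over directly to Spark or Azure Synapse jobs; only the I/O layer changes.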

Languages

German: business fluent · English: business fluent

Project history

Big Data Engineer

Timocom GmbH
  • Development with Apache Spark (Spark development, Spark optimizations, Java-based on-premises applications)
  • Setup and maintenance of CI/CD pipelines (incl. Azure Pipelines)
  • SQL development and database queries
  • Work with Apache Hadoop and big data ecosystems
  • Development and operation of ETL processes (extract, transform, load)
  • Design and implementation of big data applications
  • Orchestration and workflow management with Apache Airflow
  • Streaming architectures with Apache Kafka
  • Cloud computing with Microsoft Azure, incl. Azure Synapse Analytics for modern data warehouse solutions
  • Programming in Python, Java, and Scala
  • Performance tuning and optimization of large-scale data processing
  • Error analysis and debugging in distributed systems
  • Data modeling and pipeline architecture for scalable data solutions
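The orchestration work above used Apache Airflow. As a hedged sketch of the underlying pattern (a DAG of tasks executed in dependency order), here is a minimal pure-Python version using the stdlib's `graphlib`; the task names are invented for the example, and a real Airflow DAG would run Spark jobs rather than local functions.

```python
# Illustrative DAG orchestration sketch (task names are hypothetical).
from graphlib import TopologicalSorter

results = []

def run(task: str) -> None:
    results.append(task)  # stand-in for real work, e.g. submitting a Spark job

# Each task maps to the set of tasks it depends on.
dag = {
    "extract": set(),
    "transform": {"extract"},
    "quality_check": {"transform"},
    "load": {"quality_check"},
    "report": {"load"},
}

# static_order() yields every task after all of its predecessors.
for task in TopologicalSorter(dag).static_order():
    run(task)

print(results)
```

Airflow adds scheduling, retries, and monitoring on top of this same topological-execution idea.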

Big Data Engineer

Capgemini GmbH

>10,000 employees

  • Development with Apache Spark (Spark development, Spark optimizations)
  • Setting up and maintaining CI/CD pipelines
  • SQL development and database queries
  • Working with Apache Hadoop and Big Data ecosystems
  • Development of ETL processes (extract, transform, load)
  • Planning and operation of Big Data Applications
  • Streaming architectures with Apache Kafka
  • Programming in Python, Java, and Scala
  • Cloud computing (Microsoft Azure) including data-related services
  • Performance tuning and optimization of large-scale data processing
  • Error analysis and debugging in distributed systems
  • Data modeling and pipeline architecture for scalable data solutions

DWH Developer

ID1

Internet and information technology

50-250 employees

I am now part of the data warehouse development team, which has seven members in total. Our main customers are in the aviation industry: we provide them with key performance indicators computed from data supplied by the airlines. My main responsibilities are developing and deploying code that extracts insights from data analysis, as well as generating new ideas to improve our existing architecture.
