Updated 05.07.2023


100% available
IT Freelancer
Hameln, Lower Saxony, Germany
Diplom mathematician with a doctorate
Skills
Spark (Scala), Spark SQL, Spark Streaming, Python, Data Science, Machine Learning, AWS (Amazon Web Services), Scala / Spark, Scala Developer, Scala Development, Scala Functional Programming, Mathematics / IT Consulting, Deep Learning, NumPy, Keras, PySpark, AWS Kinesis
Industry experience: Online Advertising, Automotive, Social Networks, Retail, Finance (FX/MM)
Professional focus: development and implementation of software solutions in the "data" domain
Expertise:
- Data Science/Data Mining
- Machine Learning/Deep Learning/Natural Language Processing
- Software Design Patterns
- Programming paradigms:
- Object-oriented programming
- Functional programming
- Imperative programming
- Statistics & stochastics / numerical methods
- Web technologies
- Scala
- Python
- Kotlin
- SQL
- Bash
- MS SQL Server
- MySQL
- MongoDB
- Spark
- Hadoop
- Kafka
- Cassandra
- Git (as well as GitHub/Bitbucket)
- Confluence/Jira/Trello
- Docker
- EC2/EMR
- Kinesis
- Lambda
- S3
- Spark MLlib
- scikit-learn
- NumPy
- Pandas
- Keras
- PyTest
- Flask
- JUnit
- ScalaTest/ScalaMock
- Apache Airflow
- Apache Zeppelin
- Confluent Developer Training: Building Kafka Solutions
- EdX BerkeleyX – CS190.1x: Scalable Machine Learning
- EdX Introduction to Big Data with Apache Spark
- Coursera Functional Programming Principles in Scala
- Coursera R Programming
- Coursera Exploratory Data Analysis
- Coursera Developing Data Products
- Coursera Statistical Inference
- Coursera Pattern Discovery in Data Mining
- Coursera Cluster Analysis in Data Mining
- Coursera Text Mining and Analytics
- Coursera Applied AI with Deep Learning
- Coursera Getting Started with Google Kubernetes Engine
- Coursera Kotlin for Java Developers
- AWS AI Bootcamp, Munich, 07/2017 (1 day)
- Confluent Developer Training: Building Kafka Solutions, Dallas (TX), USA, 07/2017 (3 days)
- Mesosphere DC/OS Crash Course, Container Days 2016, 06/2016
- Spark training by Lightbend (Typesafe), 11/2015 (2 days)
- Advanced training by Databricks: Exploring Wikipedia with Spark, Spark Summit 2014 (1 day)
- Flink hands-on training, Flink Forward 2014 (half day)
Languages
German (native), English (business fluent), French (basic knowledge)
Project history
As a data engineer, I developed the replacement for existing clustering batch jobs for a software company belonging to a leading German multinational automotive manufacturer. The clustering uses location data and additional metadata from electric charging stations distributed worldwide.
As part of the company's team responsible for POI data management (POI=point of interest), I collaborated closely with the developers who previously worked on this task and utilized my expertise with Spark and Airflow to design and implement the new solution using Databricks on Azure.
- In collaboration with another team member, I refactored the existing Airflow pipeline for the batch jobs and its organically grown, complex business logic. We also refactored the library code used for the clustering and preprocessing steps. While doing so, I added missing documentation and improved code quality using defensive-programming techniques.
- Based on the acquired domain knowledge and the business requirements for the clustering, I built a new solution in the Azure cloud using Databricks. Tasks involved integrating existing data in AWS S3 and Azure Blob Storage, developing library code, and implementing a Spark job for geospatial clustering of the charging station data.
- While the team mainly works with Python, the new solution also uses the official open-source Scala library for graph clustering. Drawing on my experience with Scala and JVM-based development, I made this library callable from Python through a suitable wrapper class (see the sketch below the technologies list).
- At the beginning of the project, I also worked with the developers and testing team to eliminate bugs and increase the test coverage for an event-driven service. This Azure Functions-based service detects and removes certain privacy-related information within streams of vehicle signals. Redis is used to cache intermediary results detected in the event streams.
- Other tasks: Contribution to code reviews, PI planning, testing, and documentation.
Technologies: Databricks on Azure, Azure Blob Storage, AWS S3, Python/Scala, PySpark/Spark, GraphX, Pandas, Numpy, Jenkins, Azure Functions, Redis, Poetry/SBT, Airflow, Git, Bitbucket
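For illustration, a minimal sketch of such a wrapper class, assuming the Scala library is available on the cluster classpath (e.g. installed as a Databricks library). The fully qualified class name com.example.clustering.GraphClustering, its cluster method, and the parameter names are placeholders, not the actual library used in the project:

```python
from pyspark.sql import DataFrame, SparkSession


class GraphClusteringWrapper:
    """Thin Python wrapper around a JVM-side clustering entry point (hypothetical names)."""

    def __init__(self, spark: SparkSession):
        self._spark = spark
        # py4j handle to the Scala object; the fully qualified name is a placeholder.
        self._jcluster = spark._jvm.com.example.clustering.GraphClustering

    def cluster_stations(self, stations: DataFrame, max_distance_m: float) -> DataFrame:
        # Hand the underlying Java DataFrame to the Scala API and wrap the
        # returned Java DataFrame back into a Python DataFrame.
        jdf = self._jcluster.cluster(stations._jdf, float(max_distance_m))
        return DataFrame(jdf, self._spark)  # on Spark < 3.3 pass spark._wrapped instead
```

From the Python side, calling e.g. GraphClusteringWrapper(spark).cluster_stations(stations_df, 150.0) then behaves like any other DataFrame transformation.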
Data Engineer in the central team responsible for data and scores used by automated customer relationship management.
- Productionizing of machine learning models in the cloud for churn scoring, next best actions and the prediction of customer behavior.
- Collaboration with data scientists on the development of these models, and with another team responsible for recommendations, search, and APIs on the streaming portal.
- Building and planning of ETL pipelines for contract and usage data as well as for the computation of features consumed by machine learning models.
- Automation of data exports and of publishing scores consumed by other services and tools to the central event bus (see the sketch below the technologies list).
Technologies: Python, PySpark, AWS (S3, Kinesis, Athena, EMR, Glue), SQL, Pandas, Kubernetes, Terraform, Git, GitLab
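As an illustration of the last point, a minimal sketch of publishing computed scores to a Kinesis-based event bus with boto3; the stream name, region, and payload fields are assumptions for this example, not the client's actual setup:

```python
import json

import boto3

kinesis = boto3.client("kinesis", region_name="eu-central-1")


def publish_scores(scores, stream_name="crm-score-events"):
    """Send one event per customer score; the customer id serves as partition key."""
    records = [
        {
            "Data": json.dumps(score).encode("utf-8"),
            "PartitionKey": str(score["customer_id"]),
        }
        for score in scores
    ]
    # put_records accepts at most 500 records per call, so send in chunks.
    for i in range(0, len(records), 500):
        kinesis.put_records(StreamName=stream_name, Records=records[i : i + 500])
```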
Supported the client's data engineering team with the conceptual and architectural preparation for extracting data from an external API into their AWS-based big data lake and the Redshift data warehouse.
The API belongs to a SaaS platform used for campaign management and customer analytics and involves Natural Language Processing (NLP) topics such as sentiment analysis as well as phrase and keyword detection in customer text comments.
Activities included:
- Gathering requirements from project stakeholders and requirement analysis.
- Clarification of issues related to Natural Language Processing and the API design with a contact person from the SaaS platform and stakeholders.
- Extension of documentation in Confluence.
- Preparation of logical data model and conceptual design of the ETL processing pipeline.
- Preparation of a proof of concept for streaming data from a Kafka cluster to the S3 layer in the AWS cloud using the Databricks Delta Lake framework (see the sketch below the technologies list).
Technologies: Scala, Spark, AWS, Kafka, Delta Lake, SQL, Natural Language Processing, Redshift
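For illustration, a minimal PySpark sketch of such a Kafka-to-Delta proof of concept (the project itself used Scala); the broker address, topic name, bucket paths, and JSON schema are placeholders:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType

spark = SparkSession.builder.appName("kafka-to-delta-poc").getOrCreate()

# Assumed shape of the JSON payload delivered to the Kafka topic.
schema = StructType([
    StructField("comment_id", StringType()),
    StructField("text", StringType()),
    StructField("sentiment", StringType()),
])

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092")
    .option("subscribe", "customer-comments")
    .load()
)

# Parse the Kafka value column and flatten it into typed columns.
parsed = (
    raw.select(from_json(col("value").cast("string"), schema).alias("event"))
    .select("event.*")
)

# Continuously append parsed events to a Delta table in the S3 landing layer.
query = (
    parsed.writeStream.format("delta")
    .option("checkpointLocation", "s3://example-bucket/checkpoints/customer-comments")
    .outputMode("append")
    .start("s3://example-bucket/delta/customer-comments")
)
```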