Updated 25.04.2025
100% available
Custom Software Development & Outsourcing
Vösendorf, Austria
Skills
Java, Azure, Cloud, Google Cloud Platform (GCP), Domain-Driven Design (DDD), DataOps, MLOps, Data Science, Prototyping, Angular, Spring Boot, Data Mesh, CIO, Hybrid Cloud, Quarkus, Kafka, Team Leadership, Sub-Project Lead
Cloud, DevOps, MS Azure, Terraform IaC, GCP, SAP S4/HANA Cloud, OData V2, technical debt, Spring Boot, Azure, GraphQL, SAP S4/HANA, Machine Learning, Python, Data Science, versioning, RabbitMQ, Kubernetes, Google Cloud Platform (GCP), Java 8, Java EE, REST API Design, Docker, Rundeck, Jenkins CI, Domain Driven Design, Kafka Streams, CQRS, Jira, Confluence, API, Agile methodologies, Scrum, XP, Interfaces, Web development, Java, Unix/Linux, RedHat, EPP, XML, SOAP, OSGI, Eclipse Equinox, Jenkins, SVN, Nexus Sonatype, JAX WS, CRM, Grafana, InfluxDB, Data Mesh
Languages
German (native), English (business fluent), Croatian (native)
Project history
Teaching Master's students in the lecture "DevOps for ML". The course covers the organizational principles and practical examples of how CI/CD pipelines are implemented in machine learning projects.
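A minimal sketch of what such a CI/CD quality gate for an ML project can look like. The toy model, the metric threshold, and the file-free hold-out data are illustrative assumptions, not material from the actual course:

    # Quality gate step a CI pipeline could run after training:
    # compute a metric on a hold-out set and fail the build below a threshold.
    import json
    import sys

    ACCURACY_THRESHOLD = 0.8  # hypothetical gate the pipeline must pass

    def evaluate(predictions, labels):
        """Return simple accuracy for two equally long lists."""
        correct = sum(p == y for p, y in zip(predictions, labels))
        return correct / len(labels)

    def main():
        # In a real pipeline these come from the training/validation step;
        # a tiny hard-coded hold-out set keeps the sketch self-contained.
        labels      = [0, 1, 1, 0, 1, 0, 1, 1, 0, 1]
        predictions = [0, 1, 1, 0, 1, 1, 1, 1, 0, 1]

        accuracy = evaluate(predictions, labels)
        print(json.dumps({"metric": "accuracy", "value": accuracy,
                          "threshold": ACCURACY_THRESHOLD}))

        # A non-zero exit code makes the CI job (Jenkins, GitLab CI, ...) fail,
        # which blocks the model from being promoted to the next stage.
        sys.exit(0 if accuracy >= ACCURACY_THRESHOLD else 1)

    if __name__ == "__main__":
        main()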
I'm responsible for the technical (and organizational) design of a new data platform. The goal is to go hybrid: the innovation and data teams work completely in the cloud, while parts of the legacy software still have to be operated on-prem. One of the main tasks was leading the client's architecture team towards the target architecture for adopting a data mesh.
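To make the "data as a product" idea behind that target architecture concrete, each domain team can publish a small, machine-readable descriptor for every data set it owns. The field names and example values below are hedged assumptions for illustration, not the client's actual contract format:

    # Sketch of a data-product descriptor a domain team might register in a catalog.
    from dataclasses import dataclass, field

    @dataclass
    class DataProduct:
        name: str                   # e.g. "shipments.delivery-times" (hypothetical)
        owner_domain: str           # domain team accountable for the product
        output_port: str            # where consumers read it (BigQuery table, Kafka topic, ...)
        schema_version: str         # consumers pin against an explicit schema version
        freshness_sla_minutes: int  # how stale the data may get before the SLA is breached
        tags: list = field(default_factory=list)

    # Example descriptor (all values invented for the sketch):
    delivery_times = DataProduct(
        name="shipments.delivery-times",
        owner_domain="transport",
        output_port="bigquery://analytics.shipments.delivery_times",
        schema_version="2.1",
        freshness_sla_minutes=60,
        tags=["logistics", "gold"],
    )
    print(delivery_times)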
While handing over the projects I had worked on as a Solution Architect, I formed and built a team from scratch that tackled the following topics, enabling the WALTER GROUP to jump right onto the "data-driven" path:
* Moving from a centralized, monolithic Data Lake to multiple domain-oriented data products organized as a Data Mesh, focusing on "Data as a Product"
* Integrating the ML Python components from the separate Data Science team into prototypes to gather quick feedback from real-life scenarios in production
* Working closely with the Data Science team on A/B testing, versioning of ML code, implementing feedback loops, and consuming data from various sources such as Kafka, RabbitMQ, and cloud services (a minimal consumer sketch follows after this list)
* Operating a hybrid Data Science development platform on Kubernetes on-prem and on the Google Cloud Platform (GCP)
* Implementing concepts and guidelines on how best to use BigQuery and the overall GCP ecosystem for the different needs of the organization
Team size in total was between 25 and 30.
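The consumer sketch referenced in the feedback-loop bullet above, assuming the kafka-python client and a hypothetical topic name and message layout (neither is taken from the actual project):

    # Closing an ML feedback loop: compare predictions with observed outcomes
    # streamed from Kafka and track a rolling online accuracy per model version.
    import json
    from kafka import KafkaConsumer  # pip install kafka-python

    consumer = KafkaConsumer(
        "ml.prediction-feedback",                     # hypothetical topic name
        bootstrap_servers="localhost:9092",
        group_id="feedback-loop-demo",
        value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
        auto_offset_reset="earliest",
    )

    hits, total = 0, 0
    for message in consumer:
        # Assumed message layout: {"prediction": ..., "actual": ..., "model_version": ...}
        event = message.value
        total += 1
        hits += int(event["prediction"] == event["actual"])
        if total % 100 == 0:
            # This rolling metric could feed Grafana/InfluxDB dashboards or
            # trigger retraining when it drops below a threshold.
            print(f"model {event.get('model_version')}: "
                  f"online accuracy {hits / total:.2%} over {total} events")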