Updated 26.11.2025


Availability: not available
Industrial AI Consultant | Data Scientist | AI/ML Engineer | MLOps | GenAI | Predictive AI
Berlin, Germany
Worldwide
Ph.D. in Astrophysics

About me
My path from astrophysics to industrial AI makes me a distinctive problem solver. I translate complex, noisy big data into measurable business value by building pragmatic, robust end-to-end AI solutions, from strategic concept through to production MLOps pipelines.
Skills
APIs, Artificial Intelligence, Data Analysis, Microsoft Azure, Bash Shell, Big Data, Cluster Analysis, Databases, Continuous Integration, Information Engineering, ETL, DevOps, Experimentation, Forecasting, GitHub, Python, PostgreSQL, Machine Learning, MongoDB, NoSQL, NumPy, Performance Prediction, Preventive Maintenance, Predictive Modelling, Predictive Analytics, Product Information Management, SciPy, Transformers, Software Development, SQL, Time Series Analysis, Visual Studio Online, Supervised Learning, Testing, Chatbots, Large Language Models, Prompt Engineering, Generative AI, Jupyter, Git, FastAPI, Pandas, Matplotlib, Scikit-learn, Integration Testing, Kubernetes, Hugging Face, XGBoost, Dask, Plotly, Machine Learning Operations, API Design, Anomaly Detection, Docker, Unsupervised Learning
AI & Machine Learning
- Application areas:
- Predictive Maintenance (PdM) & Anomaly Detection
- Time-Series Forecasting & Statistical Modelling
- Process Optimization & Efficiency Gains
- Methods:
- Supervised Learning (Regression, Classification, Gradient Boosting)
- Unsupervised Learning (Clustering, Dimensionality Reduction)
- Explainable AI (XAI)
- Tools:
- Python, Scikit-learn, XGBoost, Pandas, NumPy, SHAP, SciPy
Generative AI & NLP
- Architectures & Concepts:
- Retrieval-Augmented Generation (RAG)
- Prompt Engineering & LLM-Integration
- Vector Search & Semantic Similarity
- Chatbots & AI Assistants
- Tools:
- LLM APIs (Groq, OpenAI, Mistral), Vector Databases (LanceDB), Sentence Transformers, LangChain, Streamlit, Hugging Face
MLOps & Data Engineering
- Concepts:
- End-to-End ETL Pipeline Architecture
- CI/CD für Machine Learning
- Modell-Deployment & -Serving
- Experiment Tracking & Monitoring
- Big Data Processing
- Tools:
- Docker, Kubernetes, GitHub Actions, MLflow, Seldon Core, NATS, Dask, SQL (PostgreSQL), NoSQL (MongoDB), Azure Cloud
Core Technologies & Software Development
- Languages & Frameworks:
- Python, SQL, Bash
- API Development (FastAPI, REST)
- Development & Tools:
- Git, Jupyter, VS Code
- Data Visualization (Matplotlib, Seaborn, Plotly)
- Testing (Unit & Integration Tests)
Languages
German (native), English (native), French (good), Spanish (good)
Project History
Project Goal
Advise a mobile app CEO by leading an R&D initiative to validate an AI-driven product vision. The goal was to define a clear, data-informed strategy and technical roadmap for using NLP to enhance user interaction, boost engagement, and de-risk a significant future investment.
Contributions
- Led a strategic feasibility study to evaluate the viability of using advanced NLP and sentiment analysis models to achieve key business objectives.
- Conducted in-depth analysis of anonymized user data to uncover actionable patterns, which directly informed the AI development and data acquisition strategy.
- Developed foundational data processing pipelines and machine learning baselines to establish clear, quantitative benchmarks for evaluating more complex AI solutions.
- Authored and presented a comprehensive AI roadmap to executive leadership, translating complex technical findings into a clear, phased, and actionable business plan.
- Delivered a data-driven proposal that clarified the technical path, de-risked the development process, and shaped the company's long-term AI feature strategy.
Tools
Python, Scikit-learn, Pandas, NumPy, NLTK, spaCy, Jupyter, Git, GitHub, SQL, Matplotlib, Seaborn, prompt engineering, LLM APIs, Sentiment Analysis
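The machine learning baselines mentioned above can be illustrated with a minimal sketch: a TF-IDF plus logistic regression sentiment classifier of the kind typically used as a quantitative benchmark before investing in more complex NLP models. The data and model choice here are invented for illustration, not taken from the actual project.

```python
# Illustrative sentiment-analysis baseline (hypothetical data and labels).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

def build_baseline() -> Pipeline:
    """TF-IDF + logistic regression: a cheap, strong reference model."""
    return Pipeline([
        ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=1)),
        ("clf", LogisticRegression(max_iter=1000)),
    ])

# Toy training data: 1 = positive, 0 = negative user feedback.
texts = ["love this app", "great update", "crashes constantly", "terrible ux"]
labels = [1, 1, 0, 0]

model = build_baseline()
model.fit(texts, labels)
print(model.predict(["really great app"])[0])
```

A baseline like this gives a hard number (e.g. held-out F1) that any proposed LLM-based approach must beat to justify its added cost.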
Project Goal
Enhance a core industrial SaaS product by architecting and deploying a next-gen, scalable anomaly detection system. The objective was to significantly boost diagnostic accuracy, ensure high reliability through robust MLOps, and increase end-user trust in the AI's predictions.
Contributions
- Led the complete development lifecycle, from designing and benchmarking a suite of advanced anomaly detection algorithms to their deployment in a live production environment.
- Architected and implemented a robust, scalable MLOps framework, fully automating the training, deployment, and continuous monitoring of machine learning models.
- Engineered a novel synthetic fault-injection system for rigorous, automated validation, guaranteeing the high reliability required for mission-critical industrial applications.
- Integrated model explainability techniques to deliver transparent, actionable insights into the root causes of anomalies, significantly increasing end-user trust and adoption.
- Optimized system performance and stability by resolving complex backend challenges related to concurrency, memory usage, and service timeouts under high load.
- Drove code quality initiatives, including enhancing CI/CD pipelines and leading a major upgrade of the core algorithmic codebase.
Tools
Python, Docker, Kubernetes, Scikit-learn, Seldon Core, SHAP, GitHub Actions, NATS, Pandas, NumPy, SQL, REST APIs, MLflow, Flask, Git, Bash, Azure, Grafana, PostgreSQL, Jupyter
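The synthetic fault-injection idea described above can be sketched as follows: train a detector on healthy sensor data, inject a known artificial fault, and check that the detector flags it. This is a simplified stand-in using IsolationForest on made-up signal values, not the production system's actual algorithm.

```python
# Sketch of synthetic fault injection for validating an anomaly detector
# (simplified, with invented sensor data).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Healthy sensor signal: Gaussian noise around a nominal operating point.
healthy = rng.normal(loc=50.0, scale=1.0, size=(500, 1))

# Inject a synthetic fault: a sustained offset in the last 20 samples,
# as a step change in a drifting sensor would appear.
faulty = healthy.copy()
faulty[-20:] += 15.0

detector = IsolationForest(contamination=0.05, random_state=0).fit(healthy)
flags = detector.predict(faulty)  # -1 = anomaly, +1 = normal

print(f"flagged {np.sum(flags[-20:] == -1)} of 20 injected faults")
```

Because the fault is injected deliberately, recall on the injected window becomes an automated, repeatable acceptance test rather than a manual spot check.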
Project Goal
Build an end-to-end, production-ready Generative AI assistant to transform a large, unstructured knowledge base into an interactive conversational resource. The aim was to enable users to get instant, accurate, & context-aware answers through a natural language interface.
Contributions
- Architected & implemented a complete, end-to-end Retrieval-Augmented Generation (RAG) pipeline, from initial data ingestion to the final user-facing application.
- Engineered a scalable data ingestion & processing system to handle large volumes of unstructured text, creating an optimized knowledge base using vector embeddings.
- Designed & implemented an advanced, multi-stage information retrieval system, combining semantic search with traditional methods and sophisticated reranking to maximize relevance.
- Developed a robust backend service to integrate large language models (LLMs) via APIs, employing advanced prompt engineering to ensure accurate, reliable, and context-grounded answer synthesis.
- Containerized the entire full-stack application & its dependencies for reproducible, scalable deployment, and managed the deployment process to a cloud platform.
- Established a comprehensive monitoring & evaluation framework to rigorously assess system performance and track user interactions for continuous improvement.
Tools
Python, Docker, Generative AI (RAG), LLM APIs, Vector Databases, Streamlit, MongoDB, Sentence Transformers, FastAPI, GitHub Actions, Pandas, NumPy, Scikit-learn, SQL, REST APIs, Bash
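The retrieval step at the heart of the RAG pipeline described above can be sketched in a few lines: embed the knowledge base, rank entries by similarity to the query, and ground the LLM prompt in the top hit. This sketch uses TF-IDF vectors in place of neural sentence embeddings to stay self-contained; the knowledge-base snippets are invented for illustration.

```python
# Minimal sketch of RAG retrieval: rank documents by cosine similarity,
# then ground the prompt in the best match (hypothetical knowledge base).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

knowledge_base = [
    "Refunds are processed within 5 business days.",
    "The API rate limit is 100 requests per minute.",
    "Support is available Monday to Friday, 9am to 5pm.",
]

vectorizer = TfidfVectorizer()
doc_vecs = vectorizer.fit_transform(knowledge_base)

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k knowledge-base entries most similar to the query."""
    q_vec = vectorizer.transform([query])
    # TF-IDF rows are L2-normalized, so the dot product is cosine similarity.
    scores = (doc_vecs @ q_vec.T).toarray().ravel()
    return [knowledge_base[i] for i in np.argsort(scores)[::-1][:k]]

question = "how fast are refunds?"
context = retrieve(question)[0]
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(context)
```

In the full pipeline this lexical ranking would be replaced or combined with semantic search over neural embeddings plus a reranking stage, and the assembled prompt sent to an LLM API.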