EVENFLOW tools are designed to support your work, helping you build smarter, more transparent, and more resilient AI systems
- EVENFLOW Scalability Toolkit – Scalable AI infrastructure for high-volume data environments
The Scalability Toolkit is a set of four open-source components designed to enhance the scalability and efficiency of Machine Learning (ML) workflows. Built specifically for distributed, high-volume environments, it enables streamlined training and prediction. Its four integrated components are:
- Synopses-based Training Optimisation, which accelerates ML training using data summaries and Bayesian optimisation.
- Synopses Data Engine as a Service (SDEaaS), which provides efficient stream processing on Apache Flink and Dask for real-time data summarisation.
- Advanced Distributed-Parallel Training, which reduces training time and communication lag via data-driven synchronisation.
- SuBiTO, which integrates all components into one solution for production-grade ML pipelines.
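The core idea behind synopses-based training can be illustrated with a minimal sketch: instead of training on a full high-volume stream, the system maintains a compact, statistically representative summary (here, a simple reservoir sample) and trains on that. This is only an illustrative toy in pure Python, not the toolkit's implementation; the toolkit's actual synopses are built by SDEaaS on Apache Flink and Dask, and `reservoir_sample` is a hypothetical name, not part of its API.

```python
import random

def reservoir_sample(stream, k, seed=42):
    """Maintain a fixed-size uniform sample (a simple 'synopsis') of a stream.

    After n items, each item is present in the sample with probability k/n,
    so the summary stays representative no matter how long the stream grows.
    """
    rng = random.Random(seed)
    sample = []
    for n, item in enumerate(stream, start=1):
        if len(sample) < k:
            sample.append(item)          # fill the reservoir first
        else:
            j = rng.randrange(n)         # pick a slot in [0, n)
            if j < k:
                sample[j] = item         # keep new item with probability k/n
    return sample

# A model trained on the synopsis sees three orders of magnitude less data:
synopsis = reservoir_sample(range(1_000_000), k=1_000)
print(len(synopsis))  # 1000
```

In the real toolkit, the synopsis size and type are themselves tuned (via Bayesian optimisation) to trade training speed against accuracy; the fixed `k` above stands in for that tuning step.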
Who is it for?
- Data Scientists and ML Engineers who handle large datasets (or even continuous data streams)
- Enterprises deploying AI models at scale in production environments
- Research labs working on AI model efficiency and scalability
- DevOps teams that want to reduce infrastructure load during AI training
Why use it?
- Reduces training times without sacrificing model accuracy
- Improves resource efficiency and CPU usage
- Compatible with widely used platforms (Apache Flink, Kafka)
- The modular design enables using one or all components, depending on the user’s needs
How to access the tool?
The toolkit is fully open-source and publicly available:
🔗 EVENFLOW Scalability Toolkit on GitHub
Each component is accompanied by:
- Implementation documentation
- APIs and use examples
- Instructions for integrating with real-time data systems
Who is involved?
Developed and maintained by Athena Research Centre (ARC) within the EVENFLOW project framework.
- Reasoning-Based Forecast Interpreter – Explainable AI for trustworthy event forecasting
The Reasoning-Based Forecast Interpreter is a software component that allows specific rules to be added to AI models, enabling better predictions in event forecasting applications. It is built on a hybrid neuro-symbolic framework, which also lets users trace the logic behind an AI system’s decisions, especially in scenarios involving multiple time-dependent variables. The tool thus addresses the “black-box” nature of deep learning, making complex forecasts more understandable and, potentially, more trustworthy.
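To make the hybrid neuro-symbolic idea concrete, the sketch below combines a neural score with symbolic rules whose firing can be traced as an explanation. All names here (`forecast`, `RULES`, `neural_score`) are hypothetical and do not come from the component's actual API; the sketch only shows how rules can both adjust a prediction and serve as its human-readable justification.

```python
def neural_score(events):
    # Stand-in for a trained neural model: fraction of anomalies in the window.
    return sum(e == "anomaly" for e in events) / len(events)

RULES = [
    # (rule name, condition over the event window, score adjustment)
    ("three_consecutive_anomalies",
     lambda ev: any(ev[i:i + 3] == ["anomaly"] * 3 for i in range(len(ev) - 2)),
     +0.3),
    ("recent_reset",
     lambda ev: bool(ev) and ev[-1] == "reset",
     -0.5),
]

def forecast(events, threshold=0.5):
    """Combine the neural score with symbolic rules; return the decision
    together with the rules that fired, as a traceable explanation."""
    score = neural_score(events)
    fired = [name for name, cond, _ in RULES if cond(events)]
    score += sum(adj for name, cond, adj in RULES if cond(events))
    return {"alert": score >= threshold,
            "score": round(score, 2),
            "fired_rules": fired}

window = ["ok", "anomaly", "anomaly", "anomaly", "ok"]
print(forecast(window))
# → {'alert': True, 'score': 0.9, 'fired_rules': ['three_consecutive_anomalies']}
```

The `fired_rules` field is the point of the exercise: unlike a raw neural probability, it names the symbolic conditions that pushed the forecast over the threshold, which is what makes a misprediction debuggable.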
Who is it for?
- Data Scientists and AI developers who need to justify or debug forecasting models
- Research institutions focused on Explainable AI (XAI) research
- Business analysts who want to interpret model outputs for decision-making
- Compliance officers in regulated sectors (e.g., healthcare, finance) who require model explainability
Why use it?
- Brings clarity and transparency to neural model outputs
- Helps users analyse mispredictions and retrain models more effectively
- Promotes stakeholder trust in automated decision systems
How to access the tool?
The component is available as an open-source project on the EVENFLOW GitHub:
🔗 https://github.com/EVENFLOW-project-EU
Additional engagement includes:
- Conference presentations
- Academic publications
- Collaboration offers to research teams for model testing and feedback
Further development will continue via research proposals and an open-source community of developers and users focused on XAI.
Who is involved?
Developed and maintained by the National Centre for Scientific Research ‘Demokritos’ (NCSR) within the EVENFLOW project framework.
- EVENFLOW Verification Tool – Trustworthy AI forecasting starts with verification
The Verification Toolkit is an open-source component that helps users verify the robustness and reliability of complex neural models used in Complex Event Forecasting (CEF). The component implements novel methods for validating online neuro-symbolic learning, enabling AI systems to make stable predictions even when input data changes slightly or behaves unexpectedly. The toolkit thus addresses the challenges of high-dimensional, time-sensitive data streams typical of domains such as manufacturing, healthcare, energy, and finance.
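As a rough illustration of what "stable predictions even when input data changes slightly" means, the sketch below checks local robustness by sampling small perturbations of an input window and asking whether the model's decision flips. This is a hypothetical toy, not the toolkit's method: a verification tool proves such properties exhaustively, whereas sampling can only find counterexamples, never guarantee their absence.

```python
import random

def is_robust(model, x, epsilon=0.05, trials=100, seed=0):
    """Empirically probe local robustness: does the model's prediction stay
    unchanged under small random perturbations of the input window?"""
    rng = random.Random(seed)
    baseline = model(x)
    for _ in range(trials):
        perturbed = [v + rng.uniform(-epsilon, epsilon) for v in x]
        if model(perturbed) != baseline:
            return False  # found a perturbation that flips the decision
    return True

# A toy threshold model over a sensor window (hypothetical):
model = lambda window: sum(window) / len(window) > 1.0
print(is_robust(model, [1.4, 1.5, 1.6]))  # → True (far from the boundary)
```

Inputs whose mean sits close to the 1.0 threshold would fail this check, which is exactly the kind of fragility the verification toolkit is designed to detect before deployment.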
Who is it for?
- AI Engineers working on neural models for event prediction
- Verification experts involved in critical industries (e.g., predictive maintenance, fraud detection)
- Research teams and start-ups seeking advanced tooling to validate AI model performance
Why use it?
- Makes AI models more resilient to noise and input variations
- Reduces the risk of sudden model failures and promotes trust in automated forecasting systems
- Integrates easily into existing machine learning pipelines
How to access the tool?
The toolkit is available as an open-source project on the EVENFLOW GitHub:
🔗 https://github.com/EVENFLOW-project-EU
Users and contributors can explore:
- Documentation
- Use cases and examples
Who is involved?
Developed and maintained by Imperial College London (ICL) within the EVENFLOW project framework. The National Centre for Scientific Research ‘Demokritos’ (NCSR) is also involved.

