To receive news from us, please register for the mailing list. You will receive information about our upcoming seminars and the link to participate. Our seminars take place on Mondays, 16.30-17.30 CET, and we typically send the link on Monday morning.
EURO – The Association of European Operational Research Societies – has a new instrument: the EURO Online Seminar Series (EURO OSS). This is the page of the EURO OSS on Operational Research and Machine Learning; its website is https://euroorml.euro-online.org/, and you can register for the mailing list at https://forms.gle/YWLb6EPKRQnQert68.
The EURO OSS on Operational Research and Machine Learning is an online seminar series whose goal is to highlight the role of Operational Research in Artificial Intelligence. The format is a weekly session that takes place every Monday, 16.30-17.30 (CET). Access is 100% online, and the series features leading speakers from Operational Research, as well as neighbouring areas, who cover important topics such as explainability, fairness, fraud, and privacy. We also have the YOUNG EURO OSS on Operational Research and Machine Learning; in each YOUNG session, three junior academics present their latest results in this burgeoning area.
The EURO OSS on Operational Research and Machine Learning is organized by Emilio Carrizosa (IMUS – Instituto de Matemáticas de la Universidad de Sevilla) and Dolores Romero Morales (CBS – Copenhagen Business School), with the collaboration of PhD students Nuria Gómez-Vargas (IMUS) and Thomas Halskov (CBS). The Online Seminar Series is free of charge thanks to the support of EURO, Universidad de Sevilla (US), and Copenhagen Business School (CBS), which is gratefully acknowledged.
For the academic year 2024/25, we have confirmed the participation of the following speakers (in alphabetical order):
YOUNG EURO OSS on Operational Research and Machine Learning speakers
The EURO OSS on Operational Research and Machine Learning is the sequel to a weekly online event that we ran from January 2021 to May 2024, https://congreso.us.es/mlneedsmo/. We hosted more than 100 speakers, mainly from Europe and North America, and more than 2,000 colleagues registered to receive updates. Some of our speakers drew more than 200 attendees, and quite a few colleagues instead watch the videos afterwards on our YouTube channel. This channel has more than 1,000 subscribers, and some of the talks in our Online Seminar Series have more than 1,000 views so far.
Using AI for Fraud Detection: Recent Research Insights and Emerging Opportunities
To receive the link to attend the seminar, please register for the mailing list here.
Abstract:
Typically, organizations lose around five percent of their revenue to fraud. In this presentation, we explore advanced AI techniques to address this issue. Drawing on our recent research, we begin by examining cost-sensitive fraud detection methods, such as CS-Logit, which integrates the economic imbalances inherent in fraud detection into the optimization of AI models. We then move on to data engineering strategies that enhance the predictive capabilities of both the data and the AI models through intelligent instance and feature engineering. We also delve into network data, showcasing innovative research methods such as Gotcha and CATCHM for effective data featurization. A significant focus is placed on Explainable AI (XAI), which demystifies the high-performance AI models used in fraud detection, aiding in the development of effective fraud prevention strategies. We provide practical examples from various sectors, including credit card fraud, anti-money laundering, insurance fraud, tax evasion, and payment transaction fraud. Furthermore, we discuss the overarching issue of model risk, which encompasses everything from data input to AI model deployment. Throughout the presentation, the speaker will discuss his recent research, conducted in partnership with leading global financial institutions such as BNP Paribas Fortis, Allianz, ING, and Ageas.
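To give a flavour of the cost-sensitive idea mentioned above, here is a minimal sketch of weighting each transaction's loss by its misclassification cost during model fitting. It is a generic weighted logistic regression, not the CS-Logit method itself; the features, cost values, and fraud rate are invented for illustration.

```python
# Illustrative sketch of cost-sensitive fraud classification: each transaction's
# loss is weighted by the cost of misclassifying it (the amount for a missed
# fraud, a fixed investigation fee for a false alarm).  Not the CS-Logit method.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 4))                          # synthetic transaction features
amount = rng.lognormal(mean=4.0, sigma=1.0, size=n)  # synthetic transaction amounts
y = (rng.random(n) < 0.02).astype(int)               # ~2% fraud, heavily imbalanced

# Instance-dependent costs: missing a fraud costs its amount,
# flagging a legitimate transaction costs a fixed investigation fee.
investigation_cost = 10.0
sample_weight = np.where(y == 1, amount, investigation_cost)

model = LogisticRegression(max_iter=1000)
model.fit(X, y, sample_weight=sample_weight)

# Rank transactions by predicted fraud probability for investigation.
scores = model.predict_proba(X)[:, 1]
print("top-10 highest-risk transactions:", np.argsort(scores)[-10:])
```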
Verifying message-passing neural networks via topology-based bounds tightening
To receive the link to attend the seminar, please register for the mailing list here.
Speaker: Prof Ruth Misener
BASF / Royal Academy of Engineering Research Chair in Data Driven Optimisation
Department of Computing
Imperial College London
Abstract:
Since graph neural networks (GNNs) are often vulnerable to attack, we need to know when we can trust them. We develop a computationally effective approach towards providing robust certificates for message-passing neural networks (MPNNs) using a Rectified Linear Unit (ReLU) activation function. Because our work builds on mixed-integer optimization, it encodes a wide variety of subproblems; for example, it admits (i) both adding and removing edges, (ii) both global and local budgets, and (iii) both topological perturbations and feature modifications. Our key technology, topology-based bounds tightening, uses graph structure to tighten bounds. We also experiment with aggressive bounds tightening to dynamically change the optimization constraints by tightening variable bounds. To demonstrate the effectiveness of these strategies, we implement an extension to the open-source branch-and-cut solver SCIP. We test on both node and graph classification problems and consider topological attacks that both add and remove edges.
This work is joint with Christopher Hojny, Shiqiang Zhang, and Juan Campos.
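To illustrate why variable bounds matter in such a mixed-integer encoding, the sketch below propagates interval bounds through one ReLU message-passing layer; the resulting pre-activation bounds are exactly the constants a big-M MIP encoding of the ReLU would use, so tighter bounds give a tighter model. This is a generic interval-arithmetic toy on an invented graph, not the speakers' SCIP extension or their topology-based procedure.

```python
# Minimal sketch: interval bound propagation through one ReLU message-passing
# layer.  Tighter pre-activation bounds directly tighten a big-M MIP encoding.
import numpy as np

def mp_layer_bounds(lb, ub, A, W, b):
    """Bounds on ReLU(A @ X @ W + b) given elementwise bounds lb <= X <= ub.

    A : (n, n) nonnegative aggregation matrix, W : (d_in, d_out), b : (d_out,).
    """
    # Aggregate neighbour bounds (A >= 0, so bounds propagate monotonically).
    agg_lb, agg_ub = A @ lb, A @ ub
    # Split weights into positive and negative parts for interval arithmetic.
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    pre_lb = agg_lb @ W_pos + agg_ub @ W_neg + b
    pre_ub = agg_ub @ W_pos + agg_lb @ W_neg + b
    # ReLU output bounds follow from clipping the pre-activation bounds at zero.
    return np.maximum(pre_lb, 0.0), np.maximum(pre_ub, 0.0), pre_lb, pre_ub

# Toy graph: 3 nodes, 2 input features, 2 hidden units, feature perturbation eps.
A = np.array([[1., 1., 0.], [1., 1., 1.], [0., 1., 1.]])   # adjacency with self-loops
W = np.array([[0.5, -0.3], [0.2, 0.4]])
b = np.zeros(2)
X = np.array([[1., 0.], [0., 1.], [1., 1.]])
eps = 0.1
out_lb, out_ub, pre_lb, pre_ub = mp_layer_bounds(X - eps, X + eps, A, W, b)
print("pre-activation bounds (big-M candidates):", pre_lb, pre_ub, sep="\n")
```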
First YOUNG Session
To receive the link to attend the seminar, please register for the mailing list here.
K-anonymous counterfactual explanations
Speaker 1: Sofie Goethals
Postdoctoral researcher at Columbia Business School, USA
University of Antwerp, Belgium
Abstract:
Black-box machine learning models are used in an increasing number of high-stakes domains, and this creates a growing need for Explainable AI (XAI). However, the use of XAI in machine learning introduces privacy risks, which currently remain largely unnoticed. We therefore explore the possibility of an explanation linkage attack, which can occur when deploying instance-based strategies to find counterfactual explanations. To counter such an attack, we propose k-anonymous counterfactual explanations and introduce pureness as a metric to evaluate their validity. Our results show that making the explanations, rather than the whole dataset, k-anonymous is beneficial for explanation quality.
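As a rough illustration of the k-anonymity notion invoked in the abstract, the sketch below checks whether a candidate counterfactual instance shares its quasi-identifier values with at least k records of a reference dataset. The data, column names, and threshold are invented, and this is not the speaker's algorithm or her pureness metric.

```python
# Toy check of k-anonymity with respect to quasi-identifiers: is the instance
# indistinguishable from at least k records in the reference data?
import pandas as pd

data = pd.DataFrame({
    "age_group":  ["30-40", "30-40", "30-40", "40-50", "40-50"],
    "zip_prefix": ["100",   "100",   "100",   "200",   "200"],
    "income":     [52_000,  48_000,  51_000,  60_000,  61_000],
})
quasi_identifiers = ["age_group", "zip_prefix"]

def is_k_anonymous(instance, df, qi, k):
    """True if at least k rows of df match the instance on all quasi-identifiers."""
    mask = pd.Series(True, index=df.index)
    for col in qi:
        mask &= df[col] == instance[col]
    return int(mask.sum()) >= k

# A candidate counterfactual explanation for a rejected applicant.
counterfactual = {"age_group": "30-40", "zip_prefix": "100", "income": 55_000}
print(is_k_anonymous(counterfactual, data, quasi_identifiers, k=3))   # True
```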
Neur2BiLO: Neural Bilevel Optimization
Speaker 2: Esther Julien
PhD Student at TU Delft, The Netherlands
Abstract:
Bilevel optimization deals with nested problems in which a leader takes the first decision to minimize their objective function while accounting for a follower’s best-response reaction. Constrained bilevel problems with integer variables are particularly notorious for their hardness. While exact solvers have been proposed for mixed-integer linear bilevel optimization, they tend to scale poorly with problem size and are hard to generalize to the non-linear case. On the other hand, problem-specific algorithms (exact and heuristic) are limited in scope. Under a data-driven setting in which similar instances of a bilevel problem are solved routinely, our proposed framework, Neur2BiLO, embeds a neural network approximation of the leader’s or follower’s value function, trained via supervised regression, into an easy-to-solve mixed-integer program. Neur2BiLO serves as a heuristic that produces high-quality solutions extremely fast for linear/non-linear, integer/mixed-integer bilevel problems.
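The core modelling step in the abstract, embedding a trained ReLU network into a mixed-integer program, can be illustrated with the standard big-M encoding of a single neuron. The sketch below uses the PuLP modelling library with invented weights, bounds, and objective; it is a generic encoding sketch, not the Neur2BiLO implementation.

```python
# Minimal sketch: big-M MIP encoding of one trained ReLU neuron
# h = max(0, w.x + b), as used when embedding a value-function network in a MIP.
from pulp import LpProblem, LpVariable, LpMinimize, LpBinary, lpSum, value

w, b = [1.0, -2.0], 0.5          # "trained" weights of one hidden neuron
L, U = -10.0, 10.0               # valid bounds on the pre-activation w.x + b

prob = LpProblem("relu_embedding", LpMinimize)
x = [LpVariable(f"x{i}", lowBound=0, upBound=3) for i in range(2)]
h = LpVariable("h", lowBound=0)                  # ReLU output
z = LpVariable("z", cat=LpBinary)                # 1 if the neuron is active

pre = lpSum(w[i] * x[i] for i in range(2)) + b
prob += h >= pre                                  # h >= pre-activation
prob += h <= pre - L * (1 - z)                    # inactive case allows slack L
prob += h <= U * z                                # h forced to 0 when z = 0
# Toy upper-level objective using the embedded neuron as a value-function proxy.
prob += x[0] + x[1] + 5 * h

prob.solve()
print("x =", [value(v) for v in x], " h =", value(h))
```

Tighter values of L and U make the encoding stronger, which is why such bounds matter when a larger network is embedded.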
TBA
Speaker 3: Kimberly Yu
Abstract:
TBA
Need to relax - but perhaps later? Reflections on modeling sparsity and mixed-binary nonconvex problems
To receive the link to attend the seminar, please register for the mailing list here.
Abstract:
In some ML communities, researchers claim that obtaining solutions that merely satisfy local optimality criteria is often sufficient to provide a meaningful and accurate data model in real-world analytics. However, this is simply incorrect and sometimes dangerously misleading, particularly for highly structured problems involving non-convexity such as discrete decisions (binary variables). This talk will advocate the necessity of research efforts in the quest for global solutions and strong rigorous bounds for quality guarantees, showcased on one of today's most popular domains: cardinality-constrained models. These models aim to achieve fairness, transparency and explainability in AI applications, ranging from mathematical finance and economics to the social and life sciences.
From a computational viewpoint, it may be tempting to replace the zero-norm (the number of nonzero variables) with surrogates, for the benefit of tractability. We argue that these relaxations come too early. Instead, we propose to incorporate the true zero-norm into the base model and treat it either by MILP relaxations or by lifting to tractable conic optimization models. Both in practice and in theory, these have proved to achieve much stronger bounds than the usual LP-based ones, and therefore they can assess, more reliably and based upon exact arguments, the quality of proposals coming from other techniques. With some effort invested in the theory (that is, relaxing later), the resulting models remain scalable and guarantee computational performance closer to reality and/or optimality.
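For readers unfamiliar with modelling the true zero-norm, here is a minimal PuLP sketch of the textbook big-M formulation: each continuous variable is linked to a binary indicator and at most k indicators may be switched on. The scores, bound M, and budget constraint are invented, and the conic liftings discussed in the talk are not shown.

```python
# Minimal sketch: ||x||_0 <= k via  0 <= x_j <= M z_j,  sum_j z_j <= k,  z_j binary.
from pulp import LpProblem, LpVariable, LpMaximize, LpBinary, lpSum, value

scores = [0.12, 0.07, 0.10, 0.03, 0.09]   # illustrative per-variable scores
M, k = 1.0, 2                              # upper bound on variables, sparsity level
n = len(scores)

prob = LpProblem("cardinality_constrained", LpMaximize)
x = [LpVariable(f"x{j}", lowBound=0, upBound=M) for j in range(n)]
z = [LpVariable(f"z{j}", cat=LpBinary) for j in range(n)]

for j in range(n):
    prob += x[j] <= M * z[j]               # x_j can be nonzero only if z_j = 1
prob += lpSum(z) <= k                       # at most k nonzero entries
prob += lpSum(x) <= 1.0                     # illustrative budget constraint
prob += lpSum(scores[j] * x[j] for j in range(n))   # objective

prob.solve()
print("x =", [value(v) for v in x])
```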
Large scale Minimum Sum of Squares Clustering with optimality guarantees
To receive the link to attend the seminar, please register for the mailing list here.
Abstract:
Minimum Sum of Squares Clustering (MSSC) is one of the most important problems in unsupervised learning and aims to partition n observations into k clusters so as to minimize the sum of squared distances from the points to the centroid of their cluster. The MSSC arises in a wide range of disciplines and applications, and since it is an unsupervised task, it is vital to certify the quality of the produced solution. Recently, exact approaches able to solve small- and medium-size instances have been proposed, but they become impractical as the dimension of the instance grows. In this work, we exploit the ability to solve small instances to certify the quality of heuristic solutions for large-scale datasets. The fundamental observation is that the minimum MSSC value of the union of disjoint subsets is greater than or equal to the sum of the minimum MSSC values of the individual subsets. This implies that summing up the optimal MSSC values on disjoint subsets of a dataset provides a lower bound on the optimal within-cluster sum of squares (WSS) for the entire dataset. The quality of the lower bound depends strongly on how the dataset is partitioned into disjoint subsets. To improve this lower bound, we aim to maximize the minimum WSS of each subset by creating subsets of points with high dissimilarity. This approach, known in the literature as anticlustering or the maximally diverse grouping problem, seeks to form highly heterogeneous partitions of a given dataset. By approximately solving an anticlustering problem, we develop a certification process to validate clustering solutions obtained with a heuristic algorithm. We test our method on large-scale instances with datasets ranging from 2,000 to 10,000 data points and containing 2 to 500 features. Our procedure consistently achieves a gap between the clustering solution and the lower bound ranging from 1% to 5%.
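The bounding idea can be sketched in a few lines: split the data into disjoint subsets, sum their MSSC values, and compare against the value of a heuristic solution on the full dataset. In the sketch below, scikit-learn's k-means stands in for an exact MSSC solver (so the "bound" is only indicative; the actual certification requires each subset to be solved to optimality), the data is synthetic, and random subsets are used instead of anticlustering.

```python
# Illustrative sketch of the subset-based lower bound for MSSC certification.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
k, n_subsets = 10, 4

def mssc_value(points, k):
    """(Heuristic) minimum sum of squared distances to cluster centroids."""
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit(points).inertia_

# Heuristic solution on the full dataset: an upper bound on the optimal MSSC.
upper = mssc_value(X, k)

# Sum of subset values: a valid lower bound when each subset is solved exactly.
idx = rng.permutation(len(X))
subsets = np.array_split(idx, n_subsets)
lower = sum(mssc_value(X[s], k) for s in subsets)

print(f"upper bound {upper:.1f}, subset-based bound {lower:.1f}, "
      f"gap {(upper - lower) / upper:.1%}")
```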
Learning to Branch and to Cut: Enhancing Non-Linear Optimization with Machine Learning
To receive the link to attend the seminar, please register for the mailing list here.
Speaker: Prof Bissan Ghaddar
Associate Professor, Management Science & Sustainability
John M. Thompson Chair in Engineering Leadership & Innovation
Ivey Business School, Canada
Abstract:
The use of machine learning techniques to improve the performance of branch-and-bound optimization algorithms is a very active area in the context of mixed integer linear problems, but little has been done for non-linear optimization. To bridge this gap, we develop a learning framework for spatial branching and show its efficacy in the context of the Reformulation-Linearization Technique for polynomial optimization problems. The proposed learning is performed offline, based on instance-specific features and with no computational overhead when solving new instances. Novel graph-based features are introduced, which turn out to play an important role in the learning. Experiments on different benchmark instances from MINLPLib and QPLIB show that the learning-based branching selection and learning-based constraint generation significantly outperform the standard approaches.
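To make the offline-learning idea concrete, the sketch below trains a regression model on candidate-variable features gathered from solved instances and then uses it to rank branching candidates at solve time with no further training. The features, labels, and model choice are placeholders, not the speakers' graph-based features or their integration with the Reformulation-Linearization Technique.

```python
# Illustrative sketch: offline learning of a branching-variable scoring function,
# then online ranking of candidates with the trained model.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Offline phase: each row describes a candidate branching variable in a solved
# training instance; the label is an observed branching-quality measure
# (e.g. bound improvement), here synthetic.
X_train = rng.normal(size=(500, 6))
y_train = X_train @ np.array([0.8, -0.2, 0.5, 0.0, 0.1, -0.4]) \
          + rng.normal(scale=0.1, size=500)
scorer = GradientBoostingRegressor().fit(X_train, y_train)

def select_branching_variable(candidate_features):
    """Online phase: score candidates with the trained model, no extra training."""
    return int(np.argmax(scorer.predict(candidate_features)))

candidates = rng.normal(size=(8, 6))     # features of 8 branching candidates
print("branch on variable", select_branching_variable(candidates))
```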
Pragmatic OR: Solving large scale optimization problems in fast moving environments
To receive the link to attend the seminar, please register for the mailing list here.
Speaker: Prof Rubén Ruiz
Principal Applied Scientist at Amazon Web Services (AWS)
Professor of Statistics and Operations Research, Universitat Politècnica de València (on leave)
Abstract:
This talk examines the gap between academic Operations Research and real-world industrial applications, particularly in environments like Amazon and AWS where sheer scale and delivery speed are important factors to consider. While academic research often prioritizes complex algorithms and optimal solutions, large-scale industrial problems demand more pragmatic approaches. These real-world scenarios frequently involve multiple objectives, soft constraints, and rapidly evolving business requirements that require flexibility and quick adaptation.
We argue for the use of heuristic solvers and simplified modeling techniques that prioritize speed, adaptability, and ease of implementation over strict optimality or complex approaches. This angle is particularly valuable when dealing with estimated input data, where pursuing optimality may be less meaningful.
The presentation will showcase various examples, including classical routing and scheduling problems, as well as more complex scenarios like virtual machine placement in Amazon EC2. These cases illustrate how pragmatic methods can effectively address real-world challenges, offering robust and maintainable solutions that balance performance with operational efficiency. The goal is to demonstrate that in many industrial applications, a small optimality gap is an acceptable trade-off for significantly improved flexibility and reduced operational overhead.
From Data to Donations: Optimal Fundraising Campaigns for Non-Profit Organizations
To receive the link to attend the seminar, please register for the mailing list here.
Speaker: Prof Wolfram Wiesemann
Professor of Analytics & Operations
Imperial College Business School
Imperial College
UK
Abstract:
Non-profit organizations play an essential role in addressing global challenges, yet their financial sustainability often depends on raising funds through resource-intensive campaigns. We partner with a major international non-profit to develop and test data-driven approaches to enhance the efficiency of their fundraising efforts. The organization conducts multiple annually recurring campaigns, with thematic links between them. These connections enable us to predict a donor’s propensity to contribute to one campaign based on their behavior in others. This structure is common among non-profits but not readily utilized by conventional multi-armed bandit algorithms. To leverage these inter-campaign patterns, we design two algorithms that integrate concepts from the multi-armed bandit and clustering literature. We analyze the theoretical properties of both algorithms, and we empirically validate their effectiveness on both synthetic and real data from our partner organization.
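As background for the bandit-plus-clustering idea, here is a toy UCB-style loop that chooses which campaign to run and, as a stand-in for the inter-campaign structure, pools reward statistics across campaigns in the same cluster. The response probabilities, cluster assignment, and blending rule are invented; this is not either of the speakers' algorithms.

```python
# Toy sketch: UCB bandit over campaigns with naive within-cluster pooling.
import numpy as np

rng = np.random.default_rng(0)
true_rates = np.array([0.02, 0.03, 0.08, 0.07])       # donation propensities
clusters = np.array([0, 0, 1, 1])                      # campaigns 2 and 3 are linked

counts = np.zeros(4)
rewards = np.zeros(4)

for t in range(1, 5001):
    if np.any(counts == 0):                            # play each campaign once first
        arm = int(np.argmin(counts))
    else:
        means = rewards / counts
        # Pool statistics within each cluster (the shared-structure idea).
        pooled = np.array([rewards[clusters == clusters[a]].sum()
                           / counts[clusters == clusters[a]].sum() for a in range(4)])
        ucb = 0.5 * (means + pooled) + np.sqrt(2 * np.log(t) / counts)
        arm = int(np.argmax(ucb))
    reward = float(rng.random() < true_rates[arm])      # simulated donor response
    counts[arm] += 1
    rewards[arm] += reward

print("pulls per campaign:", counts.astype(int))
```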