FRICO 2024 - 27th Workshop on Future Research in Combinatorial Optimization
September 9th - 13th, 2024, Magdeburg, Germany

The 27th Workshop on Future Research in Combinatorial Optimization will take place from September 9th to September 13th, 2024 at the Otto von Guericke University Magdeburg (OVGU), Germany.

FRICO is directed at PhD students from combinatorial optimization and neighbouring research areas.

Every participant will be assigned a slot: either for a 20-minute presentation (excluding 5 minutes for setup and questions) or a 5-minute elevator pitch talk. Presenting ongoing research and open problems is especially encouraged.

FRICO 2024 does not have a conference fee!
This is made possible by our sponsors.

During the 'industry sessions' on Wednesday 11th and Thursday 12th, the sponsors will inform us about practical applications of combinatorial optimization in their companies.

We express special gratitude to Siemens AG, TNG Technology Consulting GmbH and Hannover Rück SE for being the main sponsors of FRICO 2024.

To get a general impression of FRICO, visit the webpage of FRICO 2023, which took place in Eindhoven, or have a look at the history of FRICO. If you have any questions, please contact any of the organizers or write an email to frico2024@ovgu.de.

Main organizers

External organizers

The team thanks Ines Brückner, Johannes Jesse, Susanne Hess and Volker Kaibel and other university staff for their support and assistance in organizing this workshop. We are grateful to Maryia Kukharenka for her help with FRICO 2024 design. Additional thanks to our working students Jonas Danker and Frederic Horn for their assistance.

FRICO 2024 has concluded!

We would like to thank all attendees for their participation.

The best talk award of FRICO 2024 goes to Jamico Schade from TU Munich for his talk "Firefighters vs Burning Trees: A Pursuit-Evasion Variant". The newly introduced award for the best elevator pitch goes to Helena Petri from RPTU Kaiserslautern-Landau. Congratulations!

We are happy to announce that FRICO 2025 will take place at RWTH Aachen.


Program

For a PDF please click here (version September 6, 2024).

Timetable

Please see the detailed session schedule with speakers and abstracts below.

| Time | Monday | Tuesday | Wednesday | Thursday | Friday |
|------|--------|---------|-----------|----------|--------|
| 8:30 | Registration | | | | |
| 9:00 | Opening | 3 talks | 4 talks | Industry talks | 4 talks |
| 9:30 | 2 talks | | | | |
| 10:30 | Coffee break | Coffee break | | Coffee break | |
| 11:00 | Elevator pitch session | Elevator pitch session | Coffee break | Industry talks | Coffee break, selection of best speaker, closing |
| 11:30 | 2 talks | 2 talks | 2 talks | | |
| 12:30 | Lunch | Lunch | Lunch | Lunch | Lunch |
| 14:00 | 3 talks | 3 talks | Industry talks | 3 talks | Departure |
| 15:30 | Coffee break | Coffee break | | Coffee break | |
| 16:00 | Open problems session | Working on open problems | Coffee break | Working on open problems | |
| 16:30 | | | Working on open problems | | |
| 17:00 | Working on open problems | | | | |
| 17:30 | | Social event [2] | | | |
| 18:00 onw. | Pizza + pub quiz [1][2] | | Open air BBQ [2] | Conference Dinner [2] | |

[1] Participants are invited to bring their own (board) games to play! Some board games and playing cards are available at the venue.
[2] Please check the table below for the event location.

Session schedule and speakers

Click on the title or speaker to view the abstract of the talk. Click on the day to expand or collapse.

8:30 Registration
9:00 Opening
9:30
A totally unimodular matrix is a matrix in which every subdeterminant equals +1, -1 or 0. If a polyhedron is defined by a totally unimodular matrix, then every vertex is integral; in this case, solving the LP relaxation yields the exact integer solution. But what if only part of the system describing the polyhedron involves a totally unimodular matrix? Can we then still make statements about the LP and SDP relaxations? In this talk we will address these questions.
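To make the definition concrete, here is a small sketch (our own illustration, not taken from the talk): a naive brute-force check that enumerates all square submatrices and tests whether every subdeterminant lies in {-1, 0, +1}. This is exponential in the matrix size and only sensible for toy examples.

```python
from itertools import combinations

def det(M):
    # Laplace expansion along the first row (fine for tiny matrices)
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def is_totally_unimodular(A):
    # Naive check: every square submatrix must have determinant in {-1, 0, 1}.
    m, n = len(A), len(A[0])
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                if det([[A[i][j] for j in cols] for i in rows]) not in (-1, 0, 1):
                    return False
    return True

# Incidence matrix of a bipartite graph: a classic totally unimodular example
print(is_totally_unimodular([[1, 1, 0], [0, 1, 1]]))   # True
print(is_totally_unimodular([[1, 1], [-1, 1]]))        # False (det = 2)
```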
In this talk, we explore implied integrality of mixed integer programs. A variable is implied integer if its integrality is implied by the constraints of the mixed-integer program and the integrality of a subset of the variables. We relate implied integers to several topics in combinatorial optimization. First, we highlight how implied integrality can arise from (totally) unimodular submatrices of the constraint matrix. Secondly, we relate implied integers to a lattice formed by the equality constraints of the mixed-integer program. Finally, we characterize implied integrality for set packing and covering problems, and use these characterizations to prove the computational complexity of detecting implied integers. To conclude, we examine the impact of implied integers in practice. Our results show that we can detect 12 times as many implied integers as state-of-the-art methods.
10:30 Coffee break
11:00
We present a new SAT-based algorithm for packing Steiner trees with additional constraints. Such problems arise as routing problems in chip design with complex design rules. Our approach uses a modification of the standard CDCL SAT solving algorithm and incorporates strategies similar to ripup-and-reroute. The resulting algorithm is significantly faster than ILP routers, produces good routings in practice, and is guaranteed to find a solution if one exists.
New system architectures for cyber-physical systems (CPS) lead to highly complex scheduling problems. Existing scheduling algorithms are either not optimal or do not solve these problems efficiently in practice. Due to the presence of multiple constraints, the scheduling of periodic non-preemptive tasks onto heterogeneous resources is NP-complete in many cases. Nevertheless, by using prior knowledge, we are able to set up a scheduling algorithm which leads to a highly reduced runtime in practice. As a solution, we propose our informed search approach, which includes our criticality-driven exploration method and our search space reduction method. Our exploration method selects tasks based on their assumed criticality, taking into account multiple constraints. It is based on our bounds technique, which continuously incorporates precedence constraints in the search process and limits the search space. Furthermore, we verify the temporal correctness by our two-step schedulability analysis. Thus, our solution enables efficient exploration of the search space while providing optimality with regard to feasibility.

We investigate the behavior of an integer linear program when the objective function changes. The set of all objective vectors under which a given integer solution stays optimal forms the normal cone at that solution. This cone is described by all active constraints of the integer hull. Since the integer hull is in general not known and expensive to compute, we are looking for an efficient way to calculate such cones, or rather their polars, the radial cones. The size of such a cone gives us insight into how sensitive an integer solution is to changes in the objective. We will present an oracle-based approach which is guaranteed to calculate all facet-defining inequalities of the radial cone. The general idea of our algorithm is to utilize an optimization oracle to calculate (preferably neighboring) solutions and incrementally extend a current subcone of the radial cone with these solutions. The algorithm is based on previous work by Matthias Walter, the 'Investigating Polyhedra by Oracle' framework, as well as the well-established beneath-beyond method for incrementally calculating convex hulls. We will give a brief explanation of the algorithm, present some numerical results, and raise questions about the use of oracle-based algorithms for the calculation of further parameter sets with identical optima.
Network creation games are well-established for investigating the decentralized formation of communication networks, like the Internet or social networks. In these games selfish agents that correspond to network nodes strategically create costly edges to maximize their centrality in the formed network. We depart from this by focusing on the simpler objective of maximizing the 2-neighborhood. This is natural for social networks, as an agent's connection benefit is typically provided by her neighbors and their neighbors but not by strangers further away. For this natural model, we investigate the existence, the structure and the quality both of Nash equilibria and greedy equilibria. [...]
11:30
We consider an optimization problem over a discrete time horizon in which, at each point in time, an element of a particular combinatorial set X has to be chosen, and over time, the 'variation', i.e. the change in the selected elements, must not exceed a certain constant. We propose three different ways of measuring the variation: time-point-wise, component-wise, or in total. First, we show that even for a rather simple, selection-type structure, the problem versions may differ significantly in their complexity. As a natural next question, we focus on an extension of our problem to a setting in which the set X is accessible only via a membership or linear optimization oracle. Thus, we shift from a complexity-theoretical perspective towards an information-theoretical one and show that even in simple cases, a polynomial number of oracle calls does not suffice to solve our problem.
In coding theory, a binary code is a subset of the set $\{0,1\}^n$ for a natural number $n$. The elements of a code are called codewords, and the distance of two codewords is the number of indices where their entries differ. If every pair of distinct codewords of a code has distance at least $d$, the code can detect up to $d-1$ and correct up to $\lfloor(d-1)/2\rfloor$ errors that may occur when a codeword is sent through a noisy channel. An interesting problem in coding theory is to find the maximum number $A(n,d)$ of codewords a code of fixed length $n$ can have while still being able to correct up to $\lfloor(d-1)/2\rfloor$ errors. Because $A(n,d)$ is generally hard to calculate, researchers have been studying optimization problems that provide upper bounds for $A(n,d)$, for example the LP of Delsarte (1973) or the SDP of Schrijver (2005). Recently there has been a lot of interest in codes of a specific class, the so-called doubly constant-weight codes. In this talk I am going to present improved linear programming bounds for the maximum size of error-correcting codes of this class.
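As a small illustration of these definitions (our own toy code, not the talk's method): the Hamming distance and a brute-force computation of $A(n,d)$, which is feasible only for very small $n$.

```python
from itertools import product

def hamming(u, v):
    # Number of coordinates in which the two codewords differ
    return sum(a != b for a, b in zip(u, v))

def A(n, d):
    # Brute-force maximum code size via depth-first search with pruning;
    # exponential, so only usable for tiny parameters
    words = list(product((0, 1), repeat=n))
    best = 0
    def extend(code, start):
        nonlocal best
        best = max(best, len(code))
        for i in range(start, len(words)):
            w = words[i]
            if all(hamming(w, c) >= d for c in code):
                extend(code + [w], i + 1)
    extend([], 0)
    return best

print(A(3, 2))  # 4: e.g. the even-weight code {000, 011, 101, 110}
print(A(4, 3))  # 2: no three words of length 4 are pairwise at distance >= 3
```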
12:30 Lunch
14:00
Consider a parametric optimization problem with multiple objective functions depending on some parameters. Several algorithms have been developed for solving parametric optimization problems in the single-objective case; however, there is not much research on parametric optimization problems with multiple objectives. This research gap has motivated our work in this field. Our main focus is on linear bi-objective problems with a single parameter. Since the weighted sum scalarization is often used to solve multi-objective problems, we use the weight set decomposition to explain the behaviour of the linear bi-objective problem as the parameter value varies. We investigate two special cases with the same parameter: first, with the parameter in one of the objectives, and second, in both objectives. We prove that there is a one-to-one correspondence between the solutions of the parametric problem and the solutions of a tri-objective problem via the weight set decomposition. We derive structural insights into the solutions of the parametric bi-objective problem with respect to the extreme weights of the weight set of the tri-objective problem.
In linear parametric programming problems (PPPs), several objective functions are merged into a single objective function by forming a linear combination. Then, an optimal solution should be found for every possible combination of coefficients (parameters) of the linear combination within a given set. Parametric programming is related to numerous other areas such as multi-objective optimization and sensitivity analysis. It appears in a variety of application areas ranging from waste management and fleet planning to model-predictive control and process synthesis under uncertainty. Solving a PPP requires a set of solutions that contains an optimal solution for each possible combination of parameter values. It is well known that, for many important discrete optimization problems, the number of solutions needed is exponential in the input size of the problem. Consequently, this motivates the development of approximation algorithms. Considering the limited literature available, we use a central tool to develop approximation algorithms for PPPs. We describe the connection between convex approximation in multi-objective optimization and approximation of PPPs with positive parameters. Through this connection, existing results from multi-objective optimization can be transferred to the parametric setting. The potential of this approach becomes apparent when cardinality bounds of specific linear 1-PPPs are taken into consideration.
We combine fundamental linear programs with interdiction in a parametric context. Standard LPs only involve one party; in our setting, those are the Beagle Boys (German: die Panzerknacker), who wish to solve a linear program in order to minimize the security value of Uncle Scrooge's (German: Onkel Dagobert) famous Money Bin. We extend this setting by an opposing party, namely Scrooge himself, whose aim is to worsen the Beagle Boys' objective to keep his money safe. To do so, Scrooge invests some budget in order to reduce the Beagle Boys' feasible space. Given a budget, we are now confronted with a linear programming interdiction problem (LPIP). Furthermore, we still lack the budget, since we don't know how much Uncle Scrooge is willing to invest to interdict the Beagle Boys. That's why our goal is to solve the LPIP for each budget. We call this the parametric linear programming interdiction problem, where the parameter is the budget to be invested. The function mapping the budget to the objective value of the associated non-parametric problem is called the optimal value function. For Scrooge, in his role as decision maker, the optimal value function is particularly interesting: it allows for a quick and complete overview and thus provides an intuitive understanding of how efficient the investment of any budget is. We understand our work as a first approach towards a more fundamental understanding of interdiction problems of various kinds. To lay a robust groundwork for our theory, we analyse the problem's structure and feasible space. Furthermore, we investigate the non-parametric objective function as well as the structure of the optimal value function. Finally, we discuss the practical side of how the parametric linear programming interdiction problem can be solved.
15:30 Coffee break
16:00 Open problems session
17:00 Working on open problems
18:00
Takes place in the conference room (Building 03, Room 315).
9:00
Have you ever shared a pizza with a so-called friend and felt like you got less pizza? In this talk, I will introduce you to a class of pizza-sharing games and explore with you how unfair these games are and how we can be smart about our strategy to maximize the pizza we get.
In the Tariff Zone Assignment problem, an operator is tasked with partitioning a transport network (i.e., a graph) into tariff zones in order to maximize revenue in a linear pricing model. In this model, customers pay a price proportional to the number of zones traveled through, up to some maximum accepted price. If this price gets exceeded, customers drop out of the model and thus generate no revenue at all. In this talk, first we will look at the complexity of this problem and show APX-hardness on star graphs and strong NP-hardness on paths. These complexity results motivate the introduction of a variant of the problem, which we will show to be polynomial time solvable when restricted to paths. Finally, we will discuss solution approaches for the original problem when restricted to paths. This is joint, ongoing work with Lennart Kauther, Sven Müller and Britta Peis.
We examine the Hazmat Transport Network Design Problem (HNDP), a well-researched bilevel problem in network design featuring linking interdiction constraints. In this problem, the first level of decision-making is governed by city authorities, and the second level by truck drivers transporting hazardous goods. City authorities can prohibit certain streets for hazmat transport, and truck drivers react by selecting the shortest path that adheres to these restrictions. The objective for the authorities is to forbid roads to guide drivers towards less risky routes. We analyze the variant where the objective function at the second level is subject to uncertainty, focusing on interval-based uncertainty sets. We present examples demonstrating that, within this setting, the classical assumption that the worst-case solution is located at the endpoints of the uncertainty interval does not hold. We also prove that the subproblem of finding the robust solution at the second level, given a fixed network, is NP-hard if the uncertainty can be realized at any point within the interval. Additionally, we present some initial MIP-based approaches for solving this problem class.
10:30 Coffee break
11:00
Homogeneous metric spaces can be understood via their spectra (the set of non-zero lengths). The well-known four-values condition, which determines whether some universal space is homogeneous, is preserved under metric spectrum equivalence (any bijection that respects triangle inequalities). Conant conjectured that one can choose well-behaved representatives of a finite metric spectrum up to equivalence. While it is well known that every spectrum $S \subseteq \mathbb{R}^+$ is equivalent to a spectrum $T \subseteq \mathbb{N}$, it has remained open whether $T$ can also maintain a desirable combinatorial form. In particular, Conant asked whether $T = \{t_1 < \dots < t_n\}$ can be taken such that $2^{i-1} \le t_i \le 2^{n-1}$, and proved a positive answer for $n \le 6$. In this paper, we come to two partial answers. The first is that the largest element $t_n$ can be chosen such that $t_n \le 2^n$. Approximating a full solution, we also obtain $T$ with the combinatorial form $2^i \le t_i \le 2^{n+1}$. Using these results, we computationally verify the conjecture for $n = 7$.

We will talk about the problem of identifying and enumerating finite models of the first group of Hilbert's axioms of geometry.
Finding the fastest path between two vertices in a graph with non-negative edge weights is a fundamental problem in combinatorial optimization. Often, many fastest paths need to be computed in a road network that does not change (e.g., to find good approximate solutions to vehicle routing problems). Therefore, it can be beneficial to invest some time in preprocessing the graph to speed up subsequent path computations by several orders of magnitude. One of these preprocessing techniques, which uses the special structure of road networks, is contraction hierarchies, introduced by Geisberger, Sanders, Schultes, and Delling. In contraction hierarchies, all vertices are assigned an importance, and shortcuts are inserted. This results in a graph where optimal travel times can be achieved using paths that are first strictly increasing and then strictly decreasing in importance. As the fastest paths in street networks already roughly exhibit a hierarchical structure (with the roads decreasing in size close to the start and the target of a fastest path, and mainly highways being used in the middle of a fastest path), only a few shortcuts need to be inserted if the vertex importances are chosen well. This intuition can be formalized using the notion of highway dimension introduced by Abraham, Delling, Fiat, Goldberg, and Werneck. This concept can be generalized to digraphs with time-dependent travel times, i.e., edge weights that change depending on the time that an edge is used. Finally, different ways to compute fastest paths in a contraction hierarchy will be discussed. In particular, new variants of the Stall-on-Demand technique, which allows further reduction in the runtime of fastest path computations in a contraction hierarchy, will be introduced, and theoretical guarantees for the performance of some of these variants are shown.
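The up-down query idea can be sketched in a few lines (a heavily simplified illustration under our own assumptions, not the talk's implementation: undirected graphs, a given importance order, and naive contraction that always inserts shortcuts instead of performing a witness search):

```python
import heapq
from collections import defaultdict

def build_ch(n, edges, order):
    # order[v] = importance of vertex v; contract from least to most important.
    # Naive contraction: when v is contracted, insert a shortcut between every
    # pair of its more important neighbours, which preserves all shortest
    # distances at the cost of extra edges.
    adj = defaultdict(dict)
    for u, v, w in edges:
        adj[u][v] = min(adj[u].get(v, float("inf")), w)
        adj[v][u] = min(adj[v].get(u, float("inf")), w)
    for v in sorted(range(n), key=lambda x: order[x]):
        nbrs = [u for u in adj[v] if order[u] > order[v]]
        for i in range(len(nbrs)):
            for j in range(i + 1, len(nbrs)):
                a, b = nbrs[i], nbrs[j]
                w = adj[v][a] + adj[v][b]
                if w < adj[a].get(b, float("inf")):
                    adj[a][b] = adj[b][a] = w
    # The "upward" graph keeps only edges towards more important vertices.
    return {v: {u: w for u, w in adj[v].items() if order[u] > order[v]}
            for v in range(n)}

def upward_dists(up, s):
    # Plain Dijkstra restricted to the upward graph
    dist, pq = {s: 0}, [(0, s)]
    while pq:
        d, v = heapq.heappop(pq)
        if d > dist.get(v, float("inf")):
            continue
        for u, w in up[v].items():
            if d + w < dist.get(u, float("inf")):
                dist[u] = d + w
                heapq.heappush(pq, (d + w, u))
    return dist

def ch_query(up, s, t):
    # The optimal path is first increasing, then decreasing in importance,
    # so the two upward searches meet at its most important vertex.
    ds, dt = upward_dists(up, s), upward_dists(up, t)
    return min((ds[v] + dt[v] for v in ds if v in dt), default=float("inf"))

# Cycle 0-1-2-3 with one heavy edge; vertex 0 is the most important.
edges = [(0, 1, 1), (1, 2, 1), (2, 3, 1), (3, 0, 5)]
up = build_ch(4, edges, order=[3, 0, 1, 2])
print(ch_query(up, 1, 3))  # 2 (path 1-2-3)
```

A real implementation would add a witness search during contraction and techniques such as Stall-on-Demand during queries; this sketch only shows the up-down search structure.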
In the classic computer game of Snake, a snake must eat apples while avoiding collisions with its own body. Each time the snake eats an apple, it grows in length, making it increasingly difficult to move around. Inspired by this game, we defined a variant of Snake played on a simple graph. In this version, the snake forms a simple path and moves around to eat apples that appear on vertices. The snake grows in length with each apple it consumes. The aim of the snake is to keep eating apples until it is large enough to cover all vertices. A graph on which the snake can always win, regardless of apple placement, is called a "snake-win" graph. For some graph classes, we characterize the snake-win graphs. We also identify some general properties of all snake-win graphs. Lastly, we show that determining whether a graph is snake-win or not is NP-hard.
The Roman empire needs to be defended! To this end, we can station up to 2 legions per region. But each region without a legion should have a neighbour with 2 legions, so that it can send one over if necessary. Unfortunately, we don't have that many legions left and want to use as few as possible. Such an assignment of numbers in {0,1,2} to the vertices of a graph is called a Roman dominating function. The minimum sum of legions needed for a graph is called the Roman domination number. We will have a look at the relation between the Roman domination number of a graph and that of its spanning trees. In particular, we want to find spanning trees with either minimum or maximum Roman domination number.
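The definition is easy to check exhaustively on tiny graphs. The following sketch (our own illustration, not from the talk) enumerates all {0,1,2}-labellings:

```python
from itertools import product

def roman_domination_number(n, edges):
    # Exhaustive search over all {0,1,2}-labellings: a labelling is feasible
    # if every vertex labelled 0 has a neighbour labelled 2.
    # Runs in O(3^n), so only suitable for very small graphs.
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    best = 2 * n  # labelling everything with 2 is always feasible
    for f in product((0, 1, 2), repeat=n):
        if all(f[v] > 0 or any(f[u] == 2 for u in adj[v]) for v in range(n)):
            best = min(best, sum(f))
    return best

# Path on 4 vertices (0-1-2-3): put 2 legions on vertex 1 and 1 on vertex 3
print(roman_domination_number(4, [(0, 1), (1, 2), (2, 3)]))  # 3
# Star with centre 0: 2 legions on the centre defend everything
print(roman_domination_number(4, [(0, 1), (0, 2), (0, 3)]))  # 2
```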
11:30
Roman domination is a reasonably well-known variant of domination on finite graphs. We generalise the concept to infinite graphs embedded in a probability space, where we try to find a dominating set of minimum measure (rather than minimum cardinality).

We investigate the following problem: given a connected graph G whose nodes are on fire. In each turn, a group of k firefighters can extinguish k burning nodes. Afterwards, the fire spreads from all burning nodes to their neighbours, before the firefighters again extinguish some nodes. How many firefighters are needed to fully extinguish G? We call this number the firefighter number of G. We give some general results on this topic, as well as almost tight upper and lower bounds on the firefighter numbers of complete binary trees. This is joint work with Julius Althoetmar and Torben Schürenberg.
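As an illustration (our own toy simulation, not the authors' code), the process can be simulated for a fixed k and a fixed extinguishing strategy:

```python
def extinguish_game(adj, k, strategy, max_rounds=100):
    # Simulate the process described above: all nodes start burning; each
    # round, k burning nodes (chosen by `strategy`) are extinguished, then
    # the fire spreads to all neighbours of still-burning nodes.
    # Returns True if the strategy clears the graph within max_rounds.
    burning = set(range(len(adj)))
    for _ in range(max_rounds):
        if not burning:
            return True
        burning -= set(strategy(sorted(burning))[:k])
        burning |= {u for v in burning for u in adj[v]}
    return not burning

# Path 0-1-2-3; strategy: always extinguish the k leftmost burning nodes.
path = [[1], [0, 2], [1, 3], [2]]
leftmost = lambda burning: burning
print(extinguish_game(path, 2, leftmost))  # True: two firefighters suffice
print(extinguish_game(path, 1, leftmost))  # False: the fire reignites node 0
```

Note that a negative simulation result only rules out the given strategy; the firefighter number is a minimum over all strategies.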
12:30 Lunch
14:00
Loosely speaking, the Heesch number of a tile is the largest number of layers with which we can surround the tile using congruent copies of itself; it is infinite if the tile tiles the plane, and it represents a measure of how far towards a tiling we can advance. In this talk we will first give a historical overview of the progress on the original Heesch's problem, culminating in the recent discovery of a tile whose Heesch number is 6. We will then talk about sets of tiles, where we will introduce two definitions of the Heesch number of a set of tiles that arise in the literature, and show our recent construction that settles one of those definitions. For the other definition, we will briefly discuss that, due to the undecidability of the Domino problem, the function H(n), denoting the largest Heesch number over all sets of n tiles, cannot be bounded by any computable function. As a consequence, by a result of Ollinger on the existence of a set of 5 tiles for which it is undecidable whether it tiles the plane, we deduce that for each k >= 5 and each integer n, there exists a set of k tiles whose Heesch number is n. We will also show a solution to the variation of Heesch's problem for disconnected tiles, and a solution to a relaxed version of the problem in which we only require tiles to be similar to the original tile, rather than congruent. In each of those cases we construct tiles with arbitrarily large Heesch numbers. We will finish off with a few more open questions about convex tiles and unbounded tiles.

Complexes of oriented matroids (COMs) have been recently introduced as a common generalization of oriented matroids, affine oriented matroids, and lopsided sets. They can be simply described by a finite groundset and a set of sign vectors together with two axioms. After a short introduction to this combinatorial structure, we will show that the determinant of a bilinear form introduced by Varchenko, called Varchenko Matrix, has an elegant factorization formula for COMs.

The number of edge crossings greatly influences how well we can understand what is going on in a particular drawing of a graph. Applying further restrictions, e.g. limiting the number of crossings each edge is involved in, can further improve the readability of the resulting drawing. But it might also increase the total number of crossings needed, thereby decreasing readability. For a given restriction, the crossing ratio is the supremum, over all graphs, of the ratio between the minimum number of crossings required with and without the restriction. We propose a very general proof framework that allows us to obtain asymptotically tight bounds on crossing ratios; the restriction-specific part of a proof then typically boils down to a couple of lines. We show the strength of our approach by giving improved or first bounds for several restrictions.

15:30 Coffee break
16:00 Working on open problems
18:00
9:00
Cutting planes are frequently used for solving large integer programs. A common strategy is to derive cutting planes from building blocks or substructures of the integer program. In this presentation, we focus on knapsack constraints that arise from single-row relaxations. Among the most popular classes derived from knapsack constraints are lifted minimal cover inequalities. Yet the separation problem for these inequalities is NP-hard, and one usually separates them heuristically, therefore not fully exploiting their potential. In practice, however, it turns out that many knapsack constraints have only few different coefficients. This motivates the concept of sparse knapsacks, where the number of different coefficients is a small constant, independent of the number of variables present. For such knapsacks, we observe that there are only polynomially many different classes of structurally equivalent minimal covers. This opens the door to specialized techniques for using lifted minimal cover inequalities. In this presentation we will discuss two such techniques, both based on specialized sorting methods. On the one hand, we present new separation routines that separate equivalence classes of inequalities rather than individual inequalities. On the other hand, we derive compact extended formulations that express all minimal cover inequalities by means of a polynomial number of constraints. These extended formulations are based on tailored sorting networks that express our separation algorithm by linear inequalities. We conclude the presentation with a numerical investigation of the different techniques on popular benchmarking instances.
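As a small illustration of the basic objects involved (our own toy code; the lifting step is not shown): a cover of a knapsack constraint sum_i a_i x_i <= b is a set C with sum_{i in C} a_i > b, it is minimal if every proper subset fits, and each minimal cover C yields the valid inequality sum_{i in C} x_i <= |C| - 1.

```python
from itertools import combinations

def minimal_covers(weights, capacity):
    # Enumerate all minimal covers by brute force (exponential, toy sizes only):
    # C is a cover if its total weight exceeds the capacity, and minimal if
    # dropping any single element makes it fit again.
    n = len(weights)
    covers = []
    for k in range(1, n + 1):
        for C in combinations(range(n), k):
            total = sum(weights[i] for i in C)
            if total > capacity and all(total - weights[j] <= capacity for j in C):
                covers.append(C)
    return covers

# Knapsack 4*x0 + 4*x1 + 5*x2 + 6*x3 <= 10: e.g. the cover {2, 3} gives
# the inequality x2 + x3 <= 1.
print(minimal_covers([4, 4, 5, 6], 10))  # [(2, 3), (0, 1, 2), (0, 1, 3)]
```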
A common approach to handling symmetry in MIPs in order to improve solution performance is through symmetry handling inequalities. We consider a particular type of these, so-called Schreier-Sims (SST) cuts, and apply them to the maximum weight stable set problem. A central question that arises is how the addition of SST cuts affects the complexity of the stable set problem. We explore the potential interaction between the polyhedral structure of the stable set problem and SST cuts. We note that the stable set problem with additional SST cuts agrees with the stable set problem on mixed graphs. Specifically, for undirected graphs with SST cuts, we demonstrate how to transform the corresponding mixed graphs into closely related undirected graphs. We explain the connection between these graphs and the polyhedral description of the stable set problem with SST cuts. In this context, we examine how SST cuts should be added to a trivially perfect graph such that the resulting mixed graph remains perfect, and discuss different conditions.
The generalized min-cost flow problem is a variation of the classical min-cost flow problem with losses on the arcs. In contrast to the classical problem, the flow amount is multiplied by the arc loss when traversing an arc. The formulation is motivated by the modelling of energy systems, where losses arise naturally. When applying a column generation approach to a path-based formulation, the pricing problem has the structure of a shortest path problem with losses. Due to the multiplicative behavior of these losses, a classical shortest path algorithm cannot be used to generate paths; in particular, the costs on arcs depend on previously selected arcs and their respective losses. The pricing problem can be solved via a dynamic programming scheme. Based on the corresponding Bellman equation, we formulate conditions on the input data under which the dynamic programming algorithm correctly computes shortest paths. Additionally, we compare the performance of the column generation approach with other solution methods in practice.
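A toy illustration of the multiplicative losses (our own modelling assumptions, not the talk's formulation: each arc is a (cost, loss factor) pair, and cost is charged per unit of flow entering the arc). It shows why the cheapest prefix to a node need not extend to the cheapest path, so classical shortest-path label dominance fails:

```python
def path_cost(arcs, inflow=1.0):
    # Inject `inflow` units at the source; each arc (cost, gamma) charges
    # `cost` per unit of flow entering it, and only a gamma-fraction of the
    # flow survives the arc. Returns (total cost, remaining flow).
    flow, total = inflow, 0.0
    for cost, gamma in arcs:
        total += cost * flow
        flow *= gamma
    return total, flow

# Two prefixes reaching the same node: P1 is cheaper so far, but P2 has
# lost more flow, which makes every extension of P2 cheaper downstream.
p1_cost, p1_flow = path_cost([(1.0, 1.0)])  # cost 1.0, flow 1.0 remains
p2_cost, p2_flow = path_cost([(2.0, 0.1)])  # cost 2.0, flow 0.1 remains
print(p1_cost + 20.0 * p1_flow)  # 21.0: the cheaper prefix loses overall
print(p2_cost + 20.0 * p2_flow)  # 4.0
```

This is exactly why a label-setting method here has to track both accumulated cost and remaining flow rather than cost alone.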
Despite over 75 years of study, it remains open whether there is a version of the simplex method that is guaranteed to solve linear programs in polynomial time. The key difficulty is that there is a great diversity of simplex methods, each determined by a choice of pivot rule. In this talk, I will discuss two broad, well-studied classes of pivot rules: shadow rules and steepest edge rules. I will then argue that any pivot rule from these classes takes exponentially many steps in the worst case.

11:00 Coffee break
11:30
Partial inverse combinatorial optimization problems are bilevel optimization problems in which the leader can modify weights in the follower's objective such that the follower includes a given set of elements in the solution of their combinatorial problem. First, we focus on a class of partial inverse combinatorial optimization problems in which weights can only be increased and where the combinatorial problem is solvable in polynomial time. We present a new branch-and-bound scheme to solve these problems. Branching on follower variables, the scheme relies on two different subproblems that are essentially complete inverse problems with the same combinatorial problem on similar graphs, which are known to be solvable in polynomial time. Computational experiments on the partial inverse shortest path problem with only weight increases suggest that, for dense graphs, our branch-and-bound scheme outperforms an MPCC reformulation as well as a decomposition scheme. In the second part of the talk, we consider work in progress on partial inverse combinatorial optimization problems with an NP-complete follower problem, such as the partial inverse travelling salesperson problem.

We consider a two-stage optimization problem under uncertainty with sparsity constraints, motivated by a common challenge in packaging logistics: minimizing the volume of transported air by optimizing the size and number of available packaging boxes, given the stochastic distribution of the demand for order items. In the first stage, we select the optimally sized packaging boxes. This stage is modeled as a binary selection problem, where the objective function is the expected value of the second stage. The uncertainty lies in the item demands that must be satisfied in the second stage. In the second stage, the actual demanded items are assigned to the selected boxes. This stage is formulated as an assignment problem, with the goal of minimizing the unused volume in the selected boxes. The objective function in the second stage evaluates the quality of the item-to-box assignment. Additionally, we incorporate sparsity constraints in the first stage, which limit the number of different box types used in the assignment. To solve this problem, we propose a cutting plane approach using Benders decomposition. This method effectively handles the demand and assignment constraints and provides an optimal solution to the optimization problem.
12:30 Lunch
14:00 Industry talk (Hannover Re)
15:00 Industry talk (atesio)
15:30 Industry talk (Gurobi)
16:00 Coffee break
16:30 Working on open problems
18:00 Dinner
Dinner will be served at Alex Magdeburg. You can join the group that will start walking there from Building 02 at 17:30.
9:00 Industry talk (Siemens)
10:00 Industry talk (IVU)
10:30 Coffee break
11:00 Industry talk (TNG)
12:00 Industry talk (S2data)
12:30 Lunch
14:00
In this presentation, we discuss the assortment optimization problem: a firm has to decide which products to offer in order to maximize its revenue. We assume that customers are heterogeneous and use the nested logit choice model, which allows alternatives to be correlated. Because the problem is nonlinear and nonconvex, we develop an iterative approximation scheme that produces a solution with a constant-factor approximation guarantee.
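To make the choice model concrete, here is a minimal sketch of how the expected revenue of an offered assortment is computed under the standard nested logit model. The function name, interface, and data layout are hypothetical, not taken from the talk.

```python
def nested_logit_revenue(nests, assortment, lambdas, v0=1.0):
    """Expected revenue of an offered assortment under the nested logit
    model. nests[n] is a list of (preference_weight, revenue) pairs,
    assortment[n] the offered product indices in nest n, lambdas[n] the
    dissimilarity parameter of nest n, and v0 the no-purchase weight.
    Illustrative sketch only."""
    nest_values, nest_revenues = [], []
    for products, offered, lam in zip(nests, assortment, lambdas):
        w = [products[j][0] for j in offered]
        r = [products[j][1] for j in offered]
        total = sum(w)
        if total == 0:                      # an empty nest attracts no one
            nest_values.append(0.0)
            nest_revenues.append(0.0)
            continue
        nest_values.append(total ** lam)    # attractiveness of the nest
        # expected revenue conditional on purchasing within this nest
        nest_revenues.append(sum(wi * ri for wi, ri in zip(w, r)) / total)
    denom = v0 + sum(nest_values)
    return sum(V * R for V, R in zip(nest_values, nest_revenues)) / denom
```

With a single nest containing one product of weight 1 and revenue 10, dissimilarity 1, and no-purchase weight 1, the customer buys with probability 1/2, giving expected revenue 5. The nonconvexity the talk addresses arises because the assortment enters both the numerator and the denominator of this ratio.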
Evolving logistics demands necessitate adaptable solutions. This research explores the Pickup and Delivery Problem (PDP), an extension of the Vehicle Routing Problem (VRP) that tackles the complexities of modern logistics, where both pickups and deliveries are crucial. PDPs are critical for industries like e-commerce, where efficient management directly impacts cost and customer satisfaction. This study addresses the challenges of PDPs, including alternative delivery locations and time window constraints. We propose a mixed-integer linear programming model that goes beyond traditional approaches by simultaneously optimizing delivery lateness and overall transportation costs. Through this investigation, we aim to develop innovative PDP strategies that enhance logistics effectiveness. Our findings hold the potential to contribute to advancements in supply chain management and transportation optimization, leading to a more efficient and cost-effective logistics landscape.
We explore combinatorial extensions to the Periodic Event Scheduling Problem (PESP) with a focus on railway timetabling, integrating the assignment of activities to infrastructure elements, like platforms. Our methods are centered on mixed-integer programming formulations and bipartite matching sub-problems in the case of equivalent platforms. It turns out to be highly beneficial to allow assignment patterns with higher period than the standard timetable itself, but bounding this "platforming period" poses a challenge. Furthermore, while sufficient, the generalization framework does not mirror all insights that are used in the case of simpler infrastructural settings. We present our findings and these open questions, aiming to foster discussion and collaboration in combinatorial optimization and scheduling.

Currently submitted to ATMOS 2024 (link will be available if accepted)
15:30 Coffee break
16:00 Working on open problems
17:30
The social event of FRICO 2024 is a boat trip on the Elbe operated by Weisse Flotte GmbH. You can join the group that will start walking to the boat station from Building 02 at 17:00. Otherwise, please be at the designated location by 17:50 at the latest.
9:00
Nurse rostering is a broad field that involves many problems differing in complexity and structure. A simple formulation is used in the Days-On-Days-Off Scheduling Problem, which includes only a few constraints on the lengths of consecutive working and off-day periods. This problem was introduced in 2013 and claimed to be NP-hard. We prove that the decision version of this problem is indeed NP-hard and develop algorithms for the polynomially solvable subproblems.
In a packing problem, we are given a collection I of n items, each one with a given profit. Our goal is to compute a maximum profit subset of these items that satisfies a given set of packing constraints which depend on the problem at hand. Several approximation algorithms for well-studied NP-hard packing problems are based on dynamic programs (DPs) that solve the given problem directly or an auxiliary (packing) problem that the given problem is reduced to. Examples for this include algorithms for Knapsack, Geometric Knapsack, Independent Set of Rectangles, and Maximum Throughput Scheduling. Often, it is desirable to impose additional global constraints to packing problems, e.g., due to fairness considerations, diversification, quotas, or limited resources. More specifically, suppose that we are given M additional cardinality constraints, where each constraint h imposes that at most a given number b_h of items can be selected from a given subset I_h of I. We address the question how to efficiently handle such constraints. There is a naive way to modify a DP for a packing problem so that it incorporates these additional constraints; however, this increases the running time of the DP by a factor Θ(n²) per cardinality constraint. Hence, this works only for M = O(1). Our main result is a meta-algorithm that, given an exact DP for a packing problem, derives a (randomized) PTAS, i.e., a (1+ε)-approximation for any constant ε > 0, for the same packing problem with additionally M = O(log n/ log log n) cardinality constraints. Our running time is of the form f(M, ε) · n^O(1), i.e., we obtain an efficient parameterized approximation scheme for the parameter M. As an application, we transform several approximation algorithms for packing problems from the literature to approximation algorithms for the same respective problem augmented with O(log n/ log log n) cardinality constraints, while losing only a factor of 1+ε in the approximation ratio. 
We complement our meta-algorithm with an almost matching hardness result showing that, under gap-ETH, a polynomial time (1+ε)-approximation is not possible in general if M = ω(log n). Closing this small gap is left as an intriguing open problem.
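The naive DP extension mentioned above can be illustrated on the 0/1 knapsack problem: each additional cardinality constraint adds one dimension to the DP state, which is why the naive approach only scales to constantly many constraints. A hedged sketch with a single cardinality constraint (names and interface are my own, not from the talk):

```python
def knapsack_with_cardinality(profits, weights, capacity, subset, bound):
    """0/1 knapsack DP extended with one cardinality constraint:
    at most `bound` of the items whose indices lie in `subset` may be
    selected. The extra state dimension illustrates the naive DP
    modification discussed in the abstract. Sketch only."""
    n = len(profits)
    NEG = float('-inf')
    # dp[c][k] = best profit using capacity c with k items from `subset`
    dp = [[NEG] * (bound + 1) for _ in range(capacity + 1)]
    dp[0][0] = 0
    for i in range(n):
        new = [row[:] for row in dp]        # item i not taken by default
        for c in range(capacity - weights[i] + 1):
            for k in range(bound + 1):
                if dp[c][k] == NEG:
                    continue
                k2 = k + (1 if i in subset else 0)
                if k2 > bound:              # cardinality constraint violated
                    continue
                c2 = c + weights[i]
                if dp[c][k] + profits[i] > new[c2][k2]:
                    new[c2][k2] = dp[c][k] + profits[i]
        dp = new
    return max(max(row) for row in dp)
```

For example, with profits [6, 10, 12], weights [1, 2, 3], capacity 5, and at most one of items {1, 2}, the optimum drops from 22 to 18: the constraint forbids taking both heavy items together.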
In the Strip Packing Problem, a rectangular strip with fixed width and infinite height is given together with a set of rectangles. The goal is to orthogonally pack these rectangles onto the strip such that the height used is minimized. This problem generalizes both Bin Packing and Makespan Minimization, making it significant in various applications such as resource allocation and manufacturing. The bottom-left algorithm is a simple heuristic for the Strip Packing Problem that places the rectangles in the given order at the lowest free position in the strip, selecting the leftmost position in case of ties. Despite its simplicity, the exact approximation ratio of the bottom-left algorithm remains unknown. Brown (1979) demonstrated that even when rectangles are considered in the best possible ordering, the bottom-left algorithm might be a factor of 5/4 removed from the optimum. This talk will discuss the recent paper of Hougardy and Zondervan (2024), which improves this lower bound, unimproved for more than 40 years, to 4/3; the new bound holds even if all rectangles are squares. The best-known upper bound on the approximation ratio of the bottom-left algorithm, when all rectangles are squares, is 2 (Baker et al., 1979), which is achieved by ordering the squares by decreasing size. This talk introduces the last-row-full (LRF) ordering, which orders almost all squares by increasing size, except for a few small squares placed at the end. The LRF-ordering also results in a tight 2-approximation, but often yields a provably better-than-2 approximation, hence paving the way for advancements in determining the exact approximation ratio of the bottom-left algorithm.
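The bottom-left rule described above can be sketched directly. This grid-based toy implementation for integer square sizes is illustrative only and makes no claim about the orderings analyzed in the talk.

```python
def bottom_left_squares(width, sizes, max_height=None):
    """Place squares in the given order at the lowest (then leftmost)
    free position in a strip of the given width; return the positions
    and the height used. Grid-based sketch for integer sizes, assuming
    each size is at most the strip width."""
    if max_height is None:
        max_height = sum(sizes)             # trivial bound: stack everything
    occupied = [[False] * width for _ in range(max_height)]
    positions, used_height = [], 0
    for s in sizes:
        placed = False
        for y in range(max_height - s + 1):         # lowest position first
            for x in range(width - s + 1):          # leftmost on ties
                if all(not occupied[y + dy][x + dx]
                       for dy in range(s) for dx in range(s)):
                    for dy in range(s):
                        for dx in range(s):
                            occupied[y + dy][x + dx] = True
                    positions.append((x, y))
                    used_height = max(used_height, y + s)
                    placed = True
                    break
            if placed:
                break
    return positions, used_height
```

For example, three squares of size 2 in a strip of width 4 are placed at (0, 0), (2, 0), and (0, 2), using height 4.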

Although the 2-opt heuristic for the Travelling Salesperson Problem is well studied from the perspective of algorithm performance, little is known about the structure of its transition graph. We derive an upper bound on the number of locally optimal tours under this neighborhood (equivalently, the number of sinks in the transition graph) for random TSP instances. We also briefly discuss how such results may have implications for the performance of metaheuristics, in particular simulated annealing.
11:00 Coffee break, selection of best speaker, closing
12:30 Lunch
14:00 Departure

Venue

FRICO 2024 is hosted by the Otto von Guericke University Magdeburg, Universitaetsplatz 2, 39106 Magdeburg.
The conference will take place in
Building 03, Room 315 (G03-315).
Lunch will be served in the university cafeteria, Building 27. For the daily lunch menu, click here.

Hotels

We recommend the following hotels:

You can also look at: airbnb.com and booking.com.

Travel information


Registration is closed.

We accept late applications for the elevator pitch talks until August 5, 2024!

To register for FRICO 2024, please fill in the following form or send an email to frico2024@ovgu.de containing the following information:

  • Your full name
  • Your affiliation
  • Title of your talk
  • A short abstract of your talk
  • Preference between a full talk or an elevator pitch
  • A link to the paper your talk is based on (optional)
  • Dietary restrictions (if any).

Within five working days, you will receive a confirmation of your submission. If this is not the case, please contact the organizers directly.

The deadline for registration is July 14. Although we aim to welcome as many participants as possible, there is a limit on the number of talks we can accommodate. Talks will be selected based on how closely they adhere to the research interests of FRICO. You will receive a definitive confirmation of your participation shortly after the deadline.

To see how the information that you provide will be processed, you can check our Privacy Policy.


History of FRICO

Ulrich Pferschy described the origin of the FRICO in the program booklet of the 10th FRICO as follows:

" … However, some people who are taking part in this relaxed exchange of ideas for the first time may ask themselves how such an unstructured event came about. To prevent the formation of false legends, which are sometimes already spread in an hour with wine, I would like to briefly present the official version of the FRICO creation.
In the autumn of 1996, a two-week summer school on the approximation of combinatorial optimization problems took place in Udine (this was the summer school to which the great Papadimitriou had mistakenly arrived a year too early). On the free weekend contained therein, it was obvious to undertake an excursion into the surrounding wine country. At least this was the firm conviction of the two Graz participants, Rüdiger Rudolf and myself. Since excursions of this kind are only half as nice for two (if they are men), we successfully tried to win the apparently like-minded participants Dagmar Handke (Konstanz) and Katja Wolf (Cologne) as companions. The beautiful excursion ended in a trattoria in Cividale with a special local dish, which can best be described as a mixture of potatoes, onions, bacon, and cheese; nothing for a weak stomach. The Frico cheese, a Friulian specialty, gives the dish its name. In a cheerful circle, Katja invited all of us to a Frico dinner in Cologne without knowing what she was starting.
However, it was to take over a year before this invitation could be made more concrete. Now the way from Graz to Cologne is quite long, and as optimizers, we tried to combine the pleasant with the useful. Without further ado, we offered to combine the private visit with a lecture at the then ZPR Cologne. And so that it didn’t look as if the people of Cologne can only listen and have nothing to say themselves, Katja immediately obliged a few "locals" to give further lectures. Thus a one-day workshop had developed in the twinkling of an eye. Although the organizer said: "We can’t call it FRICO", I managed to find the acronym that is known today.
The first FRICO Workshop in 1997 was a great success (in contrast to the Frico dinner in the evening, which was canceled from the program in the following years). This moved Professor Schrader to donate a barrel of Kölsch and to "threaten" to continue organizing this event format on his own if we did not. Of course, we couldn’t afford to let that happen, and so the second FRICO was decided in Graz in 1998.
The rest is history."

Previous FRICOs and best talk awards


Contact

Kirill Kukharenko
Institut fuer Mathematische Optimierung
Universitaetsplatz 2
39106, Magdeburg
Germany
Email: frico2024@ovgu.de
Phone: +49 391 67-52671