50th EATCS International Colloquium on Automata, Languages and Programming, ICALP 2023
Fedor Fomin, Petr Golovach, Danil Sagunov and Kirill Simonov
Abstract: Parameterization above (or below) a guarantee is a successful concept in parameterized algorithms. The idea is that many computational problems admit “natural” guarantees, raising the algorithmic question of whether a better solution (above the guarantee) can be obtained efficiently. For example, for every boolean CNF formula on m clauses, there is an assignment that satisfies at least m/2 clauses. How difficult is it to decide whether there is an assignment satisfying more than m/2 + k clauses? Or, if an n-vertex graph has a perfect matching, then its vertex cover is at least n/2. Is there a vertex cover of size at most n/2 + k for some k ≥ 1, and how difficult is it to find such a vertex cover? The above-guarantee paradigm has led to several exciting discoveries in the areas of parameterized algorithms and kernelization. We argue that this paradigm could bring forth fresh perspectives on well-studied problems in approximation algorithms. Our example is the longest cycle problem. One of the oldest results in extremal combinatorics is the celebrated Dirac’s theorem from 1952. Dirac’s theorem provides the following guarantee on the length of the longest cycle: for every 2-connected n-vertex graph G with minimum degree δ(G) ≤ n/2, the length of a longest cycle L is at least 2δ(G). Thus the “essential” part of finding the longest cycle is approximating the “offset” k = L − 2δ(G). The main result of this paper is an above-guarantee approximation theorem for k. Informally, the theorem says that approximating the offset k is not harder than approximating the total length L of a cycle. In other words, for any (reasonably well-behaved) function f, a polynomial-time algorithm constructing a cycle of length f(L) in an undirected graph with a cycle of length L yields a polynomial-time algorithm constructing a cycle of length 2δ(G) + Ω(f(k)).
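The m/2 guarantee invoked above has a one-line proof that is easy to make executable: every nonempty clause is satisfied either by an assignment x or by its bitwise complement, so the better of the two meets the guarantee. A minimal sketch (the clause encoding and function names are illustrative, not from the paper):

```python
def satisfied(clauses, assignment):
    """Count clauses with at least one true literal; literal i > 0
    means variable i, and i < 0 means its negation."""
    return sum(
        any((lit > 0) == assignment[abs(lit)] for lit in clause)
        for clause in clauses
    )

def half_guarantee(clauses, n):
    """For any assignment x, every nonempty clause is satisfied by x
    or by its complement, so the better of the two satisfies >= m/2."""
    x = {v: False for v in range(1, n + 1)}
    y = {v: True for v in range(1, n + 1)}
    return x if satisfied(clauses, x) >= satisfied(clauses, y) else y

# (x1 or not x2) and (not x1) and (x2 or x3)
clauses = [[1, -2], [-1], [2, 3]]
a = half_guarantee(clauses, 3)
assert 2 * satisfied(clauses, a) >= len(clauses)
```

The hard algorithmic content of the paper starts where this easy argument stops: beating the guarantee by an offset k.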
Vaishali Surianarayanan, Daniel Lokshtanov and Saket Saurabh
Abstract: In the Min
Fedor Fomin, Petr Golovach, Ignasi Sau, Giannos Stamoulis and Dimitrios M. Thilikos
Abstract: We introduce a novel model-theoretic framework inspired from graph modification and based on the interplay between model theory and algorithmic graph minors. The core of our framework is a new compound logic operating with two types of sentences, expressing graph modification: the modulator sentence, defining some property of the modified part of the graph, and the target sentence, defining some property of the resulting graph. In our framework, modulator sentences are in counting monadic second-order logic (CMSOL) and have models of bounded treewidth, while target sentences express first-order logic (FOL) properties along with minor-exclusion. Our logic captures problems that are not definable in first-order logic and, moreover, may have instances of unbounded treewidth. Also, it permits the modeling of wide families of problems involving vertex/edge removals, alternative modulator measures (such as elimination distance or G-treewidth), multistage modifications, and various cut problems. Our main result is that, for this compound logic, model-checking can be done in quadratic time. All derived algorithms are constructive and this, as a byproduct, extends the constructibility horizon of the algorithmic applications of the Graph Minors theorem of Robertson and Seymour. The proposed logic can be seen as a general framework to capitalize on the potential of the irrelevant vertex technique. It gives a way to deal with problem instances of unbounded treewidth, for which Courcelle’s theorem does not apply.
Shi Li
Abstract: We study nearly-linear time approximation algorithms for non-preemptive scheduling problems in two settings: the unrelated machine setting, and the identical machine with job precedence constraints setting, under the well-studied objectives such as makespan and weighted completion time. For many problems, we develop nearly-linear time approximation algorithms with approximation ratios matching the current best ones achieved in polynomial time.
Our main technique is linear programming relaxation. For the unrelated machine setting, we formulate mixed packing and covering LP relaxations of nearly-linear size, and solve them approximately using the nearly-linear time solver of Young. For the makespan objective, we develop a rounding algorithm with
For problems in the identical machine with precedence constraints setting, the precedence constraints cannot be formulated as packing or covering constraints. To achieve the nearly-linear running time, we define a polytope for the constraints, and leverage the multiplicative weight update (MWU) method with an oracle which always returns solutions in the polytope.
Along the way of designing the oracle, we encounter the single-commodity maximum flow problem over a directed acyclic graph
Spencer Compton, Slobodan Mitrović and Ronitt Rubinfeld
Abstract: Interval scheduling is a basic problem in the theory of algorithms and a classical task in combinatorial optimization.
We develop a set of techniques for partitioning and grouping jobs based on their starting and ending times, that enable us to view an instance of interval scheduling on many jobs as a union of multiple interval scheduling instances, each containing only a few jobs. Instantiating these techniques in dynamic and local settings of computation leads to several new results.
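As background for these results, the static, unweighted version of interval scheduling on a single machine is solved exactly by the classic earliest-finish-time greedy; the paper's contribution lies in making such computations dynamic and local. A minimal sketch of the static baseline (the names and the half-open-interval convention are illustrative):

```python
def max_nonoverlapping(jobs):
    """Earliest-finish-time greedy: exact for unweighted interval
    scheduling on one machine; jobs are half-open intervals (start, end)."""
    chosen, last_end = [], float("-inf")
    for start, end in sorted(jobs, key=lambda j: j[1]):
        if start >= last_end:       # job starts after the last chosen one ends
            chosen.append((start, end))
            last_end = end
    return chosen

jobs = [(0, 3), (3, 5), (1, 8), (5, 9)]
assert max_nonoverlapping(jobs) == [(0, 3), (3, 5), (5, 9)]
```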
For
Our techniques are also applicable in a setting where jobs have rewards/weights. For this case we design a fully dynamic deterministic algorithm whose worst-case update and query time are
Equivalently, this is the first algorithm that maintains a
This is an exponential improvement in
We extend our approaches for interval scheduling on a single machine to the setting with
Sam Coy, Artur Czumaj, Peter Davies and Gopinath Mishra
Abstract: We consider the distributed complexity of the (degree+1)-list coloring problem, in which each node u of degree d(u) is assigned a palette of d(u)+1 colors, and the goal is to find a proper coloring using these color palettes. The (degree+1)-list coloring problem is a natural generalization of the classical (Δ+1)-coloring and (Δ+1)-list coloring problems, both being benchmark problems extensively studied in distributed and parallel computing.
In this paper we settle the complexity of the (degree+1)-list coloring problem in the Congested Clique model by showing that it can be solved deterministically in a constant number of rounds.
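For intuition on why a palette of size d(u)+1 always suffices, note that a sequential greedy pass can never get stuck: when a vertex is colored, at most d(u) of its palette colors are blocked by neighbors. The distributed challenge addressed in the paper is achieving this in constantly many Congested Clique rounds rather than sequentially. A minimal sequential sketch (the data layout is illustrative):

```python
def greedy_list_coloring(adj, palettes):
    """Color vertices one by one with any palette color not used by an
    already-colored neighbor; |palette(u)| >= deg(u) + 1 guarantees a
    free color exists, so the pass never gets stuck."""
    color = {}
    for u in adj:
        used = {color[v] for v in adj[u] if v in color}
        color[u] = next(c for c in palettes[u] if c not in used)
    return color

# A triangle: every vertex has degree 2 and a palette of 3 colors.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
palettes = {0: [1, 2, 3], 1: [2, 3, 4], 2: [3, 4, 5]}
col = greedy_list_coloring(adj, palettes)
assert all(col[u] != col[v] for u in adj for v in adj[u])
```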
Shimon Kogan and Merav Parter
Abstract: The only known upper bound for this problem is given by an implicit construction of
[Pettie, ICALP 2007] that provides a linear-size emulator with
In this work we improve upon the long-standing additive stretch of
The new emulators are based on an extended and optimized toolkit for computing weighted additive emulators with sublinear distance error. Our key construction provides a weighted modification of the well-known Thorup and Zwick emulators [SODA 2006]. We believe that this TZ variant might be of independent interest, especially for providing improved stretch for distant pairs.
Dmitry Paramonov, Gillat Kol, Klim Efremenko and Raghuvansh R. Saxena
Abstract: Single-hop radio networks (SHRN) are a well studied abstraction of communication over a wireless channel. In this model, in every round, each of the n participating parties may decide to broadcast a message to all the others, potentially causing collisions. We consider the SHRN model in the presence of stochastic message drops (i.e., erasures), where in every round, the message received by each party is erased (replaced by ⊥) with some small constant probability, independently.
Our main result is a constant rate coding scheme, allowing one to run protocols designed to work over the (noiseless) SHRN model over the SHRN model with erasures. Our scheme converts any protocol Π of length at most exponential in n over the SHRN model to a protocol Π′ that is resilient to constant fraction of erasures and has length linear in the length of Π.
We mention that for the special case where the protocol Π is non-adaptive, i.e., the order of communication is fixed in advance, such a scheme was known. Nevertheless, adaptivity is widely used and is known to hugely boost the power of wireless channels, which makes handling the general case of adaptive protocols Π both important and more challenging. Indeed, to the best of our knowledge, our result is the first constant rate scheme that converts adaptive protocols to noise resilient ones in any multi-party model.
Chandra Chekuri and Rhea Jain
Abstract: Classical network design models, such as the Survivable Network Design problem (SNDP), are (partly) motivated by robustness to faults under the assumption that any subset of edges up to a specific number can fail. We consider non-uniform fault models where the subset of edges that fail can be specified in different ways. Our primary interest is in the flexible graph connectivity model, in which the edge set is partitioned into safe and unsafe edges. Given parameters p and q, the goal is to find a cheap subgraph that remains p-connected even after the failure of q unsafe edges. We also discuss the bulk-robust model and the relative survivable network design model. While SNDP admits a 2-approximation, the approximability of problems in these more complex models is much less understood even in special cases. We make two contributions.
Our first set of results is in the flexible graph connectivity model. Motivated by a conjecture that a constant factor approximation is feasible when p and q are fixed, we consider two special cases. For the s-t case we obtain an approximation ratio that depends only on p,q whenever p+q > pq/2, which includes (p,2) and (2,q) for all p,q. For the global connectivity case we obtain an O(q)-approximation for (2,q), and an O(p)-approximation for (p,2) and (p,3) for any p, and for
Our second result is a poly-logarithmic approximation for a generalization of the bulk-robust model when the "width" of the given instance (the maximum number of edges that can fail in any particular scenario) is fixed. Via this, we derive corresponding approximations for the flexible graph connectivity model and the relative survivable network design model. We utilize a recent framework due to Chen et al. that was designed for handling group connectivity.
Noam Touitou
Abstract: Clairvoyant network design with deadlines or delay has been studied extensively, culminating in an
In this paper, we present the first nonclairvoyant frameworks for network design with deadlines or delay. These frameworks are nearly optimal: their competitive ratio is
Andrej Bogdanov and Alon Rosen
Abstract: Most
Certifying NBV instances is relevant to the computational complexity of approximating the Sherrington-Kirkpatrick Hamiltonian, i.e. maximizing
We present a non-deterministic interactive certification algorithm for NBV when
Ittai Rubinstein
Abstract: The insertion-deletion channel takes as input a binary string
In the trace reconstruction problem, one is given many outputs (called traces) of the insertion-deletion channel applied to the same input message
De, O'Donnell and Servedio (STOC 2017), and Nazarov and Peres (STOC 2017) showed that any string
Holden, Pemantle, Peres and Zhai (COLT 2018) adapt the techniques used to prove this upper bound, to an algorithm for the average-case trace reconstruction with a sample complexity of
However, it is not clear how to apply their techniques more generally and in particular for the recent worst-case upper bound of
We prove a general reduction from the average-case to smaller instances of a problem similar to worst-case.
Using this reduction and a generalization of Chase's bound, we construct an improved average-case algorithm with a sample complexity of
Additionally, we show that Chase's upper-bound holds for the insertion-deletion channel as well.
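For readers unfamiliar with the model, the insertion-deletion channel is easy to simulate: each transmitted bit may be preceded by random inserted bits and may itself be dropped. A toy sketch for generating traces (the parameterization, e.g. geometrically many insertions per position, is one common convention chosen here purely for illustration):

```python
import random

def ins_del_channel(msg, p_del=0.1, p_ins=0.1, rng=None):
    """Toy binary insertion-deletion channel: before each input bit a
    geometric number of uniform random bits is inserted, and the bit
    itself is deleted with probability p_del."""
    rng = rng or random.Random(0)
    out = []
    for bit in msg:
        while rng.random() < p_ins:   # inserted random bits
            out.append(rng.randrange(2))
        if rng.random() >= p_del:     # bit survives deletion
            out.append(bit)
    return out

# Five traces of the same 20-bit message, as in trace reconstruction.
traces = [ins_del_channel([0, 1] * 10, rng=random.Random(s)) for s in range(5)]
assert ins_del_channel([1, 0, 1], 0.0, 0.0) == [1, 0, 1]
```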
Tobias Friedrich, Andreas Göbel, Maximilian Katzmann and Leon Schiller
Abstract: A recent trend in the context of graph theory is to bring theoretical analyses closer to empirical observations, by focusing the studies on random graph models that are used to represent practical instances. There, it was observed that geometric inhomogeneous random graphs (GIRGs) yield good representations of complex real-world networks, by expressing edge probabilities as a function that depends on (heterogeneous) vertex weights and distances in some underlying geometric space that the vertices are distributed in. While most of the parameters of the model are understood well, it was unclear how the dimensionality of the ground space affects the structure of the graphs.
In this paper, we complement existing research into the dimension of geometric random graph models and the ongoing study of determining the dimensionality of real-world networks, by studying how the structure of GIRGs changes as the number of dimensions increases. We prove that, in the limit, GIRGs approach non-geometric inhomogeneous random graphs and present insights on how quickly the decay of the geometry impacts important graph structures. In particular, we study the expected number of cliques of a given size as well as the clique number and characterize phase transitions at which their behavior changes fundamentally. Finally, our insights help in better understanding previous results about the impact of the dimensionality on geometric random graphs.
Shaddin Dughmi, Yusuf Hakan Kalaycı and Neel Patel
Abstract: Motivated by recent progress on stochastic matching with few queries, we embark on a systematic study of the sparsification of stochastic packing problems more generally. Specifically, we consider packing problems where elements are independently active with a given probability
Our focus is on computing sparse sets containing on the order of
First, we exhibit a generic sparsifier of degree
Or Zamir
Abstract: Let
We construct an algorithm
In particular, an algorithm whose running time is a random variable
No information about the distribution of
Michael Whitmeyer and Siddharth Iyer
Abstract: Given a bounded function
It is not hard to see that a random function already has all of its Fourier coefficients extremely small, so we focus on a natural class of structured functions.
For the natural class of Fourier degree
To complement this result, we show the existence of degree
Along the way, we provide multiple characterizations of the Fourier coefficients of functions restricted to subspaces of
Jakob Bæk Tejs Houen and Mikkel Thorup
Abstract: The Sparse Johnson-Lindenstrauss Transform of Kane and Nelson (SODA 2012) provides a linear dimensionality-reducing map
The previous analyses of the Sparse Johnson-Lindenstrauss Transform all assumed access to a
The main contribution of this paper is a more general analysis of the Sparse Johnson-Lindenstrauss Transform with fewer assumptions on the hash function.
We also show that the Mixed Tabulation hash function of Dahlgaard, Knudsen, Rotenberg, and Thorup (FOCS 2015) satisfies the conditions of our analysis, thus giving us the first analysis of a Sparse Johnson-Lindenstrauss Transform that works with a practical hash function.
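Structurally, a Sparse Johnson-Lindenstrauss Transform is an m x d matrix in which every column has exactly s nonzero entries of value ±1/√s, with positions and signs chosen by a hash function; the question studied in the paper is how weak that hash function may be. A toy sketch in which Python's PRNG stands in for the hash family (this illustrates the matrix shape only, not the paper's analysis):

```python
import math
import random

def sparse_jl_matrix(m, d, s, seed=0):
    """m x d matrix with exactly s nonzeros of value +-1/sqrt(s) per
    column; Python's PRNG stands in for the hash family."""
    rng = random.Random(seed)
    A = [[0.0] * d for _ in range(m)]
    for j in range(d):
        for i in rng.sample(range(m), s):   # s distinct rows per column
            A[i][j] = rng.choice((-1.0, 1.0)) / math.sqrt(s)
    return A

def apply_jl(A, x):
    """The dimensionality-reducing map is just a matrix-vector product."""
    return [sum(row[j] * x[j] for j in range(len(x))) for row in A]

A = sparse_jl_matrix(m=64, d=256, s=8)
assert all(sum(1 for row in A if row[j] != 0) == 8 for j in range(256))
```

Sparsity matters because applying the map to a vector with t nonzeros costs O(t·s) rather than O(t·m).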
Ruizhe Zhang and Xinzhi Zhang
Abstract: In 2013, Marcus, Spielman, and Srivastava resolved the famous Kadison-Singer conjecture. It states that for
In this paper, we provide two results for a broader class of polynomials, the hyperbolic polynomials. Furthermore, our results are in two generalized settings:
* The first one shows that the Kadison-Singer result requires a weaker assumption
* The second one relaxes the Kadison-Singer result's distribution assumption to the Strongly Rayleigh distribution.
To the best of our knowledge, the previous results only support determinant polynomials [Anari and Oveis Gharan'14, Kyng, Luh and Song'20]. It is unclear whether they can be generalized to a broader class of polynomials. We also provide a sub-exponential time algorithm for constructing our results.
Daniel Hader and Matthew Patitz
Abstract: Algorithmic self-assembly occurs when components in a disorganized collection autonomously combine to form structures and, by their design and the dynamics of the system, are forced to intrinsically follow the execution of algorithms. Motivated by applications in DNA-nanotechnology, theoretical investigations in algorithmic tile-based self-assembly have blossomed into a mature theory with research strongly leveraging tools from computational theory, complexity theory, information theory, and graph theory to develop a wide range of models and to show that many are computationally universal, while also exposing a wide variety of powers and limitations of each. In addition to computational universality, the abstract Tile-Assembly Model (aTAM) was shown to be intrinsically universal (FOCS 2012), a strong notion of completeness where a single tile set is capable of simulating the full dynamics of all systems within the model; however, this result fundamentally required non-deterministic tile attachments. This was later confirmed necessary when it was shown that the class of directed aTAM systems, those in which all possible sequences of tile attachments eventually result in the same terminal assembly, is not intrinsically universal (FOCS 2016). Furthermore, it was shown that the non-cooperative aTAM, where tiles only need to match on 1 side to bind rather than 2 or more, is not intrinsically universal (SODA 2014) nor computationally universal (STOC 2017). Building on these results to further investigate the impacts of other dynamics, Hader et al. examined several tile-assembly models which varied across (1) the numbers of dimensions used, (2) restrictions imposed on the diffusion of tiles through space, and (3) whether each system is directed, and determined which models exhibited intrinsic universality (SODA 2020). 
Such results have shed much light on the roles of various aspects of the dynamics of tile-assembly and their effects on the universality of each model. In this paper we extend that previous work to provide direct comparisons of the various models against each other by considering intrinsic simulations between models. Our results show that in some cases, one model is strictly more powerful than another, and in others, pairs of models have mutually exclusive capabilities. This direct comparison of models helps expose the impacts of these three important aspects of self-assembling systems, and further helps to define a hierarchy of tile-assembly models analogous to the hierarchies studied in traditional models of computation.
Robert Ferens and Marek Szykuła
Abstract: A complete deterministic finite (semi)automaton (DFA) with a set of states
The concept of completely reachable automata appeared several times, in particular, in connection with synchronizing automata; the class contains the Černý automata and covers a few separately investigated subclasses.
The notion was introduced by Bondar and Volkov (2016), who also raised the question about the complexity of deciding if an automaton is completely reachable.
We develop a polynomial-time algorithm for this problem, which is based on a new complement-intersecting technique for finding an extending word for a subset of states.
The algorithm works in
Finally, we prove a weak version of Don's conjecture for this class of automata: a subset of size
This implies a quadratic upper bound in
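The notion itself is easy to state operationally: an automaton is completely reachable iff every nonempty subset of states is the image of the full state set under some word. For tiny automata this can be checked by brute-force BFS over subset images (exponential in general, in contrast to the paper's polynomial-time algorithm); a sketch using the Černý automaton on 3 states:

```python
def reachable_subsets(n, delta):
    """BFS over images of the full state set under letters; the DFA is
    completely reachable iff every nonempty subset of states appears.
    Brute force (exponential), unlike the paper's polynomial algorithm."""
    full = frozenset(range(n))
    seen, frontier = {full}, [full]
    while frontier:
        S = frontier.pop()
        for letter in delta:
            T = frozenset(delta[letter][q] for q in S)
            if T not in seen:
                seen.add(T)
                frontier.append(T)
    return seen

# Cerny automaton C_3: 'a' cycles the states, 'b' merges state 2 into 0.
delta = {"a": {0: 1, 1: 2, 2: 0}, "b": {0: 0, 1: 1, 2: 0}}
assert len(reachable_subsets(3, delta)) == 7  # all 7 nonempty subsets
```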
Olivier Carton, Gaëtan Douéneau-Tabot, Emmanuel Filiot, and Sarah Winter
Abstract: Regular functions of infinite words are (partial) functions realized by deterministic two-way transducers with infinite look-ahead. Equivalently, Alur et al. have shown that they correspond to functions realized by deterministic Muller streaming string transducers, and to functions defined by MSO-transductions. Regular functions are however not computable in general (for a classical extension of Turing computability to infinite inputs), and we consider in this paper the class of deterministic regular functions of infinite words, realized by deterministic two-way transducers without look-ahead. We prove that it is a well-behaved class of functions: they are computable, closed under composition, characterized by the guarded fragment of MSO-transductions, by deterministic Büchi streaming string transducers, by deterministic two-way transducers with finite look-ahead, and by finite compositions of sequential functions and one fixed basic function called map-copy-reverse.
Bader Abu Radi and Orna Kupferman
Abstract: Nondeterminism is a fundamental notion in Theoretical Computer Science.
A nondeterministic automaton is semantically deterministic (SD) if different nondeterministic choices in the automaton lead to equivalent states. Semantic determinism is interesting as it is a natural relaxation of determinism, and as some applications of automata in formal methods require deterministic automata, yet in fact can use automata with some level of nondeterminism, tightly related to semantic determinism.
In the context of finite words, semantic determinism coincides with determinism, in the sense that every pruning of an SD automaton to a deterministic one results in an equivalent automaton. We study SD automata on infinite words, focusing on Büchi, co-Büchi, and weak automata. We show that there, while semantic determinism does not increase the expressive power, the combinatorial and computational properties of SD automata are very different from those of deterministic automata.
In particular, SD Büchi and co-Büchi automata are exponentially more succinct than deterministic ones (in fact, also exponentially more succinct than history-deterministic automata), their complementation involves an exponential blow up, and decision procedures for them like universality and minimization are PSPACE-complete. For weak automata, we show that while an SD weak automaton need not be pruned to an equivalent deterministic one, it can be determinized to an equivalent deterministic weak automaton with the same state space, implying also efficient complementation and decision procedures for SD weak automata.
Mikołaj Bojańczyk and Lê Thành Dũng Nguyễn
Abstract: We consider regular string-to-string functions, i.e. functions that are recognized by copyless streaming string transducers, or any of their equivalent models, such as deterministic two-way automata. We give yet another characterization, which is very succinct: finiteness-preserving functors from the category of semigroups to itself, together with a certain output function that is a natural transformation.
Patricia Bouyer, Nathanaël Fijalkow, Mickael Randour, and Pierre Vandenhove
Abstract: This paper studies two-player zero-sum games played on graphs and makes contributions toward the following question: given an objective, how much memory is required to play optimally for that objective? We study regular objectives, where the goal of one of the two players is that eventually the sequence of colors along the play belongs to some regular language over finite words. We obtain different characterizations of the chromatic memory requirements for such objectives for both players, from which we derive complexity-theoretic statements: deciding whether there exist small memory structures sufficient to play optimally is NP-complete for both players. Some of our characterization results apply to a more general class of objectives: topologically closed and topologically open sets.
Antonio Casares and Pierre Ohlmann
Abstract: This paper is concerned with games of infinite duration played over potentially infinite graphs. Recently, Ohlmann (TheoretiCS 2023) presented a characterisation of objectives admitting optimal positional strategies, by means of universal graphs: an objective is positional if and only if it admits well-ordered monotone universal graphs. We extend Ohlmann’s characterisation to encompass (finite or infinite) memory upper bounds. We prove that objectives admitting optimal strategies with ε-memory less than m (a memory that cannot be updated when reading an ε-edge) are exactly those which admit well-founded monotone universal graphs whose antichains have size bounded by m. We also give a characterisation of chromatic memory by means of appropriate universal structures. Our results apply to finite as well as infinite memory bounds (for instance, to objectives with finite but unbounded memory, or with countable memory strategies). We illustrate the applicability of our framework by carrying out a few case studies, we provide examples witnessing limitations of our approach, and we discuss general closure properties which follow from our results.
Mohak Goyal, Sukolsak Sakshuwong, Sahasrajit Sarmasarkar and Ashish Goel
Abstract: We study low sample complexity mechanisms in participatory budgeting (PB), where each voter votes for a preferred allocation of funds to various projects, subject to project costs and total spending constraints. We analyze the distortion that PB mechanisms introduce relative to the minimum-social-cost outcome in expectation. The Random Dictator mechanism for this problem obtains a distortion of 2. In a special case where every voter votes for exactly one project, [Fain et al '17] obtain a distortion of 4/3. We show that when PB outcomes are determined as any convex combination of the votes of two voters, the distortion is 2. When three uniformly randomly sampled votes are used, we give a PB mechanism that obtains a distortion of at most 1.66, thus breaking the barrier of 2 with the smallest possible sample complexity.
We give a randomized Nash bargaining scheme where two uniformly randomly chosen voters bargain with the disagreement point as the vote of a voter chosen uniformly at random. This mechanism has a distortion of at most 1.66. We provide a lower bound of 1.38 for the distortion of this scheme. Further, we show that PB mechanisms that output a median of the votes of three voters chosen uniformly at random have a distortion of at most 1.80.
Siddharth Barman and Pooja Kulkarni
Abstract: Cake cutting is a classic model for studying fair division of a heterogeneous, divisible resource among agents with individual preferences. Addressing cake division under a typical requirement that each agent must receive a connected piece of the cake, we develop approximation algorithms for finding envy-free (fair) cake divisions. In particular, this work improves the state-of-the-art additive approximation bound for this fundamental problem. Our results hold for general cake division instances in which the agents' valuations satisfy basic assumptions and are normalized (to have value
Prior work has shown that one can efficiently compute a cake division (with connected pieces) in which the additive envy of any agent is at most 1/3. An efficient algorithm is also known for finding connected cake divisions that are (almost) 1/2-multiplicatively envy-free. Improving the additive approximation guarantee and maintaining the multiplicative one, we develop a polynomial-time algorithm that computes a connected cake division that is both (1/4 + o(1))-additively envy-free and (1/2 - o(1) )-multiplicatively envy-free. Our algorithm is based on the ideas of interval growing and envy-cycle elimination.
In addition, we study cake division instances in which the number of distinct valuations across the agents is parametrically bounded. We show that such cake division instances admit a fully polynomial-time approximation scheme for connected envy-free cake division.
Michal Feldman, Federico Fusco, Simon Mauras and Rebecca Reiffenhäuser
Abstract: We study truthful mechanisms for welfare maximization in online bipartite matching. In our (multi-parameter) setting, every buyer is associated with a (possibly private) desired set of items, and has a private value for being assigned an item in her desired set. Unlike most online matching settings, where agents arrive online, in our setting the items arrive online in an adversarial order while the buyers are present for the entire duration of the process. This poses a significant challenge to the design of truthful mechanisms, due to the ability of buyers to strategize over future rounds.
We provide an almost full picture of the competitive ratios in different scenarios, including myopic vs. non-myopic agents, tardy vs. prompt payments, and private vs. public desired sets.
Among other results, we identify the frontier for which the celebrated
Ishan Agarwal and Richard Cole
Abstract: To guarantee all agents are matched, the classic Deferred Acceptance algorithm needs complete preference lists. In practice, preference lists are short, yet stable matching still works well. This raises two questions:
Why does it work well?
Which proposals should agents include in their preference lists?
We study these questions in a model, introduced by Lee, with preferences based on correlated cardinal utilities: these utilities are based on common public ratings of each agent together with individual private adjustments. Lee showed that for suitable utility functions, in large markets, with high probability, for most agents, all stable matchings yield similar valued utilities. By means of a new analysis, we strengthen Lee's result, showing that in large markets, with high probability, for all but the agents with the lowest public ratings, all stable matchings yield similar valued utilities. We can then deduce that for all but the agents with the lowest public ratings, each agent has an easily identified length O(log n) preference list that includes all of its stable matches, addressing the second question above. We note that this identification uses an initial communication phase.
We extend these results to settings where the two sides have unequal numbers of agents, to many-to-one settings, e.g. employers and workers, and we also show the existence of an epsilon-Bayes-Nash equilibrium in which every agent makes relatively few proposals. These results all rely on a new technique for sidestepping the conditioning between the tentative matching events that occur over the course of a run of the Deferred Acceptance algorithm. We complement these theoretical results with an experimental study.
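For reference, the Deferred Acceptance algorithm discussed above is short to state in code. A minimal proposer-proposing sketch with complete preference lists (names are illustrative; with short lists a proposer could exhaust its list, which is exactly the situation the abstract's O(log n)-length-list results address):

```python
def deferred_acceptance(prop_prefs, recv_prefs):
    """Proposer-proposing Deferred Acceptance (Gale-Shapley). Both
    sides rank the other, best first; with complete lists everyone
    is matched and the result is stable."""
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in recv_prefs.items()}
    next_choice = {p: 0 for p in prop_prefs}
    match = {}                      # receiver -> proposer
    free = list(prop_prefs)
    while free:
        p = free.pop()
        r = prop_prefs[p][next_choice[p]]
        next_choice[p] += 1
        holder = match.get(r)
        if holder is None:
            match[r] = p            # r tentatively accepts p
        elif rank[r][p] < rank[r][holder]:
            match[r] = p            # r trades up; holder is freed
            free.append(holder)
        else:
            free.append(p)          # r rejects p
    return match

prop = {"a": ["x", "y"], "b": ["y", "x"]}
recv = {"x": ["a", "b"], "y": ["b", "a"]}
assert deferred_acceptance(prop, recv) == {"x": "a", "y": "b"}
```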
Jin-Yi Cai and Ben Young
Abstract: Recently, Mančinska and Roberson proved that two graphs
Minglong Qin and Penghui Yao
Abstract: This paper considers the decidability of fully quantum nonlocal games with noisy maximally entangled states. Fully quantum nonlocal games are a generalization of nonlocal games, where both questions and answers are quantum and the referee performs a binary POVM measurement to decide whether they win the game after receiving the quantum answers from the players. The quantum value of a fully quantum nonlocal game is the supremum of the probability that they win the game, where the supremum is taken over all the possible entangled states shared between the players and all the valid quantum operations performed by the players. The seminal work MIP∗ = RE implies that it is undecidable to approximate the quantum value of a fully quantum nonlocal game. This still holds even if the players are only allowed to share (arbitrarily many copies of) maximally entangled states.
This paper investigates the case that the shared maximally entangled states are noisy. We prove that there is a computable upper bound on the copies of noisy maximally entangled states for the players to win a fully quantum nonlocal game with a probability arbitrarily close to the quantum value. This implies that it is decidable to approximate the quantum values of these games. Hence, the hardness of approximating the quantum value of a fully quantum nonlocal game is not robust against the noise in the shared states. This paper is built on the framework for the decidability of non-interactive simulations of joint distributions and generalizes the analogous result for nonlocal games. We extend the theory of Fourier analysis to the space of super-operators and prove several key results including an invariance principle and a dimension reduction for super-operators. These results are interesting in their own right and are believed to have further applications.
Alexandru Gheorghiu, Tony Metger and Alexander Poremba
Abstract: Quantum mechanical effects have enabled the construction of cryptographic primitives that are impossible classically. For example, quantum copy-protection allows for a program to be encoded in a quantum state in such a way that the program can be evaluated, but not copied. Many of these cryptographic primitives are two-party protocols, where one party, Bob, has full quantum computational capabilities, and the other party, Alice, is only required to send random BB84 states to Bob. In this work, we show how such protocols can generically be converted to ones where Alice is fully classical, assuming that Bob cannot efficiently solve the LWE problem. In particular, this means that all communication between (classical) Alice and (quantum) Bob is classical, yet they can still make use of cryptographic primitives that would be impossible if both parties were classical. We apply this conversion procedure to obtain quantum cryptographic protocols with classical communication for unclonable encryption, copy-protection, computing on encrypted data, and verifiable blind delegated computation. The key technical ingredient for our result is a protocol for classically-instructed parallel remote state preparation of BB84 states. This is a multi-round protocol between (classical) Alice and (quantum polynomial-time) Bob that allows Alice to certify that Bob must have prepared n uniformly random BB84 states (up to a change of basis on his space). Furthermore, Alice knows which specific BB84 states Bob has prepared, while Bob himself does not. Hence, the situation at the end of this protocol is (almost) equivalent to one where Alice sent n random BB84 states to Bob. This allows us to replace the step of preparing and sending BB84 states in existing protocols by our remote-state preparation protocol in a generic and modular way.
Honghao Fu, Daochen Wang and Qi Zhao
Abstract: Self-testing is a fundamental feature of quantum mechanics that allows a classical verifier to force untrusted quantum devices to prepare certain states and perform certain measurements on them. The standard approach assumes at least two spatially separated devices. Recently, Metger and Vidick [Quantum, 2021] showed that a single EPR pair of a single quantum device can be self-tested under computational assumptions. In this work, we generalize their results to give the first parallel self-test of N EPR pairs and measurements on them in the single-device setting under the same computational assumptions. We show that our protocol can be passed with probability negligibly close to 1 by an honest quantum device using poly(N) resources. Moreover, we show that any quantum device that fails our protocol with probability at most eps must be poly(N,eps)-close to being honest in the appropriate sense. In particular, our protocol can test any distribution over tensor products of computational or Hadamard basis measurements, making it suitable for applications such as device-independent quantum key distribution under computational assumptions. Moreover, a simplified version of our protocol is the first that can efficiently certify an arbitrary number of qubits of a single cloud quantum computer using only classical communication.
Peyman Afshani, Pingan Cheng, Aniket Basu Roy and Zhewei Wei
Abstract: We study the query version of the heavy hitter and approximate quantile problems.
In the former problem, the input is a parameter
where each point is assigned a color from a set
build a structure such that given any geometric range
we can efficiently find a list of heavy hitters in
i.e., colors that appear at least
as well as their approximate frequencies with an additive error of
In the latter problem, each point is assigned a weight from a totally ordered universe
and the query must output a sequence
such that the
rank
Previously, optimal results were only known for the 1D version of the problem [WY11] but
a few sub-optimal methods were available in higher dimensions [AW17, ACH+12].
We study the problems for two important classes of geometric ranges: 3D halfspace and 3D dominance queries.
It is known that many other important queries can be reduced to these two, namely,
1D interval stabbing or interval containment,
2D three-sided queries, 2D circular as well as 2D
We consider the real RAM model of computation where integer registers of size
For dominance queries, we show optimal solutions for both heavy hitter and approximate
quantile problems: using linear space, we can answer both queries in time
Note that as the output size is
our structure takes on average
For more general halfspace heavy hitter queries, the same optimal query time can be achieved by increasing the
space by an extra
By spending extra
we can also support quantile queries.
We remark that it is hopeless to achieve a similar query bound for dimensions 4
or higher unless significant advances are made on the data-structure side of
the theory of geometric approximations.
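As background for the (non-geometric) heavy hitter problem, here is a minimal sketch of the classic Misra-Gries summary, which finds all items occurring more than n/k times in a stream using O(k) counters; the geometric structures above answer the same kind of question restricted to the points inside a query range. This is an illustrative sketch only, not the paper's data structure.

```python
def misra_gries(stream, k):
    """Return candidate heavy hitters (items that may occur > n/k times)."""
    counters = {}
    for x in stream:
        if x in counters:
            counters[x] += 1
        elif len(counters) < k - 1:
            counters[x] = 1
        else:
            # decrement all counters; drop those that reach zero
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters

stream = ["a"] * 6 + ["b"] * 3 + ["c"]
print(misra_gries(stream, k=3))  # "a" survives: it occurs > 10/3 times
```

Each surviving counter underestimates the true frequency by at most n/k, which is the additive-error guarantee mirrored in the range-query setting above.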
Gramoz Goranci and Monika Henzinger
Abstract: We show an
Furthermore we give an algorithm that maintains an exact maximum
Davide Bilò, Keerti Choudhary, Sarel Cohen, Tobias Friedrich, Simon Krogmann and Martin Schirneck
Abstract: We study the problem of estimating the
We design new
We also provide an information-theoretic lower bound on the space requirement of approximate
Louis Esperet, Nathaniel Harms and Viktor Zamaraev
Abstract: For any hereditary graph class F, we construct optimal adjacency labeling schemes for the classes of subgraphs and induced subgraphs of Cartesian products of graphs in F. As a consequence, we show that, if F admits efficient adjacency labels (or, equivalently, small induced-universal graphs) meeting the information-theoretic minimum, then the classes of subgraphs and induced subgraphs of Cartesian products of graphs in F do too. Our proof uses ideas from randomized communication complexity and hashing, and improves upon recent results of Chepoi, Labourel, and Ratel [Journal of Graph Theory, 2020].
Vladimir Braverman, Robert Krauthgamer, Aditya Krishnan and Shay Sapir
Abstract: Many streaming algorithms provide only a high-probability relative approximation. These two relaxations, of allowing approximation and randomization, seem necessary -- for many streaming problems, both relaxations must be employed simultaneously, to avoid an exponentially larger (and often trivial) space complexity. A common drawback of these randomized approximate algorithms is that independent executions on the same input have different outputs, that depend on their random coins. \emph{Pseudo-deterministic} algorithms combat this issue, and for every input, they output with high probability the same ``canonical'' solution.
We consider perhaps the most basic problem in data streams, of counting the number of items in a stream of length at most
Our approach is based on a problem that we call \emph{Shift Finding}, and may be of independent interest. In this problem, one has query access to a shifted version of a known string
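For context on why pseudo-determinism matters here, the following sketch shows the classic Morris approximate counter (not the paper's algorithm): it counts a stream of length n using roughly log log n bits by storing an exponent c and incrementing it with probability 2^-c. Because the output depends on the random coins, independent runs on the same input disagree; this is exactly the reproducibility issue pseudo-deterministic algorithms remove.

```python
import random

def morris_count(n, rng):
    """Approximately count n events using only the exponent c."""
    c = 0
    for _ in range(n):
        if rng.random() < 2.0 ** (-c):
            c += 1
    return 2 ** c - 1  # unbiased estimate of n

# Independent executions on the same input give different outputs.
estimates = [morris_count(1000, random.Random(seed)) for seed in range(5)]
print(estimates)
```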
Sudatta Bhattacharya and Michal Koucky
Abstract: In this paper we give an algorithm for streaming
Sanjeev Khanna, Yu Chen and Zihan Tan
Abstract: We consider the design of sublinear space and query complexity algorithms for estimating the cost of a minimum spanning tree (MST) and the cost of a minimum traveling salesman (TSP) tour in a metric on
We next consider the well-studied semi-streaming regime. In this regime, it is straightforward to compute MST cost exactly even in the case where the input stream only contains the edges of a weighted graph that induce the underlying metric (a graph stream), and the main challenging problem is to estimate TSP cost to within a factor that is strictly better than
Finally, we consider the query complexity of estimating metric TSP cost to within a factor that is strictly better than
Prior to our work, such results were only known for the special cases of graphic TSP and
In terms of techniques, our algorithms for metric TSP cost estimation in both streaming and query settings rely on estimating the {\em cover advantage} which intuitively measures the cost needed to turn an MST into an Eulerian graph. One of our main algorithmic contributions is to show that this quantity can be meaningfully estimated by a sublinear number of queries in the query model. On one hand, the fact that a metric stream reveals pairwise distances for all pairs of vertices provably helps algorithmically. On the other hand, it also seems to render useless techniques for proving space lower bounds via reductions from well-known hard communication problems.
Our main technical contribution in lower bounds is to identify and characterize the communication complexity of new problems that can serve as a canonical starting point for proving metric stream lower bounds.
Rajarshi Bhattacharjee, Gregory Dexter, Petros Drineas, Cameron Musco and Archan Ray
Abstract: We study the problem of approximating the eigenspectrum of a symmetric matrix
George Kenison, Joris Nieuwveld, Joël Ouaknine, and James Worrell
Abstract: It is a longstanding open problem whether there is an algorithm to decide the Positivity Problem for linear recurrence sequences (LRS) over the integers, namely whether given such a sequence, all its terms are non-negative. Decidability is known for LRS of order
In this paper, we focus on the important subclass of reversible LRS, i.e., those integer LRS
Bartosz Bednarczyk, Ian Pratt-Hartmann, and Daumantas Kojelis
Abstract: We define the adjacent fragment AF of first-order logic, obtained by restricting the sequences of variables occurring as arguments in atomic formulas.
The adjacent fragment generalizes (after a routine renaming) two-variable logic as well as the fluted fragment.
We show that the adjacent fragment has the finite model property, and its satisfiability problem is no harder than for the fluted fragment, and hence Tower-complete.
We further show that any relaxation of the adjacency condition on the allowed order of variables in
argument sequences yields a logic whose satisfiability and finite satisfiability problems are undecidable.
Finally, we study the effect of the adjacency requirement on the well-known guarded fragment GF of first-order logic.
We show that the satisfiability problem for the guarded adjacent fragment (GA) remains TwoExpTime-hard,
thus strengthening the known lower bound for GF, but that, in contrast to GF, the fragment GA
has the Craig Interpolation Property.
Michael Benedikt, Dmitry Chistikov, and Alessio Mansutti
Abstract: We investigate expansions of Presburger arithmetic (PA), i.e.,
the theory of the integers with addition and order, with additional
structure related to exponentiation: either a function that takes a
number to the power of 2, or a predicate P for the powers of 2.
The latter theory, denoted as PA(Pow), was introduced by Büchi as a
first attempt at characterising the sets of tuples of numbers that
can be expressed using finite automata; Büchi's method does not give
an elementary upper bound, and the complexity of this theory has been
open. The former theory, denoted as PA(Exp), was shown decidable by
Semenov; while the decision procedure for this theory differs
radically from the automata-based method proposed by Büchi, the method
is also non-elementary. And in fact, the theory with the power function
has a non-elementary lower bound. In this paper, we show that while
Semenov's and Büchi's approaches yield non-elementary blow-ups for PA(Pow),
the theory is in fact decidable in triply exponential time,
similar to the best known quantifier-elimination algorithm for PA.
We also provide a NEXPTIME upper bound for the existential fragment of
PA(Exp), a step towards a finer-grained analysis of its complexity.
Both these results are established by analysing a single parameterized
satisfiability algorithm for PA(Exp), which can be specialized to either
the setting of PA(Pow) or the existential theory of PA(Exp).
Besides the new upper bounds for the existential theory of PA(Exp) and
PA(Pow), we believe our algorithm provides new intuition for the decidability
of these theories, and for the features that lead to non-elementary blow-ups.
Markus Lohrey and Andreas Rosowski
Abstract: We prove that it is
Finally, we deduce from our result on the diameter that it is
Dylan Hyatt-Denesik, Afrouz Ameli and Laura Sanita
Abstract: This paper addresses a graph optimization problem, called the Witness Tree problem, which seeks a spanning tree of a graph minimizing a certain non-linear objective function. This problem is of interest because it plays a crucial role in the analysis of the best approximation algorithms for two fundamental network design problems: Steiner Tree and Node-Tree Augmentation. We will show how a wiser choice of witness trees leads to an improved approximation for Node-Tree Augmentation, and for Steiner Tree in special classes of graphs.
Mohit Garg, Felix Hommelsheim and Nicole Megow
Abstract: We consider the matching augmentation problem (MAP), where a matching of a graph needs to be extended into a
Ishan Bansal, Joe Cheriyan, Logan Grout and Sharat Ibrahimpur
Abstract: We address long-standing open questions raised by Williamson, Goemans, Vazirani and Mihail pertaining to the design of approximation algorithms for problems in network design via the primal-dual method (Combinatorica 15(3):435-454, 1995). Williamson, et al., prove an approximation guarantee of two for connectivity augmentation problems where the connectivity requirements can be specified by so-called uncrossable functions. They state: ``Extending our algorithm to handle non-uncrossable functions remains a challenging open problem. The key feature of uncrossable functions is that there exists an
optimal dual solution which is laminar. This property characterizes uncrossable functions \dots\ A larger open issue is to explore further the power of the primal-dual approach for obtaining approximation algorithms for other combinatorial optimization problems.''
Our main result proves a 16-approximation guarantee via the primal-dual method for a class of functions that generalizes the notion of an uncrossable function. We mention that the support of every optimal dual solution could be non-laminar for instances that can be handled by our methods.
We present a few applications of our main result to problems in the area of network design.
(a) A 16-approximation algorithm for augmenting the family of near-minimum cuts of a graph
The previous best approximation guarantee was
(b) A 20-approximation algorithm for the model of
The previous best approximation guarantee was
(c) A
Given an undirected graph
The previous best approximation guarantee was
Zachary Friggstad and Ramin Mousavi
Abstract: We present an
Claire Mathieu and Hang Zhou
Abstract: In the unsplittable capacitated vehicle routing problem (UCVRP) on trees, we are given a rooted tree with edge weights and a subset of vertices of the tree called terminals. Each terminal is associated with a positive demand between 0 and 1. The goal is to find a minimum length collection of tours starting and ending at the root of the tree such that the demand of each terminal is covered by a single tour (i.e., the demand cannot be split), and the total demand of the terminals in each tour does not exceed the capacity of 1.
For the special case when all terminals have equal demands, a long line of research culminated in a quasi-polynomial time approximation scheme [Jayaprakash and Salavatipour, SODA 2022] and a polynomial time approximation scheme [Mathieu and Zhou, TALG 2022].
In this work, we study the general case when the terminals have arbitrary demands. Our main contribution is a polynomial time
Timothy M. Chan, Qizheng He and Yuancheng Yu
Abstract: We study the time complexity of the discrete
Takehiro Ito, Yuni Iwamasa, Naonori Kakimura, Yusuke Kobayashi, Shunichi Maezawa, Yuta Nozaki, Yoshio Okamoto and Kenta Ozeki
Abstract: In this paper, we consider a transformation of
Siu-Wing Cheng and Haoqiang Huang
Abstract: We propose
Yi-Jun Chang
Abstract: An orthogonal drawing is an embedding of a plane graph into a grid. In a seminal work of Tamassia (SIAM Journal on Computing, 1987), a simple combinatorial characterization of angle assignments that can be realized as bend-free orthogonal drawings was established, thereby allowing an orthogonal drawing to be described combinatorially by listing the angles of all corners. The characterization reduces the need to consider certain geometric aspects, such as edge lengths and vertex coordinates, and simplifies the task of graph drawing algorithm design.
Barth, Niedermann, Rutter, and Wolf (SoCG, 2017) established an analogous combinatorial characterization for ortho-radial drawings, which are a generalization of orthogonal drawings to cylindrical grids. The proof of the characterization is existential and does not result in an efficient algorithm. Niedermann, Rutter, and Wolf (SoCG, 2019) later addressed this issue by developing quadratic-time algorithms for both testing the realizability of a given angle assignment as an ortho-radial drawing without bends and constructing such a drawing.
In this paper, we take it a step further by improving the time complexity of these tasks to near-linear time. We prove a new characterization for ortho-radial drawings based on the concept of a good sequence, which enables us to construct an ortho-radial drawing through a simple greedy algorithm.
Magdalen Dobson and Guy Blelloch
Abstract: We study the connections between sorting and the binary search tree (BST) model, with an aim towards showing that the fields are connected more deeply than is currently appreciated. While any BST can be used to sort by inserting the keys one-by-one, this is a very limited relationship and importantly says nothing about parallel sorting. We show what we believe to be the first formal relationship between the BST model and sorting. Namely, we show that a large class of sorting algorithms, which includes mergesort, quicksort, insertion sort, and almost every instance-optimal sorting algorithm, are equivalent in cost to offline BST algorithms. The main tool we use to bridge the gap between these areas is the geometric interpretation of the BST model introduced by Demaine et al., which finds an equivalence between searches on a BST and point sets in the plane satisfying a certain property. To give an example of the utility of our approach, we introduce the log-interleave bound, a measure of the information-theoretic complexity of a permutation pi, which is within a lg lg n multiplicative factor of a known lower bound in the BST model; we also devise a parallel sorting algorithm with polylogarithmic span that sorts a permutation pi using comparisons proportional to its log-interleave bound. Our aforementioned result on sorting and offline BST algorithms can be used to show existence of an offline BST algorithm whose cost is within a constant factor of the log-interleave bound of any permutation pi.
David Harris and Vladimir Kolmogorov
Abstract: A central problem in computational statistics is to convert a procedure for \emph{sampling} combinatorial objects into a procedure for \emph{counting} those objects, and vice versa. We will consider sampling problems which come from \emph{Gibbs distributions}, which are families of probability distributions over a discrete space
The \emph{partition function} is the normalization factor
We develop a number of algorithms to estimate the counts
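A toy illustration of the objects involved (an assumed example, not from the paper): a Gibbs distribution assigns each configuration x a probability proportional to exp(beta * H(x)) for some Hamiltonian H, and the partition function Z(beta) is the normalizing constant that the sampling-to-counting algorithms estimate.

```python
import itertools
import math

def hamiltonian(x):
    """Example Hamiltonian: number of agreeing adjacent pairs in a 0/1 string."""
    return sum(1 for a, b in zip(x, x[1:]) if a == b)

def partition_function(n, beta):
    """Brute-force normalization constant over all 2^n configurations."""
    return sum(math.exp(beta * hamiltonian(x))
               for x in itertools.product([0, 1], repeat=n))

n, beta = 4, 0.5
Z = partition_function(n, beta)
probs = [math.exp(beta * hamiltonian(x)) / Z
         for x in itertools.product([0, 1], repeat=n)]
print(round(sum(probs), 10))  # the Gibbs probabilities sum to 1
```

Brute force is exponential in n, which is precisely why reductions between sampling and estimating Z are the question of interest.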
Charilaos Efthymiou and Weiming Feng
Abstract: We study the single-site Glauber dynamics for the Hard-core model with fugacity \lambda on the random graph G(n, d/n). We show that for typical instances of the random graph G(n,d/n) and for fugacity \lambda < \frac{d^d}{(d-1)^{d+1}}, the mixing time of Glauber dynamics is n^{1 + O(1/\log \log n)}.
Our result improves on the recent elegant algorithm in [Bezakova, Galanis, Goldberg and Štefankovič; ICALP 2022]. The algorithm there is an MCMC-based sampling algorithm, but it is not the Glauber dynamics. Our algorithm here is simpler, as we use the classic Glauber dynamics. Furthermore, the bounds on the mixing time we prove are smaller than those in the Bezakova et al. paper, hence our algorithm is also faster.
The main challenge in our proof is handling vertices with unbounded degrees. We provide stronger results with regard to spectral independence via branching values and show that our Gibbs distributions satisfy the approximate tensorisation of entropy. We conjecture that the bounds we obtain here are optimal for G(n,d/n).
As a corollary of our analysis for the Hard-core model, we also get bounds on the mixing time of the Glauber dynamics for the Monomer-dimer model on G(n,d/n). The bounds we get for this model are slightly better than those we have for the Hard-core model.
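The chain analyzed above is easy to state in code. A minimal sketch of single-site Glauber dynamics for the hard-core model: states are independent sets, and each step picks a uniformly random vertex and resamples it conditionally (occupied with probability lam/(1+lam) if no neighbor is occupied). The small test graph below is an assumed example.

```python
import random

def glauber_step(adj, state, lam, rng):
    """One update of single-site Glauber dynamics for the hard-core model."""
    v = rng.randrange(len(adj))
    if any(state[u] for u in adj[v]):
        state[v] = 0  # a neighbor is occupied, so v must be unoccupied
    else:
        state[v] = 1 if rng.random() < lam / (1 + lam) else 0
    return state

# 4-cycle with fugacity 1; the chain preserves the independent-set property.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
state, rng = [0, 0, 0, 0], random.Random(42)
for _ in range(1000):
    glauber_step(adj, state, 1.0, rng)
assert all(not (state[u] and state[v]) for u in adj for v in adj[u])
print(state)
```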
Konstantinos Zampetakis and Charilaos Efthymiou
Abstract: Motivated by the theory of spin-glasses in physics, we study the so-called reconstruction problem for the related distributions on the tree, and on the sparse random graph G(n,d/n).
Both cases reduce naturally to studying broadcasting models on the tree, where each edge has its own broadcasting matrix, and this matrix is drawn independently from a predefined distribution. In this context, we study the effect of the configuration at the root on that of the vertices at distance h, as h\to\infty.
We establish the reconstruction threshold for the cases where the broadcasting matrices give rise to symmetric, 2-spin Gibbs distributions. This threshold appears to be a natural extension of the well-known Kesten-Stigum bound which arises in the classic version of the reconstruction problem.
Our results imply, as a special case, the reconstruction threshold for the well-known Edwards-Anderson model of spin-glasses on the tree.
Also, we extend our analysis to the setting of the Galton-Watson tree, and the random graph G(n,d/n), where we establish the corresponding thresholds. Interestingly, for the Edwards-Anderson model on the random graph, we show that the replica symmetry breaking phase transition, established in [Guerra and Toninelli:2004], coincides with the reconstruction threshold.
Compared to classical Gibbs distributions, spin-glasses have many unique features. In that respect, their study calls for new ideas; for example, here we introduce novel estimators for the reconstruction problem. Furthermore, the main technical challenge in the analysis is the presence of (too) many levels of randomness. We circumvent this problem by utilising recent tools coming from the analysis of Markov chains.
David Eppstein and Daniel Frishberg
Abstract: We prove that the well-studied triangulation flip walk on a convex point set mixes in time O(n^3 log^3 n), the first progress since McShine and Tetali's O(n^5 log n) bound in 1997. In the process we give lower and upper bounds of respectively Omega(1/(sqrt n log n)) and O(1/sqrt n)---asymptotically tight up to an O(log n) factor---for the expansion of the associahedron graph K_n. The upper bound recovers Molloy, Reed, and Steiger's Omega(n^{3/2}) bound on the mixing time of the walk. To obtain these results, we introduce a framework consisting of a set of sufficient conditions under which a given Markov chain mixes rapidly. This framework is a purely combinatorial analogue that in some circumstances gives better results than the projection-restriction technique of Jerrum, Son, Tetali, and Vigoda. In particular, in addition to the result for triangulations, we show quasipolynomial mixing for the k-angulation flip walk on a convex point set, for fixed k >= 4.
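The flip walk above can be stated concretely. A hedged sketch: triangulations of a convex n-gon are sets of n-3 pairwise non-crossing diagonals, and one step removes a uniformly random diagonal and inserts the opposite diagonal of the quadrilateral formed by its two incident triangles. The representation below is an illustrative choice, not the paper's.

```python
import random

def boundary(n):
    """Boundary edges of the convex n-gon."""
    return {frozenset((i, (i + 1) % n)) for i in range(n)}

def flip_step(n, diagonals, rng):
    """Flip a uniformly random diagonal of a convex-polygon triangulation."""
    d = rng.choice(sorted(tuple(sorted(x)) for x in diagonals))
    a, b = d
    edges = diagonals | boundary(n)
    # In a convex-polygon triangulation, the common neighbors of a and b
    # are exactly the two apexes of the triangles incident to diagonal (a, b).
    apexes = [c for c in range(n) if c not in (a, b)
              and frozenset((a, c)) in edges and frozenset((b, c)) in edges]
    c1, c2 = apexes
    diagonals.remove(frozenset(d))
    diagonals.add(frozenset((c1, c2)))
    return diagonals

n, rng = 8, random.Random(7)
tri = {frozenset((0, i)) for i in range(2, n - 1)}  # fan triangulation
for _ in range(100):
    flip_step(n, tri, rng)
print(len(tri))  # still n - 3 = 5 diagonals
```

The state space of this chain is the associahedron graph K_n whose expansion the paper bounds.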
Thomas Sauerwald, He Sun and Danny Vagnozzi
Abstract: A closed random walk of length
First, we show that the support of a closed random walk of length
Prashanth Amireddy, Ankit Garg, Neeraj Kayal, Chandan Saha and Bhargav Thankey
Abstract: A recent breakthrough work of Limaye, Srinivasan, and Tavenas [FOCS 2021] proved superpolynomial lower bounds for low-depth arithmetic circuits via a “hardness escalation" approach: they proved lower bounds for low-depth set-multilinear circuits and then lifted the bounds to low-depth general circuits. In this work, we prove superpolynomial lower bounds for low-depth circuits by bypassing the hardness escalation, i.e., the set-multilinearization, step. As set-multilinearization comes with an exponential blow-up in circuit size, our direct proof opens up the possibility of proving an exponential lower bound for low-depth homogeneous circuits by evading a crucial bottleneck. Our bounds hold for the iterated matrix multiplication and the Nisan-Wigderson design polynomials. We also define a subclass of unrestricted depth homogeneous formulas which we call unique parse tree (UPT) formulas, and prove superpolynomial lower bounds for these. This significantly generalizes the superpolynomial lower bounds for regular formulas [Kayal-Saha-Saptharishi, STOC 2014], [Fournier-Limaye-Malod-Srinivasan, STOC 2014].
Lijie Chen, Xin Lyu, Avishay Tal and Hongxun Wu
Abstract: We give the first pseudorandom generators with sub-linear seed length for the following variants of read-once branching programs (roBPs):
First, we show there is an explicit PRG of seed length O(log^2(n/eps) log(n)) that fools unbounded-width unordered permutation branching programs with a single accept state, where
Second, we show there is an explicit PRG fooling unbounded-width unordered regular branching programs with a single accept state with seed length O~(sqrt(n log(n/eps))). Previously, no non-trivial PRG (with seed length less than n) was known for this class (even in the ordered setting). For the ordered case, [Bogdanov-Hoza-Prakriya-Pyne CCC 2022] gave an HSG with seed length O~(log(n)log(1/eps)).
Third, we show there is an explicit PRG fooling width w adaptive branching programs with seed length O(log(n)log^2(nw/eps)). Here, the branching program can choose an input bit to read depending on its current state, while it is guaranteed that on any input x \in {0,1}^n, the branching program reads each input bit exactly once. Previously, no PRG with a non-trivial seed length is known for this class. We remark that there are some functions computable by constant-width adaptive branching programs but not by sub-exponential-width unordered branching programs.
In terms of techniques, we indeed show that the Forbes-Kelly PRG (with the right parameters) from [Forbes-Kelly FOCS 2018] already fools all variants of roBPs above. Our proof adds several new ideas to the original analysis of Forbes-Kelly, and we believe it further demonstrates the versatility of the Forbes-Kelly PRG.
Rotem Oshman and Tal Roth
Abstract: We consider a multiparty setting where
In the worst case, it is known that the communication complexity of finding the intersection is the same as that of solving set disjointness, regardless of the size of the intersection: the cost of both problems is
In this work we consider a realistic setting where the parties' inputs are independent of one another, that is, the input is drawn from a product distribution. We show that this makes finding the intersection significantly easier than in the worst-case: only
samples from the underlying distribution
Amir Azarmehr and Soheil Behnezhad
Abstract: We study the robust communication complexity of maximum matching. Edges of an arbitrary
n-vertex graph G are randomly partitioned between Alice and Bob independently and uniformly.
Alice has to send a single message to Bob such that Bob can find an (approximate) maximum
matching of the whole graph G. We specifically study the best approximation ratio achievable
via protocols where Alice communicates only O~(n) bits to Bob.
There has been a growing interest in the robust communication model due to its connections
to the random-order streaming model. An algorithm of Assadi and Behnezhad [ICALP’21]
implies a (2/3 + ε_0 ∼ .667)-approximation for a small constant 0 < ε_0 < 10^{−18}, which remains
the best-known approximation for general graphs. For bipartite graphs, Assadi and Behnezhad
[Random’21] improved the approximation to .716 albeit with a computationally inefficient (i.e.,
exponential time) protocol.
In this paper, we study a natural and efficient protocol implied by a random-order streaming
algorithm of Bernstein [ICALP’20] which is based on edge-degree constrained subgraphs (EDCS)
[Bernstein and Stein; ICALP’15]. The result of Bernstein immediately implies that this protocol
achieves an (almost) (2/3 ∼ .666)-approximation in the robust communication model.
We present a new analysis, proving that it achieves a much better (almost) (5/6 ∼ .833)-approximation.
This significantly improves previous approximations both for general and bipartite graphs.
We also prove that our analysis of Bernstein’s protocol is tight.
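The EDCS object at the heart of this protocol has a short definition, which the sketch below checks directly (one common formulation of the Bernstein-Stein definition; the graph and the parameter beta here are toy examples, not the paper's choices): a subgraph H of G is a beta-EDCS if every edge of H has degree sum at most beta in H, while every edge of G missing from H has degree sum at least beta - 1.

```python
def degrees(edges):
    """Degree of every vertex in the given edge list."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return deg

def is_edcs(g_edges, h_edges, beta):
    """Check the two EDCS degree constraints for H as a subgraph of G."""
    deg = degrees(h_edges)
    d = lambda v: deg.get(v, 0)
    return (all(d(u) + d(v) <= beta for u, v in h_edges) and
            all(d(u) + d(v) >= beta - 1
                for u, v in g_edges if (u, v) not in h_edges))

# Toy check: a 4-cycle G with H = two opposite edges and beta = 3.
G = [(0, 1), (1, 2), (2, 3), (3, 0)]
H = [(0, 1), (2, 3)]
print(is_edcs(G, H, beta=3))  # True: H-edges have degree sum 2 <= 3,
                              # missing edges have degree sum 2 >= 2
```

The bounded degrees make an EDCS sparse enough for Alice to communicate, while the second constraint guarantees it retains a large matching.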
Takehiro Ito, Naonori Kakimura, Naoyuki Kamiyama, Yusuke Kobayashi, Shun-Ichi Maezawa, Yuta Nozaki and Yoshio Okamoto
Abstract: We prove that the computation of a combinatorial shortest path between two vertices of a graph associahedron, introduced by Carr and Devadoss, is NP-hard. This resolves an open problem raised by Cardinal. A graph associahedron is a generalization of the well-known associahedron. The associahedron is obtained as the graph associahedron of a path. It is a tantalizing and important open problem in theoretical computer science whether the computation of a combinatorial shortest path between two vertices of the associahedron can be done in polynomial time, which is identical to the computation of the flip distance between two triangulations of a convex polygon, and the rotation distance between two rooted binary trees. Our result shows that a certain generalized approach to tackling this open problem is not promising. As a corollary of our theorem, we prove that the computation of a combinatorial shortest path between two vertices of a polymatroid base polytope cannot be done in polynomial time unless P = NP. Since a combinatorial shortest path on the matroid base polytope can be computed in polynomial time, our result reveals an unexpected contrast between matroids and polymatroids.
Jan Dreier, Nikolas Mählmann, Sebastian Siebertz, and Szymon Toruńczyk
Abstract: Monadically stable and monadically NIP classes of structures were initially studied in the context of model theory and defined in logical terms. They have recently attracted attention in the area of structural graph theory, as they generalize notions such as nowhere denseness, bounded cliquewidth, and bounded twinwidth.
Our main result is, to the best of our knowledge, the first purely combinatorial characterization of monadically stable classes of graphs, in terms of a property dubbed flip-wideness. A class C of graphs is flip-wide if for every fixed radius r, every sufficiently large set of vertices of a graph G ∈ C contains a large subset of vertices with mutual distance larger than r, where the distance is measured
in some graph G' that can be obtained from G by performing a bounded number of flips that swap edges and non-edges within a subset of vertices. Flip-wideness generalizes the notion of uniform quasi-wideness, which characterizes nowhere dense classes and had a key impact on the combinatorial and algorithmic treatment of nowhere dense classes. To obtain this result, we develop tools that also apply to the more general monadically NIP classes, based on the notion of indiscernible sequences from model theory. We show that in monadically stable and monadically NIP classes indiscernible sequences impose a strong combinatorial structure on their definable neighborhoods. All our proofs are constructive and yield efficient algorithms.
Jakub Gajarský, Nikolas Mählmann, Rose McCarty, Pierre Ohlmann, Michał Pilipczuk, Wojciech Przybyszewski, Sebastian Siebertz, Marek Sokołowski and Szymon Toruńczyk
Abstract: A class of graphs
In this work we provide a characterization of monadic stability in terms of the Flipper game: a game on a graph played by Flipper, who in each round can complement the edge relation between any pair of vertex subsets, and Connector, who in each round is forced to localize the game to a ball of bounded radius. This is an analog of the Splitter game, which characterizes nowhere dense classes of graphs (Grohe, Kreutzer, and Siebertz, J. ACM '17).
We give two different proofs of our main result. The first proof is based on tools borrowed from model theory, and it exposes an additional property of monadically stable graph classes that is close in spirit to definability of types. Also, as a byproduct, we show that monadic stability for graph classes coincides with monadic stability of existential formulas with two free variables, and we provide another combinatorial characterization of monadic stability via forbidden patterns. The second proof relies on the recently introduced notion of flip-wideness (Dreier, Mählmann, Siebertz, and Toruńczyk, arXiv 2206.13765) and provides an efficient algorithm to compute Flipper's moves in a winning strategy.
Pierre Ohlmann, Michał Pilipczuk, Wojciech Przybyszewski and Szymon Toruńczyk
Abstract: We use model-theoretic tools originating from stability theory to derive a result we call the Finitary Substitute Lemma, which intuitively says the following. Suppose we work in a stable graph class
We use the Finitary Substitute Lemma to derive two corollaries about the existence of certain canonical decompositions in classes of well-structured graphs.
We prove that in the Splitter game, which characterizes nowhere dense graph classes, and in the Flipper game, which characterizes monadically stable graph classes, there is a winning strategy for Splitter, respectively Flipper, that can be defined in first-order logic from the game history. Thus, the strategy is canonical.
We show that for any fixed graph class
Samuel Braunfeld, Anuj Dawar, Ioannis Eleftheriadis and Aris Papadopoulos
Abstract: We prove that for any monotone class of finite relational structures, the first-order theory of the class is NIP in the sense of stability theory if, and only if, the collection of Gaifman graphs of structures in this class is nowhere dense. This generalises to relational structures a result previously known for graphs and answers an open question posed by Adler and Adler (2014). The result is established by the application of Ramsey-theoretic techniques and shows that the property of being NIP is highly robust for monotone classes. We also show that the model-checking problem for first-order logic is intractable on any monotone class of structures that is not (monadically) NIP. This is a contribution towards the conjecture of Bonnet et al. that the hereditary classes of structures admitting fixed-parameter tractable model-checking are precisely those that are monadically NIP.
Michael Lampis
Abstract: Courcelle’s celebrated theorem states that all MSO-expressible properties can be decided in linear time on graphs of bounded treewidth. Unfortunately, the hidden constant implied by this theorem is a tower of exponentials whose height increases with each quantifier alternation in the formula. More devastatingly, this cannot be improved, under standard assumptions, even if we consider the much more restricted problem of deciding FO-expressible properties on trees.
In this paper we revisit this well-studied topic and identify a natural special case where the dependence of Courcelle’s theorem can, in fact, be improved. Specifically, we show that all FO expressible properties can be decided with an elementary dependence on the input formula, if the input graph has bounded pathwidth (rather than treewidth). This is a rare example of treewidth and pathwidth having different complexity behaviors. Our result is also in sharp contrast with MSO logic on graphs of bounded pathwidth, where it is known that the dependence has to be non-elementary, under standard assumptions. Our work builds upon, and generalizes, a corresponding meta-theorem by Gajarský and Hliněný for the more restricted class of graphs of bounded tree-depth.
Manuel Cáceres
Best Student Paper Track A:
Abstract: A minimum chain cover (MCC) of a
In this work we present an algorithm running in time
Ruiwen Dong
Best Student Paper Track B
Abstract: We consider semigroup algorithmic problems in the wreath product
Our paper focuses on two decision problems introduced by Choffrut and Karhumäki (2005): the Identity Problem (does a semigroup contain the neutral element?) and the Group Problem (is a semigroup a group?) for finitely generated sub-semigroups of
We show that both problems are decidable.
Our result complements the undecidability of the Semigroup Membership Problem (does a semigroup contain a given element?) in
Tsun-Ming Cheung, Hamed Hatami, Pooya Hatami and Kaave Hosseini
Best Paper Track A:
Abstract: In many practical learning problems, the learning task is tractable because it is only required to predict the labels of the data points with specific properties. For example, in large-margin classifiers, one seeks to classify only the data points bounded away from the decision boundary by a margin.
In a recent article, Alon, Hanneke, Holzman, and Moran (FOCS '21) introduced a unifying framework to study the learnability of classes of such partial concepts. One of the central questions studied in their work is whether the learnability of a partial concept class is always inherited from the learnability of some ``extension'' of it to a total concept class.
They showed that this is not the case for PAC learning but left the problem for the stronger notion of online learnability open.
We answer their question in the negative by constructing a class of partial concepts that is online learnable, but no extension of it to a class of total concepts is online learnable (or even PAC learnable).
Marvin Künnemann, Filip Mazowiecki, Lia Schütze, Henry Sinclair-Banks and Karol Węgrzycki
Best Paper Track B:
Abstract: Seminal results establish that the coverability problem for Vector Addition Systems with States (VASS) is in EXPSPACE (Rackoff, '78) and is EXPSPACE-hard already under unary encodings (Lipton, '76).
More precisely, Rosier and Yen later utilise Rackoff's bounding technique to show that if coverability holds then there is a run of length at most n^(2^(O(d log d))), where d is the dimension and n is the size of the given unary VASS.
Earlier, Lipton showed that there exist instances of coverability in d-dimensional unary VASS that are only witnessed by runs of length at least n^(2^(Omega(d))).
The remaining gap was stated as an open problem in '82 by Mayr and Meyer and remained open until now.
Our first result closes this gap.
We improve Rackoff's upper bound, removing the twice-exponentiated log(d) factor, thus matching Lipton's lower bound.
This closes the corresponding gap for the exact space required to decide coverability.
This also yields a deterministic n^(2^(O(d)))-time algorithm for coverability.
Our second result is a matching lower bound, that there does not exist a deterministic n^(2^(O(d)))-time algorithm, conditioned upon the Exponential Time Hypothesis.
When analysing coverability, a standard proof technique is to consider VASS with bounded counters.
Bounded VASS make for an interesting and popular model due to strong connections with timed automata.
In addition, we study a natural setting where the counter bound is linear in the size of the VASS.
Here the trivial exhaustive search algorithm runs in O(n^(d+1))-time.
We give evidence that this is near-optimal.
We prove that in dimension one this trivial algorithm is conditionally optimal, by showing that n^(2-o(1))-time is required under the k-cycle hypothesis.
In general fixed dimension d, we show that n^(d-2-o(1))-time is required under the 3-uniform hyperclique hypothesis.
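The trivial exhaustive search mentioned above is easy to make concrete. The following sketch (a toy single-state instance of our own devising, not the authors' algorithm) decides coverability in a bounded VASS by breadth-first search over the at most (B+1)^d configurations:

```python
from collections import deque

def coverable(transitions, source, target, bound):
    """Breadth-first search for coverability in a bounded VASS
    (single-state for simplicity): counters live in {0,...,bound}^d,
    and each transition is an integer vector added to the configuration.
    Returns True iff some reachable configuration covers `target`
    (is >= target componentwise)."""
    seen = {source}
    queue = deque([source])
    while queue:
        conf = queue.popleft()
        if all(c >= t for c, t in zip(conf, target)):
            return True
        for delta in transitions:
            nxt = tuple(c + x for c, x in zip(conf, delta))
            if all(0 <= c <= bound for c in nxt) and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Toy 2-dimensional instance: trade one unit of the first counter
# for two units of the second, or regain a unit of the first.
ts = [(-1, 2), (1, 0)]
print(coverable(ts, (1, 0), (0, 3), bound=4))  # True
```

With counters bounded by B, the search visits at most (B+1)^d configurations per state, matching the O(n^(d+1)) exhaustive-search running time discussed above when B is linear in n.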
Miguel Bosch Calvo, Fabrizio Grandoni and Afrouz Jabal Ameli
Best Paper Track A:
Abstract: The 2-Vertex-Connected Spanning Subgraph problem (2VCSS) is among the most basic NP-hard (Survivable) Network Design problems: we are given an (unweighted) undirected graph
Here we present an improved
Yanlin Chen and Ronald de Wolf
Abstract: Lasso and Ridge are important minimization problems in machine learning and statistics. They are versions of linear regression with squared loss where the vector
Xiantao Li and Chunhao Wang
Abstract: We present an efficient quantum algorithm for simulating the dynamics of Markovian open quantum systems. The performance of our algorithm is similar to the previous state-of-the-art quantum algorithm, i.e., it scales linearly in evolution time and poly-logarithmically in inverse precision. However, our algorithm is conceptually cleaner, and it only uses simple quantum primitives without compressed encoding. Our approach is based on a novel mathematical treatment of the evolution map, which involves a higher-order series expansion based on Duhamel's principle and approximating multiple integrals using scaled Gaussian quadrature. Our method easily generalizes to simulating quantum dynamics with time-dependent Lindbladians. Furthermore, our method of approximating multiple integrals using scaled Gaussian quadrature could potentially be used to produce a more efficient approximation of time-ordered integrals, and therefore can simplify existing quantum algorithms for simulating time-dependent Hamiltonians based on a truncated Dyson series.
Thiago Bergamaschi
Abstract: The ground state energy and the free energy of Quantum Local Hamiltonians are fundamental quantities in quantum many-body physics, however, it is QMA-Hard to estimate them in general. In this paper, we develop new techniques to find classical, additive error product-state approximations for these quantities on certain families of Quantum
We present two main technical contributions. First, we discuss a connection between product-state approximations of local Hamiltonians and combinatorial graph property testing. We develop a series of weak Szemerédi regularity lemmas for
Ryu Hayakawa, Jordi Weggemans, Tomoyuki Morimae, Chris Cade, Marten Folkertsma, Sevag Gharibian and Francois Le Gall
Abstract: Estimating the ground state energy of a local Hamiltonian is a central problem in quantum chemistry. In order to further investigate its complexity and the potential of quantum algorithms for quantum chemistry, Gharibian and Le Gall (STOC 2022) recently introduced the guided local Hamiltonian problem (GLH), which is a variant of the local Hamiltonian problem where an approximation of a ground state (which is called a guiding state) is given as an additional input. Gharibian and Le Gall showed quantum advantage (more precisely, BQP-completeness) for GLH with 6-local Hamiltonians when the guiding state has fidelity (inverse-polynomially) close to 1/2 with a ground state.
In this paper, we optimally improve both the locality and the fidelity parameter: we show that the BQP-completeness persists even with 2-local Hamiltonians, and even when the guiding state has fidelity (inverse-polynomially) close to 1 with a ground state. Moreover, we show that the BQP-completeness also holds for 2-local physically motivated Hamiltonians on a 2D square lattice or a 2D triangular lattice. Beyond the hardness of estimating the ground state energy, we also show BQP-hardness persists when considering estimating energies of excited states of these Hamiltonians instead. Those make further steps towards establishing practical quantum advantage in quantum chemistry.
Paul Beame and Niels Kornerup
Abstract: Cumulative memory---the sum of space used over the steps of a computation---is a fine-grained measure of time-space complexity that was introduced to analyze cryptographic applications more precisely. It is a more accurate cost measure for algorithms with infrequent spikes in memory usage in the context of technologies such as cloud computing that allow dynamic allocation and de-allocation of resources during their execution.
We give the first lower bounds on cumulative memory complexity that apply to general sequential classical algorithms. We also prove the first such bounds for bounded-error quantum circuits. Moreover, we develop general paradigms for bounding cumulative memory complexity inspired by the standard paradigms for proving time-space tradeoff lower bounds, which only lower bound the maximum space used during an execution.
With these new paradigms we obtain lower bounds on cumulative memory that are essentially as strong as the best time-space tradeoff lower bounds which are known to be tight in many cases. Among many possible applications, we show that any classical sorting algorithm with success probability at least
Petr Hliněný and Jan Jedelský
Abstract: Twin-width is a structural width parameter introduced by Bonnet, Kim, Thomassé and Watrigant [FOCS 2020]. Very briefly, its essence is a gradual reduction (a contraction sequence) of the given graph down to a single vertex while maintaining limited difference of neighbourhoods of the vertices, and it can be seen as widely generalizing several other traditional structural parameters. Having such a sequence at hand allows us to solve many otherwise hard problems efficiently. Graph classes of bounded twin-width, in which appropriate contraction sequences are efficiently constructible, are thus of interest in combinatorics and in computer science. However, we currently do not know in general how to obtain a witnessing contraction sequence of low width efficiently, and published upper bounds on the twin-width in non-trivial cases are often “astronomically large”.
We focus on planar graphs, which are known to have bounded twin-width (already since the introduction of twin-width), but the first explicit “non-astronomical” upper bounds on the twin-width of planar graphs appeared just a year ago; namely the bound of at most 183 by Jacob and Pilipczuk [arXiv, January 2022], and 583 by Bonnet, Kwon and Wood [arXiv, February 2022]. A subsequent manuscript of Bekos et al. improved the bound down to 37 [arXiv, April 2022]. We prove that the twin-width of every planar graph is at most 8, and construct a witnessing contraction sequence in linear time. Note that the currently best lower-bound planar example is of twin-width 7, by Král and Lamaison [arXiv, September 2022]. We also prove that the twin-width of every bipartite planar graph is at most 6, and again construct a witnessing sequence in linear time.
Leslie Ann Goldberg and Marc Roth
Abstract: Given a class of graphs C, the problem ParitySub(C) is defined as follows. The input is a graph H in C together with an arbitrary graph G. The problem is to compute, modulo 2, the number of subgraphs of G that are isomorphic to H. The goal of this research is to determine for which classes C the problem ParitySub(C) is fixed-parameter tractable (FPT), i.e., solvable in time f(|H|)·|G|^{O(1)}.
Curticapean, Dell, and Husfeldt (ESA 2021) conjectured that ParitySub(C) is FPT if and only if the class of allowed patterns C is matching splittable, which means that for some fixed B, every H in C can be turned into a matching (a graph in which every vertex has degree at most 1) by removing at most B vertices.
Assuming the randomised Exponential Time Hypothesis, we prove their conjecture for (I) all hereditary pattern classes C, and (II) all tree pattern classes, i.e., all classes C such that every H in C is a tree.
We also establish almost tight fine-grained upper and lower bounds for the case of hereditary patterns (I).
Michał Wlodarczyk
Abstract: In Chordal/Interval Vertex Deletion we ask how many vertices one needs to remove from a graph to make it chordal (respectively: interval). We study these problems under the parameterization by treewidth (tw) of the input graph G. On the one hand, we present an algorithm for Chordal Vertex Deletion with running time 2^O(tw)|V(G)|, improving upon the running time 2^O(tw^2)|V(G)|^O(1) by Jansen, de Kroon, and Wlodarczyk (STOC'21). When a tree decomposition of width tw is given, then the base of the exponent equals 2^(ω−1)·3 + 1. Our algorithm is based on a novel link between chordal graphs and graphic matroids, which allows us to employ the framework of representative families. On the other hand, we prove that the known 2^O(tw log tw)|V(G)|-time algorithm for Interval Vertex Deletion cannot be improved assuming the Exponential Time Hypothesis.
Laure Morelle, Ignasi Sau, Giannos Stamoulis and Dimitrios M. Thilikos
Abstract: Let
This algorithm improves the previous one, given by Sau, Stamoulis, and Thilikos [ICALP 2020, TALG 2022], whose running time was
As a stepping stone for these algorithms, we provide an algorithm that decides whether
Finally, we provide explicit upper bounds on the size of the graphs in the minor-obstruction set of the class of graphs
Hans L. Bodlaender, Carla Groenland and Michał Pilipczuk
Abstract: We investigate the parameterized complexity of Binary CSP parameterized by the vertex cover number and the treedepth of the constraint graph, as well as by a selection of related modulator-based parameters. The main findings are as follows:
We expect that many of the studied classes will be useful in the future for pinpointing the complexity of various structural parameterizations of graph problems.
Lukas Drexler, Jan Eube, Kelin Luo, Heiko Röglin, Melanie Schmidt and Julian Wargalla
Abstract: Motivated by an application from geodesy, we study the connected k-center problem and the connected k-diameter problem. These problems arise from the classical k-center and k-diameter problems by adding a side constraint. For the side constraint, we are given an undirected connectivity graph G on the input points, and a clustering is now only feasible if every cluster induces a connected subgraph in G. Usually in clustering problems one assumes that the clusters are pairwise disjoint. We study this case but additionally also the case that clusters are allowed to be non-disjoint. This can help to satisfy the connectivity constraints.
Our main result is an O(1)-approximation algorithm for the disjoint connected k-center and k-diameter problem for Euclidean spaces of low dimension (constant d) and for metrics with constant doubling dimension. For general metrics, we get an O(log^2(k))-approximation. Our algorithms work by computing a non-disjoint connected clustering first and transforming it into a disjoint connected clustering.
We complement these upper bounds by several upper and lower bounds for variations and special cases of the model.
Yossi Azar and Danny Vainstein
Abstract: We present a novel technique to cluster points in a metric space. A well-known non-parametric objective is to embed the metric space into a simpler structured metric space such as a line (i.e., Linear Arrangement) or a binary tree (i.e., Hierarchical Clustering). Points which are close in the metric space should be mapped to close points/leaves in the line/tree; similarly, points which are far in the metric space should be far in the line or on the tree. In particular we consider the Maximum Linear Arrangement problem (Hassin and Rubinstein 2001) and the Maximum Hierarchical Clustering problem (Vincent Cohen-Addad et al. 2017) applied to metrics.
We design approximation schemes (
Jun-Ting Hsieh and Pravesh Kothari
Abstract: In this note, we describe a
Ilan Doron, Ariel Kulik and Hadas Shachnai
Abstract: We study the budgeted versions of the well known matching and matroid intersection problems. While both problems admit a polynomial-time approximation scheme (PTAS) [Berger et al. (Math. Programming, 2011), Chekuri, Vondrak and Zenklusen (SODA 2011)], it has been an intriguing open question whether these problems admit a fully PTAS (FPTAS), or even an efficient PTAS (EPTAS).
In this paper we answer the second part of this question affirmatively, by presenting an EPTAS for budgeted matching and budgeted matroid intersection. A main component of our scheme is a novel construction of representative sets for desired solutions, whose cardinality depends only on
David Stalfa, Rajmohan Rajaraman and Sheng Yang
Abstract: We study the problem of scheduling precedence-constrained jobs on heterogenous machines in the presence of non-uniform job and machine communication delays. We are given a set of
In a schedule, job
We consider schedules which allow duplication of jobs as well as schedules which do not. When duplication is allowed, we provide an asymptotic
Finally, we consider a more general model, where the delay can be an arbitrary function of the job and the machine executing it: job
This set of results is among the first steps toward cataloging the rich landscape of problems in non-uniform delay scheduling.
Andrzej Dorobisz and Jakub Kozik
Abstract: We investigate local computation algorithms (LCA) for two-coloring of k-uniform hypergraphs. We focus on hypergraph instances that satisfy a strengthened assumption of the Lovász Local Lemma of the form 2^{1−αk}(Δ+1)e < 1, where Δ is the bound on the maximum edge degree. The main question which arises here is for how large α there exists an LCA that is able to properly color such hypergraphs in polylogarithmic time per query. We describe briefly how upgrading the classical sequential procedure of Beck from 1991 with Moser and Tardos' RESAMPLE yields a polylogarithmic LCA that works for α up to 1/4. Then, we present an improved procedure that solves a wider range of instances by allowing α up to 1/3.
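For intuition, the RESAMPLE step of Moser and Tardos mentioned above can be sketched as follows (a minimal global illustration on a toy hypergraph of our own, not the local computation algorithm of the paper):

```python
import random

def two_color(n, edges, seed=0):
    """Moser-Tardos style resampling for 2-coloring a hypergraph:
    while some edge is monochromatic, resample its vertices uniformly
    at random (the RESAMPLE step). `edges` is a list of vertex tuples;
    returns a proper 0/1 coloring. Terminates quickly when the Local
    Lemma condition holds."""
    rng = random.Random(seed)
    color = [rng.randint(0, 1) for _ in range(n)]
    def monochromatic(e):
        return len({color[v] for v in e}) == 1
    while True:
        bad = [e for e in edges if monochromatic(e)]
        if not bad:
            return color
        for v in bad[0]:              # RESAMPLE one violated edge
            color[v] = rng.randint(0, 1)

# A small 3-uniform hypergraph on 6 vertices:
edges = [(0, 1, 2), (2, 3, 4), (4, 5, 0), (1, 3, 5)]
col = two_color(6, edges)
assert all(len({col[v] for v in e}) == 2 for e in edges)
```

An LCA must answer single-vertex color queries in polylogarithmic time without running such a global loop; the abstract's question is for which α that localization remains possible.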
Kazusato Oko, Shinsaku Sakaue and Shin-ichi Tanigawa
Abstract: Spectral hypergraph sparsification, an attempt to extend well-known spectral graph sparsification to hypergraphs, has been extensively studied over the past few years. For undirected hypergraphs, Kapralov, Krauthgamer, Tardos, and Yoshida (2022) have proved an
Daniel Agassy, Dani Dorfman and Haim Kaplan
Abstract: A
One crucial component of SW's algorithm is a non-stop version of the cut-matching game of [Khandekar, Rao, Vazirani, JACM 2009] (KRV): the cut player does not stop when it gets from the matching player an unbalanced sparse cut, but continues to play on a trimmed part of the large side. The crux of our improvement is the design of a non-stop version of the cleverer cut player of [Orecchia, Schulman, Vazirani, Vishnoi, STOC 2008] (OSVV). The cut player of OSVV uses a more sophisticated random walk, a subtle potential function, and spectral arguments. Designing and analysing a non-stop version of this game was an explicit open question asked by SW.
Ishay Haviv
Abstract: A subset of
In the first problem, denoted by Schrijver(n,k,m), we are given access to a coloring of the stable
In the second problem, called Unfair Independent Set in Cycle, we are given
Eric Rivals, Michelle Sweering and Pengfei Wang
Abstract: Consider words of length n. The set of all periods of a word of length
Frits Vaandrager and Thorsten Wißmann
Abstract: We provide a new perspective on the problem of how high-level state machine models with abstract actions can be related to low-level models in which these actions are refined by sequences of concrete actions. We describe the connection between high-level and low-level actions using action codes, a variation of the prefix codes known from coding theory. For each action code R, we introduce a contraction operator α_R that turns a low-level model M into a high-level model, and a refinement operator ϱ_R that transforms a high-level model N into a low-level model. We establish a Galois connection ϱ_R(N) ⊑ M ⇔ N ⊑ α_R(M), where ⊑ is the well-known simulation preorder. For conformance, we typically want to obtain an overapproximation of model M. To this end, we also introduce a concretization operator γ_R, which behaves like the refinement operator but adds arbitrary behavior at intermediate points, giving us a second Galois connection α_R(M) ⊑ N ⇔ M ⊑ γ_R(N). Action codes may be used to construct adaptors that translate between concrete and abstract actions during learning and testing of Mealy machines. If a Mealy machine M models a black-box system then α_R(M) describes the behavior that can be observed by a learner/tester that interacts with this system via an adaptor derived from code R. Whenever α_R(M) implements (or conforms to) N, we may conclude that M implements (or conforms to) γ_R(N).
Wojciech Rozowski, Tobias Kappé, Dexter Kozen, Todd Schmid and Alexandra Silva
Abstract: We introduce Probabilistic Guarded Kleene Algebra with Tests (ProbGKAT), an extension of GKAT that allows reasoning about uninterpreted imperative programs with probabilistic branching. We give its operational semantics in terms of a special class of probabilistic automata. We give a sound and complete Salomaa-style axiomatisation of bisimilarity of ProbGKAT expressions. Finally, we show that bisimilarity of ProbGKAT expressions can be decided in O(n^3 log n) time via a generic partition refinement algorithm.
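For readers unfamiliar with partition refinement, a minimal sketch of the generic idea (on a plain deterministic transition system of our own, not on the probabilistic automata of the paper) looks like this:

```python
def bisim_classes(states, labels, step, accepting):
    """Naive partition refinement, the engine behind many bisimilarity
    checkers: start from the observation-based partition and split
    blocks until all states in a block agree, per label, on the block
    of their successor. `step[(s, a)]` is the successor of state s
    under label a (deterministic transitions). Returns the coarsest
    stable partition as a set of frozensets of bisimilar states."""
    partition = {frozenset(s for s in states if s in accepting),
                 frozenset(s for s in states if s not in accepting)}
    partition.discard(frozenset())
    while True:
        block_of = {s: b for b in partition for s in b}
        refined = set()
        for b in partition:
            groups = {}
            for s in b:
                sig = tuple(block_of[step[(s, a)]] for a in labels)
                groups.setdefault(sig, set()).add(s)
            refined.update(frozenset(g) for g in groups.values())
        if refined == partition:
            return partition
        partition = refined

# States 1 and 2 behave identically, so they land in one class:
step = {(0, 'a'): 1, (1, 'a'): 3, (2, 'a'): 3, (3, 'a'): 3}
classes = bisim_classes({0, 1, 2, 3}, ['a'], step, accepting={3})
print(frozenset({1, 2}) in classes)  # True
```

Efficient generic algorithms replace this naive fixpoint loop with Hopcroft-style "process the smaller half" splitting, which is what yields near-linear bounds such as the O(n^3 log n) mentioned above.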
Michael Blondin and François Ladouceur
Abstract: Population protocols form a well-established model of computation of passively mobile anonymous agents with constant-size memory. It is well known that population protocols compute Presburger-definable predicates, such as absolute majority and counting predicates. In this work, we initiate the study of population protocols operating over arbitrarily large data domains. More precisely, we introduce population protocols with unordered data as a formalism to reason about anonymous crowd computing over unordered sequences of data. We first show that it is possible to determine whether an unordered sequence from an infinite data domain has a datum with absolute majority. We then establish the expressive power of the immediate observation restriction of our model, namely where, in each interaction, an agent observes another agent who is unaware of the interaction.
Fabian Birkmann, Stefan Milius, and Henning Urbat
Abstract: We propose a novel topological perspective on data languages recognizable by orbit-finite nominal monoids. For this purpose we introduce pro-orbit-finite nominal topological spaces, which for bounded support size coincide with nominal Stone spaces and are shown to be dually equivalent to a subcategory of nominal boolean algebras. Recognizable data languages are characterized as topologically clopen sets of pro-orbit-finite words. In addition, we explore the expressive power of pro-orbit-finite equations by establishing a nominal version of Reiterman's pseudovariety theorem.
Titouan Carette, Etienne Moutot, Thomas Perez, and Renaud Vilmart
Abstract: We exhibit a strong connection between the matchgate formalism introduced by Valiant and the ZW-calculus of Coecke and Kissinger. This connection provides a natural compositional framework for matchgate theory as well as a direct combinatorial interpretation of the diagrams of ZW-calculus through the perfect matchings of their underlying graphs.
We identify a precise fragment of the ZW-calculus, the planar W-calculus, that we prove to be complete and universal for matchgates, i.e., linear maps satisfying the matchgate identities. Computing scalars of the planar W-calculus corresponds to counting perfect matchings of planar graphs, and so can be carried out in polynomial time using the FKT algorithm, making the planar W-calculus an efficiently simulable fragment of the ZW-calculus, in a similar way that the Clifford fragment is for the ZX-calculus. This work opens new directions for the investigation of the combinatorial properties of the ZW-calculus as well as the study of perfect matching counting through compositional diagrammatic techniques.
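The quantity that the FKT algorithm computes in polynomial time for planar graphs can be pinned down with a brute-force reference implementation (exponential time, for intuition only; the FKT algorithm itself works via Pfaffian orientations and is not shown here):

```python
def count_perfect_matchings(vertices, edges):
    """Exponential-time reference counter for perfect matchings of a
    graph, i.e. the quantity the FKT algorithm computes in polynomial
    time for planar graphs. `edges` is a list of vertex pairs."""
    def count(remaining):
        if not remaining:
            return 1
        v = min(remaining)  # always match the smallest unmatched vertex
        total = 0
        for (a, b) in edges:
            if a == v and b in remaining or b == v and a in remaining:
                u = b if a == v else a
                if u != v:
                    total += count(remaining - {v, u})
        return total
    return count(frozenset(vertices))

# The 4-cycle has exactly 2 perfect matchings:
print(count_perfect_matchings({0, 1, 2, 3},
                              [(0, 1), (1, 2), (2, 3), (3, 0)]))  # 2
```

Fixing the smallest unmatched vertex at each step ensures every perfect matching is counted exactly once.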
Anders Aamand, Adam Karczmarz, Jakub Łącki, Nikos Parotsidis, Peter Rasmussen and Mikkel Thorup
Abstract: We present a dynamic algorithm for maintaining the connected and 2-edge-connected components in an undirected graph subject to edge deletions. The algorithm is Monte-Carlo randomized and processes any sequence of edge deletions in
For non-sparse graphs with
Our constant amortized cost for edge deletions in decremental connectivity in non-sparse graphs should be contrasted with an
Petra Berenbrink, Lukas Hintze, Hamed Hosseinpour, Dominik Kaaser and Malin Rau
Abstract: In this paper we study dynamic averaging load balancing on general graphs.
We consider infinite time and dynamic processes, where in every step new load items are assigned to randomly chosen nodes.
A matching is chosen, and the load is averaged over the edges of that matching.
We analyze the discrete case where load items are indivisible; moreover, our results also carry over to the continuous case where load items can be split arbitrarily.
For the choice of the matchings we consider three different models: random matchings of linear size, random matchings containing only a single edge, and deterministic sequences of matchings covering the whole graph.
We bound the discrepancy, which is defined as the difference between the maximum and the minimum load.
Our results cover a broad range of graph classes and, to the best of our knowledge, our analysis is the first result for discrete and dynamic averaging load balancing processes.
As our main technical contribution we develop a drift result that allows us to apply techniques based on the effective resistance in an electrical network to the setting of dynamic load balancing.
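As a toy illustration of such a process (our own simplification: continuous loads and single-edge matchings on a cycle, not the analysis of the paper):

```python
import random

def simulate(n, steps, seed=0):
    """Toy dynamic averaging process on the cycle graph C_n
    (continuous case, single-edge matchings): in each step one unit
    of load arrives at a random node, then a random edge {i, i+1}
    averages the load of its two endpoints. Returns the final
    discrepancy, i.e. maximum load minus minimum load."""
    rng = random.Random(seed)
    load = [0.0] * n
    for _ in range(steps):
        load[rng.randrange(n)] += 1.0   # new load item arrives
        i = rng.randrange(n)            # random single-edge matching
        j = (i + 1) % n
        load[i] = load[j] = (load[i] + load[j]) / 2
    return max(load) - min(load)

print(simulate(8, 1000) >= 0)  # True; discrepancy is non-negative
```

The interesting question, answered by the drift analysis above, is how this discrepancy depends on the graph's structure (via effective resistances) rather than growing with the number of steps.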
Adam Karczmarz and Piotr Sankowski
Abstract: We study the exact fully dynamic shortest paths problem. For real-weighted directed graphs, we show a deterministic fully dynamic data structure with O~(mn^{4/5}) worst-case update time processing arbitrary s,t-distance queries in O~(n^{4/5}) time. This constitutes the first non-trivial update/query tradeoff for this problem in the regime of sparse weighted directed graphs.
Moreover, we give a Monte Carlo randomized fully dynamic reachability data structure processing single-edge updates in O~(n\sqrt{m}) worst-case time and queries in O(\sqrt{m}) time. For sparse digraphs, such a tradeoff has only been previously described with amortized update time~[Roditty and Zwick, SIAM J. Comp. 2008].
Sixue Liu, Zhao Song, Hengjie Zhang, Lichen Zhang and Tianyi Zhou
Abstract: We study the problem of solving linear programs in the streaming model. Given a constraint matrix
In addition to our space-efficient IPM, we also give algorithms for solving SDD systems and for the isolation lemma in
David E. Roberson and Tim Seppelt
Abstract: We show that feasibility of the t-th level of the Lasserre semidefinite programming hierarchy for graph isomorphism can be expressed as a homomorphism indistinguishability relation. In other words, we define a class L_t of graphs such that graphs G and H are not distinguished by the t-th level of the Lasserre hierarchy if and only if they admit the same number of homomorphisms from any graph in L_t. By analysing the treewidth of graphs in L_t we prove that the 3t-th level of the Sherali-Adams linear programming hierarchy is as strong as the t-th level of Lasserre. Moreover, we show that this is best possible in the sense that 3t cannot be lowered to 3t-1 for any t. The same result holds for the Lasserre hierarchy with non-negativity constraints, which we similarly characterise in terms of homomorphism indistinguishability over a family L_t^+ of graphs. Additionally, we give characterisations of level-t Lasserre with non-negativity constraints in terms of logical equivalence and via a graph colouring algorithm akin to the Weisfeiler-Leman algorithm. This provides a polynomial time algorithm for determining if two given graphs are distinguished by the t-th level of the Lasserre hierarchy with non-negativity constraints.
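For reference, the classic 1-dimensional Weisfeiler-Leman colour refinement that such colouring algorithms build on can be sketched as follows (the paper's level-t algorithm is more involved; this is only the baseline idea):

```python
def wl1(adj, rounds=None):
    """Classic 1-dimensional Weisfeiler-Leman colour refinement.
    `adj` maps each vertex to its list of neighbours. Iteratively
    recolour each vertex by its colour plus the multiset of its
    neighbours' colours, until stable. Returns the sorted multiset of
    stable colours; two graphs with different results are certainly
    non-isomorphic (the converse fails in general)."""
    color = {v: 0 for v in adj}
    for _ in range(rounds or len(adj)):
        sig = {v: (color[v], tuple(sorted(color[u] for u in adj[v])))
               for v in adj}
        palette = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new = {v: palette[sig[v]] for v in adj}
        if new == color:
            break
        color = new
    return sorted(color.values())

# A triangle and a path on 3 vertices are distinguished by 1-WL:
tri = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
path = {0: [1], 1: [0, 2], 2: [1]}
print(wl1(tri) != wl1(path))  # True
```

Higher levels of such hierarchies refine tuples of vertices rather than single vertices, which is what ties them to the semidefinite programming relaxations studied above.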
Jun-Ting Hsieh, Pravesh Kothari, Aaron Potechin and Jeff Xu
Abstract: Saunderson, Parrilo and Willsky asked the following elegant geometric question: what is the largest m = m(d) such that there is an ellipsoid in R^d that passes through v_1, v_2, ..., v_m with high probability when the v_i's are chosen independently from the standard Gaussian distribution N(0, I_d)? The existence of such an ellipsoid is equivalent to the existence of a positive semidefinite matrix X such that v_i^T X v_i = 1 for every 1 ⩽ i ⩽ m, a natural example of a random semidefinite program. SPW conjectured that m = (1 − o(1))d^2/4 with high probability. Very recently, Potechin, Turner, Venkat and Wein [10] and Kane and Diakonikolas [8] proved that m ≳ d^2 / log^{O(1)}(d) via a certain natural, explicit construction.
In this work, we give a substantially tighter analysis of their construction to prove that m ≳ d^2/C for an absolute constant C > 0. This resolves one direction of the SPW conjecture up to a constant. Our analysis proceeds via the method of Graphical Matrix Decomposition that has recently been used to analyze correlated random matrices arising in various areas. Our key new technical tool is a refined method to prove singular value upper bounds on certain correlated random matrices that are tight up to absolute dimension-independent constants. In contrast, all previous methods that analyze such matrices lose logarithmic factors in the dimension.
Hadley Black, Iden Kalemaj and Sofya Raskhodnikova
Abstract: We generalize the celebrated isoperimetric inequality of Khot, Minzer, and Safra~(SICOMP 2018) for Boolean functions to the case of real-valued functions
We apply our generalized isoperimetric inequality to improve algorithms for testing monotonicity and approximating the distance to monotonicity for real-valued functions. Our tester for monotonicity has query complexity
Pan Peng and Yuyang Wang
Abstract: We revisit the relation between two fundamental property testing models for bounded-degree \emph{directed} graphs: the \emph{bidirectional} model in which the algorithms are allowed to query both the outgoing edges and incoming edges of a vertex, and the \emph{unidirectional} model in which only queries to the outgoing edges are allowed. Czumaj, Peng and Sohler [STOC 2016] showed that for directed graphs with both maximum indegree and maximum outdegree upper bounded by
We prove that testing \emph{subgraph-freeness} in which the subgraph contains
Dana Ron and Omer Cohen Sidon
Abstract: In this work, we study the problem of approximating the distance to subsequence-freeness in the sample-based distribution-free model. For a given subsequence (word)
Otherwise,
Denoting by
Pascal Baumann, Moses Ganardi, Rupak Majumdar, Ramanathan Thinniyam Srinivasan, and Georg Zetzsche
Abstract: In the language-theoretic approach to refinement verification, we check that every trace of an implementation belongs to the language of a specification. We consider the refinement verification problem for asynchronous programs against specifications given by a Dyck language. We show that this problem is EXPSPACE-complete---the same complexity as that of language emptiness and of refinement verification against a regular specification. Our algorithm uses several novel technical ingredients. First, we show that checking whether the coverability language of a succinctly described vector addition system with states (VASS) is contained in a Dyck language is EXPSPACE-complete. Second, in the more technical part of the proof, we define an ordering on words and show a downward closure construction that allows replacing the (context-free) language of each task in an asynchronous program by a regular language. Unlike the downward closure operations usually considered in infinite-state verification, our ordering is not a well-quasi-ordering, and we have to construct the regular language ab initio. Once the tasks can be replaced, we show a reduction to an appropriate VASS and use our first ingredient. In addition to its inherent theoretical interest, refinement verification with Dyck specifications captures common practical resource-usage patterns based on reference counting, for which few algorithmic techniques were known.
Thomas Henzinger, Pavol Kebis, Nicolas Mazzocchi, and N. Ege Saraç
Abstract: The operator precedence languages (OPLs) represent the largest known subclass of the context-free languages which enjoys all desirable closure and decidability properties. This includes the decidability of language inclusion, which is the ultimate verification problem. Operator precedence grammars, automata, and logics have been investigated and used, for example, to verify programs with arithmetic expressions and exceptions (both of which are deterministic pushdown but lie outside the scope of the visibly pushdown languages). In this paper, we complete the picture and give, for the first time, an algebraic characterization of the class of OPLs in the form of a syntactic congruence that has finitely many equivalence classes exactly for the operator precedence languages. This is a generalization of the celebrated Myhill-Nerode theorem for the regular languages to OPLs. As one of the consequences, we show that universality and language inclusion for nondeterministic operator precedence automata can be solved by an antichain algorithm. Antichain algorithms avoid determinization and complementation through an explicit subset construction, by leveraging a quasi-order on words, which allows the pruning of the search space for counterexample words without sacrificing completeness. Antichain algorithms can be implemented symbolically, and these implementations are today the best-performing algorithms in practice for the inclusion of finite automata. We give a generic construction of the quasi-order needed for antichain algorithms from a finite syntactic congruence. This yields the first antichain algorithm for OPLs, an algorithm that solves the ExpTime-hard language inclusion problem for OPLs in exponential time.
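The antichain idea described in this abstract can be illustrated on plain finite automata. The sketch below is the classical antichain universality check for NFAs (not operator precedence automata): it explores macrostates of the subset construction but stores only subset-minimal ones, since any word rejected from a macrostate T is also rejected from every subset of T, so supersets can be pruned without sacrificing completeness.

```python
from itertools import chain

def nfa_universal(alphabet, delta, init, finals):
    """Antichain-based universality check for an NFA over finite words.

    delta: dict mapping (state, letter) -> iterable of successor states.
    Keeps only subset-minimal reachable macrostates: if U <= T, every word
    rejected from T is also rejected from U, so T is dominated by U.
    """
    finals = frozenset(finals)

    def post(S, a):
        return frozenset(chain.from_iterable(delta.get((q, a), ()) for q in S))

    start = frozenset(init)
    antichain = [start]
    frontier = [start]
    while frontier:
        S = frontier.pop()
        if not (S & finals):
            return False  # a rejecting macrostate is reachable
        for a in alphabet:
            T = post(S, a)
            if any(U <= T for U in antichain):
                continue  # T dominated by a smaller stored macrostate
            antichain = [U for U in antichain if not (T <= U)] + [T]
            frontier.append(T)
    return True
```

The paper's construction generalizes this quasi-order-based pruning from finite automata to operator precedence automata via the finite syntactic congruence.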
Javier Esparza and Vincent Grande
Abstract: We study black-box testing for stochastic systems and arbitrary omega-regular specifications, explicitly including liveness properties. We are given a finite-state probabilistic system that we can only execute from the initial state. We have no information on the number of reachable states, or on the probabilities; further, we can only partially observe the states. The only action we can take is to restart the system. We design restart strategies guaranteeing that, if the specification is violated with nonzero probability, then w.p.1 the number of restarts is finite, and the infinite run executed after the last restart violates the specification. This improves on previous work that required full observability. We obtain asymptotically optimal upper bounds on the expected number of steps until the last restart. We conduct experiments on a number of benchmarks, and show that our strategies allow one to find violations in Markov chains much larger than the ones considered in previous work.
Abstract: ICALP had a decisive influence on the development of Theoretical Computer Science in Europe and also on my career. I had my first ICALP paper at ICALP 74 in Saarbruecken and I presented it a few days after joining Saarland University.
Abstract: Over the past 35 years, one branch of formal methods has pushed the envelope from discrete to continuous state spaces, to handle the verification and control of cyber-physical systems.
At the heart of these methods lie correctness certificates that are functions on continuous state spaces, such as abstraction, invariant, barrier, and progress functions.
Entirely independently, a particular data structure for representing continuous nonlinear functions has gained recent prominence: the neural network.
By using neural networks to jointly learn both controllers and correctness certificates for continuous and hybrid state spaces, the complexity of the residual verification tasks can be reduced.
We survey the resulting learner-verifier feedback architecture and explain why we believe that it offers great potential for a systematic symbiosis of deep learning and formal methods, to provide formal guarantees for an important class of AI systems such as robots and autopilots.
Benjamin Aram Berendsohn, Ishay Golinsky, Haim Kaplan and Laszlo Kozma
Abstract: Search trees on trees (STTs) generalize the fundamental binary search tree (BST) data structure: in STTs the underlying search space is an arbitrary tree, whereas in BSTs it is a path. An optimal BST of size
By contrast, optimal STTs are not known to be computable in polynomial time, and the fastest constant-approximation algorithm runs in
In this paper we revisit centroid trees in a general, weighted setting, and we settle both the algorithmic complexity of constructing them, and the quality of their approximation. For constructing a weighted centroid tree, we give an output-sensitive
Tatsuya Terao
Abstract: In the matroid partitioning problem, we are given
Our main result is to present a matroid partition algorithm that uses
Monika Henzinger, Paul Liu, Jan Vondrák and Da Wei Zheng
Abstract: The maximization of submodular functions has found widespread application in areas such as machine learning, combinatorial optimization, and economics, where practitioners often wish to enforce various constraints; the matroid constraint has been investigated extensively due to its algorithmic properties and expressive power. Though tight approximation algorithms for general matroid constraints exist in theory, the running times of such algorithms typically scale quadratically, and are not practical for truly large-scale settings. Recent progress has focused on fast algorithms for important classes of matroids given in explicit form. Currently, nearly-linear time algorithms only exist for graphic and partition matroids [EN19]. In this work, we develop algorithms for submodular maximization constrained by graphic matroids, transversal matroids on dense graphs, or laminar matroids in time near-linear in the size of their representation. Our algorithms achieve an optimal approximation of
To achieve near-linear running time, we make use of dynamic data structures that maintain bases with approximate maximum cardinality and weight under certain element updates. These data structures need to support a weight decrease operation and a novel FREEZE operation that allows the algorithm to freeze elements (i.e. force to be contained) in its basis regardless of future data structure operations. For the laminar matroid, we present a new dynamic data structure using the top tree interface of Alstrup et al. [AHLT05] that maintains the maximum weight basis under insertions and deletions of elements in
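For context, the classical greedy algorithm achieves a 1/2-approximation for monotone submodular maximization under any matroid constraint, but its running time scales quadratically; this is the kind of baseline the near-linear algorithms above accelerate. A minimal sketch for a coverage objective under a partition matroid follows (the instance and names are illustrative, not from the paper):

```python
def greedy_submodular_partition(universe_sets, parts, capacity):
    """Greedy for max coverage (a monotone submodular objective) subject to a
    partition matroid: pick at most `capacity` elements from each part.

    universe_sets: dict element -> set of items it covers.
    parts: dict element -> part id.
    """
    covered = set()
    chosen = []
    used = {}  # part id -> number of chosen elements in that part
    candidates = set(universe_sets)
    while True:
        best, best_gain = None, 0
        for e in candidates:
            if used.get(parts[e], 0) >= capacity:
                continue  # part is full: element not independent
            gain = len(universe_sets[e] - covered)  # marginal coverage gain
            if gain > best_gain:
                best, best_gain = e, gain
        if best is None:
            return chosen, covered
        chosen.append(best)
        covered |= universe_sets[best]
        used[parts[best]] = used.get(parts[best], 0) + 1
        candidates.remove(best)
```

Each iteration scans all remaining candidates, which is what the dynamic data structures described above (approximate maximum bases with weight-decrease and FREEZE operations) are designed to avoid.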
Karthik Murali and Therese Biedl
Abstract: A graph is called 1-plane if it has an embedding in the plane where each edge is crossed at most once by another edge. A crossing of a 1-plane graph is called an X-crossing if the endpoints of the crossing pair of edges induce a matching. In this paper, we show how to compute the vertex connectivity of a 1-plane graph G without X-crossings in linear time.
To do so, we show that for any two vertices u,v in a minimum separating set S, the distance between u and v in an auxiliary graph \Lambda(G) (obtained by planarizing G and then inserting into each face a new vertex adjacent to all vertices of the face) is small. It hence suffices to search for a minimum separating set in various subgraphs \Lambda_i of \Lambda(G) with small diameter. Since \Lambda(G) is planar, the subgraphs \Lambda_i have small treewidth. Each minimum separating set S then gives rise to a partition of \Lambda_i into three vertex sets with special properties; such a partition can be found via Courcelle's theorem in linear time.
Shyan Akmal and Ce Jin
Abstract: Our work concerns algorithms for a variant of \textsf{Maximum Flow} in unweighted graphs. In the All-Pairs Connectivity (APC) problem, we are given a graph G on n vertices and m edges, and are tasked with computing the size of a minimum st-cut in G for all pairs of vertices (s,t). Significant algorithmic breakthroughs have recently shown that over undirected graphs, APC can be solved in n^{2+o(1)} time, which is essentially optimal. In contrast, the true time complexity of APC over directed graphs remains open: this problem can be solved in O(m^ω) time, where ω ∈ (2, 2.373) is the exponent of matrix multiplication, but no matching conditional lower bound is known.
Following [Abboud et al., ICALP 2019], we study a bounded version of APC called the k-Bounded All Pairs Connectivity (k-APC) problem. In this variant of APC, we are given an integer k in addition to the graph G, and are now tasked with reporting the size of a minimum st-cut only for pairs (s,t) of vertices with min-cut value less than k (if the min st-cut has size at least k, we can just report that it is "large" instead of computing the exact value).
Our main result is an O((kn)^ω) time algorithm for k-APC in directed graphs.
This is the first algorithm which solves k-APC faster than the more general APC for all k ≥ 3. This runtime is essentially O(n^ω) for all k ≤ poly(log n), which essentially matches the optimal runtime for the k=1 case of k-APC, under popular conjectures from fine-grained complexity. Previously, this runtime was only known for k ≤ 2 in general [Georgiadis et al., ICALP 2017], and for k ≤ o((log n)^{1/2}) in the special case of directed acyclic graphs [Abboud et al., ICALP 2019]. Our result employs the same algebraic framework used in previous work, introduced by [Cheung, Lau, and Leung, FOCS 2011]. A direct implementation of this framework involves inverting a large random matrix. Our faster algorithm for k-APC is based on the insight that it suffices to invert a low-rank random matrix instead of a generic random matrix, which yields a speed-up.
We also obtain a new algorithm for a variant of k-APC, the k-Bounded All-Pairs Vertex Connectivity (k-APVC) problem, where we now report the sizes of minimum vertex-cuts instead of edge-cuts. We show how to solve k-APVC in O(k^2n^ω) time. Previous work gave an O((kn)^ω) algorithm for this problem [Abboud et al., ICALP 2019], so our algorithm is faster if ω > 2.
Ilan Cohen and Debmalya Panigrahi
Abstract: Online allocation is a broad class of problems where items arriving online have to be allocated to agents who have a fixed utility/cost for each assigned item, so as to maximize/minimize some objective. This framework captures a broad range of fundamental problems such as the Santa Claus problem (maximizing minimum utility), Nash welfare maximization (maximizing geometric mean of utilities), makespan minimization (minimizing maximum cost), minimization of
In this paper, we study online allocations in the learning-augmented setting, i.e., where the algorithm has access to some additional (machine-learned) information about the problem instance. We introduce a universal algorithmic framework for learning-augmented online allocation that produces nearly optimal solutions for this broad range of maximization and minimization objectives using only a single learned parameter for every agent. As corollaries of our general framework, we improve prior results of Lattanzi et al. (SODA 2020) and Li and Xian (ICML 2021) for learning-augmented makespan minimization, and obtain the first learning-augmented nearly-optimal algorithms for the other objectives such as Santa Claus, Nash welfare,
Konstantina Mellou, Marco Molinaro and Rudy Zhou
Abstract: Motivated by cloud computing applications, we study the problem of how to optimally deploy new hardware subject to both power and robustness constraints. To model the situation observed in large-scale data centers, we introduce the \emph{Online Demand Scheduling with Failover} problem. There are
In the worst-case model, we design a deterministic
Amirreza Akbari, Navid Eslami, Henrik Lievonen, Darya Melnyk, Joona Särkijärvi and Jukka Suomela
Abstract: In this work, we give a unifying view of locality in four settings: distributed algorithms, sequential greedy algorithms, dynamic algorithms, and online algorithms.
We introduce a new model of computing, called the online-LOCAL model: the adversary reveals the nodes of the input graph one by one, in the same way as in classical online algorithms, but for each new node we get to see its radius-T neighborhood before choosing the output. Instead of looking ahead in time, we have the power of looking around in space.
We compare the online-LOCAL model with three other models: the LOCAL model of distributed computing, where each node produces its output based on its radius-T neighborhood, its sequential counterpart SLOCAL, and the dynamic-LOCAL model, where changes in the dynamic input graph only influence the radius-T neighborhood of the point of change.
The SLOCAL and dynamic-LOCAL models are sandwiched between the LOCAL and online-LOCAL models, with LOCAL being the weakest and online-LOCAL the strongest model. In general, all four models are distinct, but we study in particular locally checkable labeling problems (LCLs), a family of graph problems extensively studied in the context of distributed graph algorithms.
We prove that for LCL problems in paths, cycles, and rooted trees, all four models are roughly equivalent: the locality of any LCL problem falls in the same broad class - O(log* n), Theta(log n), or n^{Theta(1)} - in all four models. In particular, this result enables one to generalize prior lower-bound results from the LOCAL model to all four models, and it also allows one to simulate e.g. dynamic-LOCAL algorithms efficiently in the LOCAL model.
We also show that this equivalence does not hold in two-dimensional grids or general bipartite graphs. We provide an online-LOCAL algorithm with locality O(log n) for the 3-coloring problem in bipartite graphs - this is a problem with locality Omega(n^{1/2}) in the LOCAL model and Omega(n^{1/10}) in the SLOCAL model.
Yann Disser, Max Klimm, Kevin Schewior and David Weckbecker
Abstract: We consider the problem of finding an incremental solution to a cardinality-constrained maximization problem that not only captures the solution for a fixed cardinality, but describes how to gradually grow the solution as the cardinality bound increases.
The goal is to find an incremental solution that guarantees a good competitive ratio against the optimum solution for all cardinalities simultaneously.
The central challenge is to characterize maximization problems where this is possible, and to determine the best-possible competitive ratio that can be attained.
A lower bound of
We introduce a continuization technique and identify an optimal incremental algorithm that provides strong evidence that
Using this continuization, we obtain an improved lower bound of
Based on the optimal continuous algorithm combined with a scaling approach, we also provide a
We complement this by a randomized lower bound of
Sharat Ibrahimpur, Manish Purohit, Zoya Svitkina, Erik Vee and Joshua Wang
Abstract: Online caching is among the most fundamental and well-studied problems in the area of online algorithms. Innovative algorithmic ideas and analysis --- including potential functions and primal-dual techniques --- give insight into this still-growing area.
Here, we introduce a novel potential function to upper bound the cost of an online algorithm paired with a new dual-fitting technique to lower bound the cost of an offline optimal algorithm. We apply these techniques to the Caching with Reserves problem recently introduced by Ibrahimpur et al.~\cite{ibrahimpur2022caching} and give an
Nicolas Resch, Chen Yuan and Yihan Zhang
Abstract: In this work we consider the list-decodability and list-recoverability of arbitrary
Our main contribution is to precisely calculate the maximum value of
Technically, proving the Plotkin bound boils down to demonstrating the Schur convexity of a certain function defined on the
Kuan Cheng, Zhengzhong Jin, Xin Li, Zhide Wei and Yu Zheng
Abstract: This work continues the study of linear error correcting codes against adversarial insertion deletion errors (insdel errors). Previously, the work of Cheng, Guruswami, Haeupler, and Li \cite{CGHL21} showed the existence of asymptotically good linear insdel codes that can correct arbitrarily close to
In this work we give new constructions of such codes that meet these bounds, namely, asymptotically good linear insdel codes that can correct arbitrarily close to
Shu Liu, Chaoping Xing and Chen Yuan
Abstract: Despite numerous results on the list decoding of Hamming-metric codes, the development of list decoding for rank-metric codes has not been as rapid as for its counterpart. The limit on list decoding obeys the Gilbert-Varshamov bound in both metrics. In the Hamming metric, the Gilbert-Varshamov bound is a trade-off among rate, decoding radius and alphabet size, while in the rank metric it is a trade-off among rate, decoding radius and column-to-row ratio (i.e., the ratio between the numbers of columns and rows). Hence, alphabet size and column-to-row ratio play analogous roles for list decodability in the two metrics. In the Hamming metric, it is more challenging to list decode codes over smaller alphabets. In contrast, in the rank metric, it is more difficult to list decode codes with large column-to-row ratio. In particular, it is extremely difficult to list decode square-matrix rank-metric codes (i.e., the column-to-row ratio is equal to
The main purpose of this paper is to explicitly construct a class of rank-metric codes
and efficiently list decode these codes with decoding radius beyond the decoding radius
Our key idea is to compress an element of the field
Apart from the above algebraic technique, we follow some standard techniques to prune down the list. The algebraic idea enables us to pin down the messages
into a structured subspace of dimension linear in the number
Badih Ghazi, Pritish Kamath, Ravi Kumar, Pasin Manurangsi and Kewen Wu
Abstract: We study the problem of performing counting queries at different levels in hierarchical structures while preserving individuals' privacy. Motivated by applications, we propose a new error measure for this problem by considering a combination of multiplicative and additive approximation to the query results.
We examine known mechanisms in differential privacy (DP) and prove their optimality, under this measure, in the pure-DP setting. In the approximate-DP setting, we design new algorithms achieving significant improvements over known ones.
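A standard pure-DP baseline for hierarchical counting is the tree-based Laplace mechanism: each individual contributes to one node per level of the hierarchy, so adding Laplace noise with scale depth/ε to every node satisfies ε-DP. The sketch below illustrates only this generic baseline, under plain additive error rather than the paper's combined multiplicative-additive measure:

```python
import math
import random

def laplace(scale, rng):
    # inverse-CDF sampling of a Laplace(0, scale) variate
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def noisy_hierarchy(leaf_counts, epsilon, rng=None):
    """Release all levels of a binary hierarchy over leaf_counts with epsilon-DP.

    Each individual appears in one node per level, so the L1 sensitivity of
    the full output is the depth; Laplace(depth/epsilon) noise per node
    suffices for epsilon-DP.
    """
    rng = rng or random.Random()
    levels = [list(leaf_counts)]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([sum(prev[i:i + 2]) for i in range(0, len(prev), 2)])
    depth = len(levels)
    scale = depth / epsilon
    return [[c + laplace(scale, rng) for c in level] for level in levels]
```

Mechanisms like this are the "known mechanisms" whose optimality in the pure-DP setting the paper examines under its new error measure.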
Talya Eden, Sofya Raskhodnikova, Quanquan C. Liu and Adam Smith
Abstract: Many deployments of differential privacy in industry are in the local model, where each party releases its private information via a differentially private randomizer. We study triangle counting in the noninteractive and interactive local model with edge differential privacy (that, intuitively, requires that the outputs of the algorithm on graphs that differ in one edge be indistinguishable). In this model, each party's local view consists of the adjacency list of one vertex.
In the noninteractive model, we prove that additive of nodes. This lower bound is our main technical contribution. It uses a reconstruction attack with a new class of linear queries and a novel mix-and-match strategy of running the local randomizers with different completions of their adjacency lists. It matches the additive error of the algorithm based on Randomized Response, proposed by Imola, Murakami and Chaudhuri (USENIX 2021) and analyzed by Imola, Murakami and Chaudhuri (CCS 2022) for constant ε. We use a different postprocessing of Randomized Response and provide tight bounds on the variance of the resulting algorithm.
In the interactive setting, we prove a lower bound of hardness results were known for interactive, edge-private algorithms in the local model, except for those that follow trivially from the results for the central model. Our work significantly improves on the state of the art in differentially private graph analysis in the local model.
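The Randomized Response baseline mentioned above can be sketched as follows, in a simplified central form where each potential edge bit is flipped once with probability q = 1/(e^ε + 1) (in the actual local model each vertex reports its full adjacency list, so each edge is reported twice). The per-bit debiasing and triple products below form a generic unbiased estimator, not the paper's postprocessing:

```python
import math
import random
from itertools import combinations

def randomized_response_triangles(n, edges, epsilon, rng=None):
    """Edge-DP triangle-count estimate via Randomized Response.

    Each potential edge bit is flipped with probability q = 1/(e^eps + 1);
    (y - q) / (1 - 2q) is an unbiased estimate of the true bit, and summing
    products over vertex triples gives an unbiased triangle-count estimate.
    """
    rng = rng or random.Random()
    q = 1.0 / (math.exp(epsilon) + 1.0)
    true_bits = {frozenset(e): 1 for e in edges}
    noisy = {}
    for pair in combinations(range(n), 2):
        b = true_bits.get(frozenset(pair), 0)
        noisy[frozenset(pair)] = b if rng.random() > q else 1 - b
    def debias(pair):
        return (noisy[frozenset(pair)] - q) / (1.0 - 2.0 * q)
    return sum(debias((u, v)) * debias((v, w)) * debias((u, w))
               for u, v, w in combinations(range(n), 3))
```

The variance of such estimators is what the paper's different postprocessing of Randomized Response bounds tightly.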
Harry Vinall-Smeeth and Christoph Berkholz
Abstract: The task of computing homomorphisms between two finite relational structures A and B is a well-studied question with numerous applications. Since the set Hom(A,B) of all homomorphisms may be very large, having a method of representing it in a succinct way, especially one which enables us to perform efficient enumeration and counting, could be extremely useful. One simple yet powerful way of doing so is to decompose Hom(A,B) using union and Cartesian product. Such data structures, called d-representations, were introduced by Olteanu and Zavodny in the context of database theory. Their results also imply that if the treewidth of the left-hand side structure A is bounded, then a d-representation of polynomial size can be found in polynomial time. We show that for structures of bounded arity this is optimal: if the treewidth is unbounded then there are instances where the size of any d-representation is superpolynomial. Along the way we develop tools for proving lower bounds on the size of d-representations; in particular, we define a notion of reduction suitable for this context and prove an almost tight lower bound on the size of d-representations of all k-cliques in a graph.
Moritz Lichter
Abstract: At the core of the quest for a logic for PTime is a mismatch between algorithms making arbitrary choices and isomorphism-invariant logics. One approach to tackle this problem is witnessed symmetric choice. It allows for choices from definable orbits certified by definable witnessing automorphisms.
We consider the extension of fixed-point logic with counting (IFPC) with witnessed symmetric choice (IFPC+WSC) and a further extension with an interpretation operator (IFPC+WSC+I). The latter operator evaluates a subformula in the structure defined by an interpretation.
When similarly extending pure fixed-point logic (IFP), IFP+WSC+I simulates counting which IFP+WSC fails to do. For IFPC+WSC, it is unknown whether the interpretation operator increases expressiveness and thus allows studying the relation between WSC and interpretations beyond counting.
In this paper, we separate IFPC+WSC from IFPC+WSC+I by showing that IFPC+WSC is not closed under FO-interpretations. By the same argument, we answer an open question of Dawar and Richerby regarding non-witnessed symmetric choice in IFP.
Additionally, we prove that nesting WSC-operators increases the expressiveness using the so-called CFI graphs. We show that if IFPC+WSC+I canonizes a particular class of base graphs, then it also canonizes the corresponding CFI graphs.
This differs from various other logics, where CFI graphs provide difficult instances.
Diptarka Chakraborty, Sourav Chakraborty, Gunjan Kumar and Kuldeep Meel
Abstract: Given a Boolean formula
The primary contribution of this work is to study the relative power of an NP oracle and a SAT oracle in the context of approximate model counting. The previous techniques proposed in the context of an NP oracle are too weak to provide strong bounds in the context of a SAT oracle since, in contrast to an NP oracle that provides only 1 bit of information, a SAT oracle can provide
Manuel Bodirsky and Simon Knäuer
Abstract: We show that the problem of deciding for a given finite relation algebra A whether the network satisfaction problem for A can be solved by the k-consistency procedure, for some k, is undecidable. For the important class of finite relation algebras A with a normal representation, however, the decidability of this problem remains open. We show that if A is symmetric and has a flexible atom, then the question whether NSP(A) can be solved by k-consistency, for some k, is decidable (even in polynomial time in the number of atoms of A). This result follows from a more general sufficient condition for the correctness of the k-consistency procedure for finite symmetric relation algebras. In our proof we make use of a result of Alexandr Kazda about finite binary conservative structures.
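The k-consistency procedure studied in this abstract generalizes classical path consistency (the case k = 3). As a small illustration, the sketch below runs path consistency on networks over the point algebra, a simple relation algebra with three atoms <, =, >; this example is generic and not taken from the paper:

```python
# basic relations of the point algebra and their composition table
BASIC = {'<', '=', '>'}
FULL = frozenset(BASIC)
COMP = {
    ('<', '<'): {'<'}, ('<', '='): {'<'}, ('<', '>'): BASIC,
    ('=', '<'): {'<'}, ('=', '='): {'='}, ('=', '>'): {'>'},
    ('>', '<'): BASIC, ('>', '='): {'>'}, ('>', '>'): {'>'},
}
CONV = {'<': '>', '=': '=', '>': '<'}

def compose(R, S):
    out = set()
    for r in R:
        for s in S:
            out |= COMP[(r, s)]
    return frozenset(out)

def converse(R):
    return frozenset(CONV[r] for r in R)

def path_consistency(n, constraints):
    """Run the path-consistency (3-consistency) procedure on a point-algebra
    network over variables 0..n-1. constraints: dict (i, j) -> set of atoms.
    Returns the refined network, or None if an empty relation is derived."""
    C = [[FULL for _ in range(n)] for _ in range(n)]
    for i in range(n):
        C[i][i] = frozenset({'='})
    for (i, j), R in constraints.items():
        C[i][j] = C[i][j] & frozenset(R)
        C[j][i] = C[j][i] & converse(R)
    changed = True
    while changed:
        changed = False
        for i in range(n):
            for k in range(n):
                for j in range(n):
                    R = C[i][j] & compose(C[i][k], C[k][j])
                    if not R:
                        return None  # inconsistent network detected
                    if R != C[i][j]:
                        C[i][j] = R
                        C[j][i] = converse(R)
                        changed = True
    return C
```

The paper's question is for which finite relation algebras A this kind of local propagation, at some level k, correctly decides the network satisfaction problem.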
Austen Z. Fan, Paraschos Koutris and Hangdong Zhao
Abstract: We study the fine-grained complexity of evaluating Boolean Conjunctive Queries and their generalization to sum-of-product problems over an arbitrary semiring. For these problems, we present a general \emph{semiring-oblivious} reduction from the
Kurt Mehlhorn
Abstract: A set of indivisible goods (a car, a toothbrush, ...) has to be allocated to a set of agents. Each agent has its own preferences over sets of goods. What constitutes a fair allocation? When can we find a fair allocation efficiently? Can we approximate fair allocations?