50th EATCS International Colloquium on Automata, Languages and Programming, ICALP 2023
Fedor Fomin, Petr Golovach, Danil Sagunov and Kirill Simonov
Abstract: Parameterization above (or below) a guarantee is a successful concept in parameterized algorithms. The idea is that many computational problems admit “natural” guarantees, raising the algorithmic question of whether a better solution (above the guarantee) can be obtained efficiently. For example, for every boolean CNF formula on m clauses, there is an assignment that satisfies at least m/2 clauses. How difficult is it to decide whether there is an assignment satisfying more than m/2 + k clauses? Or, if an n-vertex graph has a perfect matching, then its vertex cover is at least n/2. Is there a vertex cover of size at most n/2 + k for some k ≥ 1, and how difficult is it to find such a vertex cover? The above-guarantee paradigm has led to several exciting discoveries in the areas of parameterized algorithms and kernelization. We argue that this paradigm could bring forth fresh perspectives on well-studied problems in approximation algorithms. Our example is the longest cycle problem. One of the oldest results in extremal combinatorics is the celebrated Dirac’s theorem from 1952. Dirac’s theorem provides the following guarantee on the length of the longest cycle: for every 2-connected n-vertex graph G with minimum degree δ(G) ≤ n/2, the length of a longest cycle L is at least 2δ(G). Thus the “essential” part of finding the longest cycle is in approximating the “offset” k = L − 2δ(G). The main result of this paper is the above-guarantee approximation theorem for k. Informally, the theorem says that approximating the offset k is not harder than approximating the total length L of a cycle. In other words, for any (reasonably well-behaved) function f, a polynomial time algorithm constructing a cycle of length f(L) in an undirected graph with a cycle of length L yields a polynomial time algorithm constructing a cycle of length 2δ(G) + Ω(f(k)).
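As an aside on the m/2 guarantee mentioned above: that bound is constructive via the standard method of conditional expectations. A minimal Python sketch (my own illustration, not from the paper; clauses are lists of signed variable indices, assumed non-empty):

```python
def half_sat_assignment(n_vars, clauses):
    """Greedy derandomization: returns an assignment satisfying at least
    half of the clauses. Each clause is a list of non-zero ints:
    +i means variable i, -i means its negation."""
    assignment = {}

    def expected_sat(partial):
        # Expected number of satisfied clauses when the still-unset
        # variables are assigned uniformly at random.
        total = 0.0
        for clause in clauses:
            p_unsat, satisfied = 1.0, False
            for lit in clause:
                v = abs(lit)
                if v in partial:
                    if (lit > 0) == partial[v]:
                        satisfied = True
                        break
                else:
                    p_unsat *= 0.5
            total += 1.0 if satisfied else 1.0 - p_unsat
        return total

    for v in range(1, n_vars + 1):
        # Keep whichever value of v does not decrease the conditional
        # expectation; one of the two choices always works.
        assignment[v] = True
        e_true = expected_sat(assignment)
        assignment[v] = False
        if e_true >= expected_sat(assignment):
            assignment[v] = True
    return assignment

# e.g. half_sat_assignment(2, [[1, 2], [-1], [-2], [1, -2]])
#      -> {1: True, 2: False}, satisfying 3 of the 4 clauses
```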
Vaishali Surianarayanan, Daniel Lokshtanov and Saket Saurabh
Abstract: In the {\sc Min $k$-Cut} problem, the input is a graph $G$ and an integer $k$. The task is to find a partition of the vertex set of $G$ into $k$ parts, while minimizing the number of edges that go between different parts of the partition. The problem is NP-complete, and admits a simple $3^n \cdot n^{O(1)}$ time dynamic programming algorithm, which can be improved to a $2^n \cdot n^{O(1)}$ time algorithm using the fast subset convolution framework by Björklund et al.~[STOC'07]. In this paper we give an algorithm for {\sc Min $k$-Cut} with running time $O((2-\varepsilon)^n)$, for $\varepsilon > 10^{-50}$. This is the first algorithm for {\sc Min $k$-Cut} with running time $O(c^n)$ for $c < 2$.
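For reference, the simple $3^n \cdot n^{O(1)}$ dynamic program mentioned in the abstract can be sketched as follows (my own illustration; vertex sets are bitmasks, and best(mask, j) is the cheapest partition of mask into j non-empty parts):

```python
from functools import lru_cache

def min_k_cut(n, edges, k):
    """Exact Min k-Cut by subset DP in O(3^n * poly(n)) time.
    best(mask, j) = minimum number of edges of G[mask] crossing a
    partition of the vertex set `mask` into j non-empty parts."""
    full = (1 << n) - 1

    def crossing(part, rest):
        # number of edges between `part` and `rest`
        return sum(1 for (u, v) in edges
                   if ((part >> u) & 1 and (rest >> v) & 1)
                   or ((part >> v) & 1 and (rest >> u) & 1))

    @lru_cache(maxsize=None)
    def best(mask, j):
        if j == 1:
            return 0 if mask else float("inf")
        ans = float("inf")
        # enumerate every non-empty proper submask of mask as the last part
        sub = (mask - 1) & mask
        while sub:
            rest = mask ^ sub
            ans = min(ans, best(rest, j - 1) + crossing(sub, rest))
            sub = (sub - 1) & mask
        return ans

    return best(full, k)

# e.g. min_k_cut(4, [(0, 1), (1, 2), (0, 2), (2, 3)], 2) -> 1
```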
Fedor Fomin, Petr Golovach, Ignasi Sau, Giannos Stamoulis and Dimitrios M. Thilikos
Abstract: We introduce a novel model-theoretic framework inspired by graph modification and based on the interplay between model theory and algorithmic graph minors. The core of our framework is a new compound logic operating with two types of sentences, expressing graph modification: the modulator sentence, defining some property of the modified part of the graph, and the target sentence, defining some property of the resulting graph. In our framework, modulator sentences are in counting monadic second-order logic (CMSOL) and have models of bounded treewidth, while target sentences express first-order logic (FOL) properties along with minor-exclusion. Our logic captures problems that are not definable in first-order logic and, moreover, may have instances of unbounded treewidth. Also, it permits the modeling of wide families of problems involving vertex/edge removals, alternative modulator measures (such as elimination distance or G-treewidth), multistage modifications, and various cut problems. Our main result is that, for this compound logic, model-checking can be done in quadratic time. All derived algorithms are constructive and this, as a byproduct, extends the constructibility horizon of the algorithmic applications of the Graph Minors theorem of Robertson and Seymour. The proposed logic can be seen as a general framework to capitalize on the potential of the irrelevant vertex technique. It gives a way to deal with problem instances of unbounded treewidth, for which Courcelle’s theorem does not apply.
Shi Li
Abstract: We study nearly-linear time approximation algorithms for non-preemptive scheduling problems in two settings: the unrelated machine setting, and the identical machine with job precedence constraints setting, under well-studied objectives such as makespan and weighted completion time. For many problems, we develop nearly-linear time approximation algorithms with approximation ratios matching the current best ones achieved in polynomial time.
Our main technique is linear programming relaxation. For the unrelated machine setting, we formulate mixed packing and covering LP relaxations of nearly-linear size, and solve them approximately using the nearly-linear time solver of Young. For the makespan objective, we develop a rounding algorithm with $(2+\epsilon)$-approximation ratio. For the weighted completion time objective, we prove the LP is as strong as the rectangle LP used by Im and Li. This leads to a nearly-linear time $(1.45 + \epsilon)$-approximation for the problem.
For problems in the identical machine with precedence constraints setting, the precedence constraints cannot be formulated as packing or covering constraints. To achieve the nearly-linear running time, we define a polytope for the constraints, and leverage the multiplicative weight update (MWU) method with an oracle which always returns solutions in the polytope.
Along the way of designing the oracle, we encounter the single-commodity maximum flow problem over a directed acyclic graph $G = (V, E)$, where sources and sinks have limited supplies and demands, but edges have infinite capacities. We develop a $\frac{1}{1+\epsilon}$-approximation for the problem in time $O\left(\frac{|E|}{\epsilon}\log |V|\right)$, which may be of independent interest.
Spencer Compton, Slobodan Mitrović and Ronitt Rubinfeld
Abstract: Interval scheduling is a basic problem in the theory of algorithms and a classical task in combinatorial optimization.
We develop a set of techniques for partitioning and grouping jobs based on their starting and ending times that enable us to view an instance of interval scheduling on \emph{many} jobs as a union of multiple interval scheduling instances, each containing only \emph{a few} jobs. Instantiating these techniques in dynamic and local settings of computation leads to several new results.
For $(1+\epsilon)$-approximation of job scheduling of $n$ jobs on a single machine, we develop a fully dynamic algorithm with $O(\frac{\log{n}}{\epsilon})$ update and $O(\log{n})$ query worst-case time. Further, we design a local computation algorithm that uses only $O(\frac{\log{N}}{\epsilon})$ queries when all jobs have length at least $1$ and have starting/ending times within $[0,N]$.
Our techniques are also applicable in a setting where jobs have rewards/weights. For this case we design a fully dynamic \emph{deterministic} algorithm whose worst-case update and query time are $poly(\log n,\frac{1}{\epsilon})$.
Equivalently, this is \emph{the first} algorithm that maintains a $(1+\epsilon)$-approximation of the maximum independent set of a collection of weighted intervals with $poly(\log n,\frac{1}{\epsilon})$-time updates/queries.
This is an exponential improvement in $1/\epsilon$ over the running time of a randomized algorithm of Henzinger, Neumann, and Wiese~[SoCG, 2020], while also removing all dependence on the values of the jobs' starting/ending times and rewards, as well as removing the need for any randomness.
We extend our approaches for interval scheduling on a single machine to the setting with $M$ machines. In one method of extension, we achieve similar approximation factors at the expense of a slower update time in the dynamic setting.
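For context, the static, unweighted version of the underlying problem (a maximum set of pairwise disjoint intervals on one machine) is solved exactly by the classic earliest-finish-time greedy; a minimal sketch (my own illustration, not the paper's dynamic data structure):

```python
def max_nonoverlapping(intervals):
    """Classic greedy for (unweighted) interval scheduling on one
    machine: repeatedly take the job that ends earliest among those
    compatible with what was already chosen. Returns an optimal set."""
    chosen, last_end = [], float("-inf")
    for start, end in sorted(intervals, key=lambda it: it[1]):
        if start >= last_end:      # compatible with the current schedule
            chosen.append((start, end))
            last_end = end
    return chosen

# e.g. max_nonoverlapping([(0, 3), (2, 4), (4, 7), (1, 8)])
#      -> [(0, 3), (4, 7)]
```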
Sam Coy, Artur Czumaj, Peter Davies and Gopinath Mishra
Abstract: We consider the distributed complexity of the (degree+1)-list coloring problem, in which each node u of degree d(u) is assigned a palette of d(u)+1 colors, and the goal is to find a proper coloring using these color palettes. The (degree+1)-list coloring problem is a natural generalization of the classical (Δ+1)-coloring and (Δ+1)-list coloring problems, both being benchmark problems extensively studied in distributed and parallel computing.
In this paper we settle the complexity of the (degree+1)-list coloring problem in the Congested Clique model by showing that it can be solved deterministically in a constant number of rounds.
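The existence of such a coloring is immediate sequentially: when a vertex u is colored, at most d(u) of its neighbors are already colored, so some color in its palette of d(u)+1 colors is free. The paper's contribution is achieving this deterministically in a constant number of Congested Clique rounds; a minimal sketch of the sequential argument (my own illustration):

```python
def greedy_degree_plus_one_list_coloring(adj, palettes):
    """adj: dict vertex -> set of neighbors; palettes: dict vertex ->
    list of d(v)+1 allowed colors. Colors vertices one by one; a free
    palette color always exists since at most d(v) neighbors are colored."""
    color = {}
    for v in adj:
        used = {color[u] for u in adj[v] if u in color}
        color[v] = next(c for c in palettes[v] if c not in used)
    return color
```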
Shimon Kogan and Merav Parter
Abstract: For the problem of constructing linear-size additive emulators, the only known upper bound is given by an implicit construction of [Pettie, ICALP 2007] that provides a linear-size emulator with $+\widetilde{O}(n^{1/4})$ stretch. No improvement on this problem has been shown since then.
In this work we improve upon the long-standing additive stretch of $\widetilde{O}(n^{1/4})$ by presenting constructions of linear-size emulators with $\widetilde{O}(n^{0.222})$ additive stretch. Our constructions improve the state-of-the-art size vs.\ stretch tradeoff in the entire regime. For example, for every $\epsilon >1/7$, we provide $+n^{f(\epsilon)}$ emulators of size $\widetilde{O}(n^{1+\epsilon})$, for $f(\epsilon)=1/5-3\epsilon/5$. This should be compared with the current bound of $f(\epsilon)=1/4-3\epsilon/4$ by [Pettie, ICALP 2007].
The new emulators are based on an extended and optimized toolkit for computing weighted additive emulators with sublinear distance error. Our key construction provides a weighted modification of the well-known Thorup and Zwick emulators [SODA 2006]. We believe that this TZ variant might be of independent interest, especially for providing improved stretch for distant pairs.
Dmitry Paramonov, Gillat Kol, Klim Efremenko and Raghuvansh R. Saxena
Abstract: Single-hop radio networks (SHRN) are a well studied abstraction of communication over a wireless channel. In this model, in every round, each of the n participating parties may decide to broadcast a message to all the others, potentially causing collisions. We consider the SHRN model in the presence of stochastic message drops (i.e., erasures), where in every round, the message received by each party is erased (replaced by ⊥) with some small constant probability, independently.
Our main result is a constant rate coding scheme, allowing one to run protocols designed to work over the (noiseless) SHRN model over the SHRN model with erasures. Our scheme converts any protocol Π of length at most exponential in n over the SHRN model to a protocol Π′ that is resilient to a constant fraction of erasures and has length linear in the length of Π.
We mention that for the special case where the protocol Π is non-adaptive, i.e., the order of communication is fixed in advance, such a scheme was known. Nevertheless, adaptivity is widely used and is known to hugely boost the power of wireless channels, which makes handling the general case of adaptive protocols Π both important and more challenging. Indeed, to the best of our knowledge, our result is the first constant rate scheme that converts adaptive protocols to noise resilient ones in any multi-party model.
Chandra Chekuri and Rhea Jain
Abstract: Classical network design models, such as the Survivable Network Design problem (SNDP), are (partly) motivated by robustness to faults under the assumption that any subset of edges up to a specific number can fail. We consider non-uniform fault models where the subset of edges that fail can be specified in different ways. Our primary interest is in the flexible graph connectivity model, in which the edge set is partitioned into safe and unsafe edges. Given parameters p and q, the goal is to find a cheap subgraph that remains p-connected even after the failure of q unsafe edges. We also discuss the bulk-robust model and the relative survivable network design model. While SNDP admits a 2-approximation, the approximability of problems in these more complex models is much less understood even in special cases. We make two contributions.
Our first set of results is in the flexible graph connectivity model. Motivated by a conjecture that a constant factor approximation is feasible when p and q are fixed, we consider two special cases. For the s-t case we obtain an approximation ratio that depends only on p,q whenever p+q > pq/2, which includes (p,2) and (2,q) for all p,q. For the global connectivity case we obtain an O(q)-approximation for (2,q), and an O(p)-approximation for (p,2) and (p,3) for any p, and for $(p,4)$ when p is even. These results are based on an augmentation framework and on decomposing the families of cuts that need to be covered into a small number of uncrossable families.
Our second result is a poly-logarithmic approximation for a generalization of the bulk-robust model when the "width" of the given instance (the maximum number of edges that can fail in any particular scenario) is fixed. Via this, we derive corresponding approximations for the flexible graph connectivity model and the relative survivable network design model. We utilize a recent framework due to Chen et al. that was designed for handling group connectivity.
Noam Touitou
Abstract: Clairvoyant network design with deadlines or delay has been studied extensively, culminating in an $O(\log n)$-competitive general framework, where $n$ is the number of possible request types (Azar and Touitou, FOCS 2020). In the nonclairvoyant setting, the problem becomes much harder, as $\Omega(\sqrt{n})$ lower bounds are known for certain problems (Azar et al., STOC 2017). However, no frameworks are known for the nonclairvoyant setting, and previous work focuses only on specific problems, e.g., multilevel aggregation (Le et al., SODA 2023).
In this paper, we present the first nonclairvoyant frameworks for network design with deadlines or delay. These frameworks are nearly optimal: their competitive ratio is $\tilde{O}(\sqrt{n})$, which matches known lower bounds up to logarithmic factors.
Andrej Bogdanov and Alon Rosen
Abstract: Most $n$-dimensional subspaces $\mathcal{A}$ of $\mathbb{R}^m$ are $\Omega(\sqrt{m})$-far from the Boolean cube $\{-1,1\}^m$ when $n < cm$ for some constant $c > 0$. How hard is it to certify that the Nearest Boolean Vector (NBV) is at least $\gamma \sqrt{m}$ far from a given random $\mathcal{A}$?
Certifying NBV instances is relevant to the computational complexity of approximating the Sherrington-Kirkpatrick Hamiltonian, i.e. maximizing $x^TAx$ over the Boolean cube for a matrix $A$ sampled from the Gaussian Orthogonal Ensemble. The connection was discovered by Mohanty, Raghavendra, and Xu (STOC 2020). Improving on their work, Ghosh, Jeronimo, Jones, Potechin, and Rajendran (FOCS 2020) showed that certification is not possible in the sum-of-squares framework when $m \ll n^{1.5}$, even with distance $ \gamma= 0$.
We present a non-deterministic interactive certification algorithm for NBV when $m \gg n \log n$ and $\gamma\ll 1/mn^{1.5}$. The algorithm is obtained by adapting a public-key encryption scheme of Ajtai and Dwork.
Ittai Rubinstein
Abstract: The {\em insertion-deletion channel} takes as input a binary string $x \in\{0, 1\}^n$, and outputs a string $\widetilde{x}$ where some of the bits have been deleted and others inserted independently at random.
In the {\em trace reconstruction problem}, one is given many outputs (called {\em traces}) of the insertion-deletion channel applied to the same input message $x$, and is asked to recover the input message.
De, O'Donnell and Servedio (STOC 2017), and Nazarov and Peres (STOC 2017) showed that any string $x$ can be reconstructed from $\exp(O(n^{1/3}))$ traces.
Holden, Pemantle, Peres and Zhai (COLT 2018) adapted the techniques used to prove this upper bound to give an algorithm for average-case trace reconstruction with a sample complexity of $\exp(O(\log^{1/3} n))$.
However, it is not clear how to apply their techniques more generally, and in particular for the recent worst-case upper bound of $\exp(\widetilde{O}(n^{1/5}))$ shown by Chase (STOC 2021) for the deletion channel.
We prove a general reduction from the average case to smaller instances of a problem similar to the worst case.
Using this reduction and a generalization of Chase's bound, we construct an improved average-case algorithm with a sample complexity of $\exp(\widetilde{O}(\log^{1/5} n))$.
Additionally, we show that Chase's upper bound holds for the insertion-deletion channel as well.
Tobias Friedrich, Andreas Göbel, Maximilian Katzmann and Leon Schiller
Abstract: A recent trend in the context of graph theory is to bring theoretical analyses closer to empirical observations, by focusing on random graph models that are used to represent practical instances. There, it was observed that geometric inhomogeneous random graphs (GIRGs) yield good representations of complex real-world networks, by expressing edge probabilities as a function that depends on (heterogeneous) vertex weights and distances in some underlying geometric space that the vertices are distributed in. While most of the parameters of the model are understood well, it was unclear how the dimensionality of the ground space affects the structure of the graphs.
In this paper, we complement existing research into the dimension of geometric random graph models and the ongoing study of determining the dimensionality of real-world networks, by studying how the structure of GIRGs changes as the number of dimensions increases. We prove that, in the limit, GIRGs approach non-geometric inhomogeneous random graphs and present insights on how quickly the decay of the geometry impacts important graph structures. In particular, we study the expected number of cliques of a given size as well as the clique number and characterize phase transitions at which their behavior changes fundamentally. Finally, our insights help in better understanding previous results about the impact of the dimensionality on geometric random graphs.
Shaddin Dughmi, Yusuf Hakan Kalaycı and Neel Patel
Abstract: Motivated by recent progress on stochastic matching with few queries, we embark on a systematic study of the sparsification of stochastic packing problems more generally. Specifically, we consider packing problems where elements are independently active with a given probability $p$, and ask whether one can (non-adaptively) compute a "sparse" set of elements guaranteed to contain an approximately optimal solution to the realized (active) subproblem. We seek structural and algorithmic results of broad applicability to such problems.
Our focus is on computing sparse sets containing on the order of $d$ feasible solutions to the packing problem, where $d$ is linear or at most polynomial in $\frac{1}{p}$. Crucially, we require $d$ to be independent of the number of elements, or any parameter related to the "size" of the packing problem. We refer to $d$ as the "degree" of the sparsifier, as is consistent with graph theoretic degree in the special case of matching.
First, we exhibit a generic sparsifier of degree $\frac{1}{p}$ based on contention resolution. This sparsifier's approximation ratio matches the best contention resolution scheme (CRS) for any packing problem for additive objectives, and approximately matches the best monotone CRS for submodular objectives. Second, we embark on outperforming this generic sparsifier for additive optimization over matroids and their intersections, as well as weighted matching. These improved sparsifiers feature different algorithmic and analytic approaches, and have degree linear in $\frac{1}{p}$. In the case of a single matroid, our sparsifier tends to the optimal solution. In the case of weighted matching, we combine our contention-resolution-based sparsifier with technical approaches of prior work to improve the state of the art ratio from 0.501 to 0.536. Third, we examine packing problems with submodular objectives. We show that even the simplest such problems do not admit sparsifiers approaching optimality. We then outperform our generic sparsifier for some special cases with submodular objectives.
Or Zamir
Abstract: Let $\mathcal{A}$ be an algorithm with expected running time $e^X$, conditioned on the value of some random variable $X$.
We construct an algorithm $\mathcal{A}'$ with expected running time $O\left(e^{E[X]}\right)$ that fully executes $\mathcal{A}$.
In particular, an algorithm whose running time is a random variable $T$ can be converted to one with expected running time $O\left(e^{E[\ln T]}\right)$, which is never worse than $O(E[T])$.
No information about the distribution of $X$ is required for the construction of $\mathcal{A}'$.
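The flavor of the result can be illustrated with a restart schedule: run $\mathcal{A}$ under a time budget, and on timeout restart it from scratch with fresh randomness and a doubled budget. This sketch is my own simplification, assuming $\mathcal{A}$ supports budgeted execution; the paper's actual construction and analysis are more delicate:

```python
def boosted(run_with_budget, initial_budget=1.0):
    """Run the algorithm with geometrically growing time budgets,
    restarting with fresh randomness after each timeout.
    run_with_budget(b) is assumed to return a result, or None if the
    run exceeded budget b and was aborted."""
    budget = initial_budget
    while True:
        result = run_with_budget(budget)
        if result is not None:
            return result
        budget *= 2  # fresh, independent run with twice the budget
```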
Michael Whitmeyer and Siddharth Iyer
Abstract: Given a bounded function $f$ on $\mathbb F_2^n$, this work seeks a large affine subspace $\mathcal U$ such that $f$, when restricted to $\cal U$, has small nontrivial Fourier coefficients.
It is not hard to see that all Fourier coefficients of a random function are already extremely small, so we focus on a natural class of structured functions.
For the natural class of Fourier degree $d$ functions $f:\mathbb F_2^n \to [-1,1]$, we show that there exists an affine subspace of dimension at least $ \tilde\Omega(n^{1/d!}k^{-2})$, wherein all of $f$'s nontrivial Fourier coefficients become smaller than $ 2^{-k}$.
To complement this result, we show the existence of degree $d$ functions with coefficients larger than $2^{-d\log n}$ when restricted to any affine subspace of dimension larger than $\Omega(dn^{1/(d-1)})$. In addition, we give explicit examples of functions with analogous but weaker properties.
Along the way, we provide multiple characterizations of the Fourier coefficients of functions restricted to subspaces of $\mathbb F_2^n$ that may be useful in other contexts. Finally, we highlight applications and connections of our results to parity kill number and affine dispersers.
Jakob Bæk Tejs Houen and Mikkel Thorup
Abstract: The \emph{Sparse Johnson-Lindenstrauss Transform} of Kane and Nelson (SODA 2012) provides a linear dimensionality-reducing map $A \in \mathbb{R}^{m \times u}$ in $\ell_2$ that preserves distances up to distortion of $1 + \varepsilon$ with probability $1 - \delta$, where $m = O(\varepsilon^{-2} \log 1/\delta)$ and each column of $A$ has $O(\varepsilon m)$ non-zero entries.
The previous analyses of the Sparse Johnson-Lindenstrauss Transform all assumed access to an $\Omega(\log 1/\delta)$-wise independent hash function.
The main contribution of this paper is a more general analysis of the Sparse Johnson-Lindenstrauss Transform with fewer assumptions on the hash function.
We also show that the \emph{Mixed Tabulation hash function} of Dahlgaard, Knudsen, Rotenberg, and Thorup (FOCS 2015) satisfies the conditions of our analysis, thus giving us the first analysis of a Sparse Johnson-Lindenstrauss Transform that works with a practical hash function.
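For concreteness, here is a minimal numpy sketch of building and applying a sparse JL-style matrix with $s$ non-zero entries per column, using fully random choices in place of the hash functions analyzed in the paper (all names are mine):

```python
import numpy as np

def sparse_jl(u, m, s, rng):
    """Build a sparse JL-style map A in R^{m x u}: each column gets
    s non-zero entries of value +-1/sqrt(s) placed in random rows,
    so that E[||Ax||^2] = ||x||^2."""
    A = np.zeros((m, u))
    for col in range(u):
        rows = rng.choice(m, size=s, replace=False)
        signs = rng.choice([-1.0, 1.0], size=s)
        A[rows, col] = signs / np.sqrt(s)
    return A

rng = np.random.default_rng(0)
A = sparse_jl(u=1000, m=128, s=8, rng=rng)
x = rng.standard_normal(1000)
print(np.linalg.norm(A @ x) / np.linalg.norm(x))  # typically close to 1
```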
Ruizhe Zhang and Xinzhi Zhang
Abstract: In 2013, Marcus, Spielman, and Srivastava resolved the famous Kadison-Singer conjecture. Their theorem states that for $n$ independent random vectors $v_1,\cdots, v_n$ that have expected squared norm bounded by $\epsilon$ and are in isotropic position in expectation, there is a positive probability that the determinant polynomial $\det(xI - \sum_{i=1}^n v_iv_i^\top)$ has roots bounded by $(1 + \sqrt{\epsilon})^2$. An interpretation of the Kadison-Singer theorem is that we can always find a partition of the vectors $v_1,\cdots,v_n$ into two sets with a low discrepancy in terms of the spectral norm (in other words, relying on the determinant polynomial).
In this paper, we provide two results for a broader class of polynomials, the hyperbolic polynomials. Furthermore, our results are in two generalized settings:
* The first one shows that the Kadison-Singer result requires a weaker assumption;
* The second one relaxes the Kadison-Singer result's distribution assumption to the Strongly Rayleigh distribution.
To the best of our knowledge, the previous results only support determinant polynomials [Anari and Oveis Gharan'14, Kyng, Luh and Song'20]. It is unclear whether they can be generalized to a broader class of polynomials. In addition, we provide a sub-exponential time algorithm for constructing the solutions guaranteed by our results.
Daniel Hader and Matthew Patitz
Abstract: Algorithmic self-assembly occurs when components in a disorganized collection autonomously combine to form structures and, by their design and the dynamics of the system, are forced to intrinsically follow the execution of algorithms. Motivated by applications in DNA-nanotechnology, theoretical investigations in algorithmic tile-based self-assembly have blossomed into a mature theory with research strongly leveraging tools from computational theory, complexity theory, information theory, and graph theory to develop a wide range of models and to show that many are computationally universal, while also exposing a wide variety of powers and limitations of each. In addition to computational universality, the abstract Tile-Assembly Model (aTAM) was shown to be intrinsically universal (FOCS 2012), a strong notion of completeness where a single tile set is capable of simulating the full dynamics of all systems within the model; however, this result fundamentally required non-deterministic tile attachments. This was later confirmed necessary when it was shown that the class of directed aTAM systems, those in which all possible sequences of tile attachments eventually result in the same terminal assembly, is not intrinsically universal (FOCS 2016). Furthermore, it was shown that the non-cooperative aTAM, where tiles only need to match on 1 side to bind rather than 2 or more, is not intrinsically universal (SODA 2014) nor computationally universal (STOC 2017). Building on these results to further investigate the impacts of other dynamics, Hader et al. examined several tile-assembly models which varied across (1) the numbers of dimensions used, (2) restrictions imposed on the diffusion of tiles through space, and (3) whether each system is directed, and determined which models exhibited intrinsic universality (SODA 2020). Such results have shed much light on the roles of various aspects of the dynamics of tile-assembly and their effects on the universality of each model. In this paper we extend that previous work to provide direct comparisons of the various models against each other by considering intrinsic simulations between models. Our results show that in some cases, one model is strictly more powerful than another, and in others, pairs of models have mutually exclusive capabilities. This direct comparison of models helps expose the impacts of these three important aspects of self-assembling systems, and further helps to define a hierarchy of tile-assembly models analogous to the hierarchies studied in traditional models of computation.
Robert Ferens and Marek Szykuła
Abstract: A complete deterministic finite (semi)automaton (DFA) with a set of states $Q$ is \emph{completely reachable} if every non-empty subset of $Q$ can be obtained as the image of the action of some word applied to $Q$.
The concept of completely reachable automata appeared several times, in particular, in connection with synchronizing automata; the class contains the Černý automata and covers a few separately investigated subclasses.
The notion was introduced by Bondar and Volkov (2016), who also raised the question about the complexity of deciding if an automaton is completely reachable.
We develop a polynomial-time algorithm for this problem, which is based on a new complement-intersecting technique for finding an extending word for a subset of states.
The algorithm works in $O(|\Sigma|\cdot n^3)$ time, where $n=|Q|$ is the number of states and $|\Sigma|$ is the size of the input alphabet.
Finally, we prove a weak version of Don's conjecture for this class of automata: a subset of size $k$ is reachable with a word of length at most $2\cdot(n-k)\cdot n$.
This implies a quadratic upper bound in $n$ on the length of the shortest synchronizing words (reset threshold) for the class of completely reachable automata and generalizes earlier upper bounds derived for its subclasses.
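To illustrate the definition (not the paper's polynomial-time algorithm), complete reachability can be checked by brute force: collect all images of $Q$ under words via breadth-first search and test whether every non-empty subset occurs. A sketch, exponential in the number of states:

```python
def is_completely_reachable(n, delta):
    """n states 0..n-1; delta[a] is the transition function of letter a,
    given as a list where delta[a][q] is the successor of state q.
    Checks whether every non-empty subset of states is the image of the
    full state set under some word (brute force, exponential in n)."""
    full = frozenset(range(n))
    seen = {full}
    frontier = [full]
    while frontier:
        s = frontier.pop()
        for action in delta:
            img = frozenset(action[q] for q in s)
            if img not in seen:
                seen.add(img)
                frontier.append(img)
    # images of non-empty sets are non-empty, so it suffices to count
    return len(seen) == 2 ** n - 1
```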
Olivier Carton, Gaëtan Douéneau-Tabot, Emmanuel Filiot, and Sarah Winter
Abstract: Regular functions of infinite words are (partial) functions realized by deterministic two-way transducers with infinite look-ahead. Equivalently, Alur et al. have shown that they correspond to functions realized by deterministic Muller streaming string transducers, and to functions defined by MSO-transductions. Regular functions are however not computable in general (for a classical extension of Turing computability to infinite inputs), and we consider in this paper the class of deterministic regular functions of infinite words, realized by deterministic two-way transducers without look-ahead. We prove that it is a well-behaved class of functions: they are computable, closed under composition, characterized by the guarded fragment of MSO-transductions, by deterministic Büchi streaming string transducers, by deterministic two-way transducers with finite look-ahead, and by finite compositions of sequential functions and one fixed basic function called map-copy-reverse.
Bader Abu Radi and Orna Kupferman
Abstract: Nondeterminism is a fundamental notion in Theoretical Computer Science.
A nondeterministic automaton is {\em semantically deterministic\/} (SD) if different nondeterministic choices in the automaton lead to equivalent states. Semantic determinism is interesting as it is a natural relaxation of determinism, and as some applications of automata in formal methods require deterministic automata, yet in fact can use automata with some level of nondeterminism, tightly related to semantic determinism.
In the context of finite words, semantic determinism coincides with determinism, in the sense that every pruning of an SD automaton to a deterministic one results in an equivalent automaton. We study SD automata on infinite words, focusing on Büchi, co-Büchi, and weak automata. We show that there, while semantic determinism does not increase the expressive power, the combinatorial and computational properties of SD automata are very different from those of deterministic automata.
In particular, SD Büchi and co-Büchi automata are exponentially more succinct than deterministic ones (in fact, also exponentially more succinct than history-deterministic automata), their complementation involves an exponential blow-up, and decision procedures for them, like universality and minimization, are PSPACE-complete. For weak automata, we show that while an SD weak automaton need not be pruned to an equivalent deterministic one, it can be determinized to an equivalent deterministic weak automaton with the same state space, implying also efficient complementation and decision procedures for SD weak automata.
Mikołaj Bojańczyk and Lê Thành Dũng Nguyễn
Abstract: We consider regular string-to-string functions, i.e. functions that are recognized by copyless streaming string transducers, or any of their equivalent models, such as deterministic two-way automata. We give yet another characterization, which is very succinct: finiteness-preserving functors from the category of semigroups to itself, together with a certain output function that is a natural transformation.
Patricia Bouyer, Nathanaël Fijalkow, Mickael Randour, and Pierre Vandenhove
Abstract: This paper studies two-player zero-sum games played on graphs and makes contributions toward the following question: given an objective, how much memory is required to play optimally for that objective? We study regular objectives, where the goal of one of the two players is that eventually the sequence of colors along the play belongs to some regular language over finite words. We obtain different characterizations of the chromatic memory requirements for such objectives for both players, from which we derive complexity-theoretic statements: deciding whether there exist small memory structures sufficient to play optimally is NP-complete for both players. Some of our characterization results apply to a more general class of objectives: topologically closed and topologically open sets.
Antonio Casares and Pierre Ohlmann
Abstract: This paper is concerned with games of infinite duration played over potentially infinite graphs. Recently, Ohlmann (TheoretiCS 2023) presented a characterisation of objectives admitting optimal positional strategies, by means of universal graphs: an objective is positional if and only if it admits well-ordered monotone universal graphs. We extend Ohlmann’s characterisation to encompass (finite or infinite) memory upper bounds. We prove that objectives admitting optimal strategies with ε-memory less than m (a memory that cannot be updated when reading an ε-edge) are exactly those which admit well-founded monotone universal graphs whose antichains have size bounded by m. We also give a characterisation of chromatic memory by means of appropriate universal structures. Our results apply to finite as well as infinite memory bounds (for instance, to objectives with finite but unbounded memory, or with countable memory strategies). We illustrate the applicability of our framework by carrying out a few case studies, we provide examples witnessing limitations of our approach, and we discuss general closure properties which follow from our results.
Mohak Goyal, Sukolsak Sakshuwong, Sahasrajit Sarmasarkar and Ashish Goel
Abstract: We study low sample complexity mechanisms in participatory budgeting (PB), where each voter votes for a preferred allocation of funds to various projects, subject to project costs and total spending constraints. We analyze the distortion that PB mechanisms introduce relative to the minimum-social-cost outcome in expectation. The Random Dictator mechanism for this problem obtains a distortion of 2. In a special case where every voter votes for exactly one project, [Fain et al. '17] obtain a distortion of 4/3. We show that when PB outcomes are determined as any convex combination of the votes of two voters, the distortion is 2. When three uniformly randomly sampled votes are used, we give a PB mechanism that obtains a distortion of at most 1.66, thus breaking the barrier of 2 with the smallest possible sample complexity.
We give a randomized Nash bargaining scheme where two uniformly randomly chosen voters bargain with the disagreement point as the vote of a voter chosen uniformly at random. This mechanism has a distortion of at most 1.66. We provide a lower bound of 1.38 for the distortion of this scheme. Further, we show that PB mechanisms that output a median of the votes of three voters chosen uniformly at random have a distortion of at most 1.80.
Siddharth Barman and Pooja Kulkarni
Abstract: Cake cutting is a classic model for studying fair division of a heterogeneous, divisible resource among agents with individual preferences. Addressing cake division under a typical requirement that each agent must receive a connected piece of the cake, we develop approximation algorithms for finding envy-free (fair) cake divisions. In particular, this work improves the state-of-the-art additive approximation bound for this fundamental problem. Our results hold for general cake division instances in which the agents' valuations satisfy basic assumptions and are normalized (to have value $1$ for the cake). Furthermore, the developed algorithms execute in polynomial time under the standard Robertson-Webb query model.
Prior work has shown that one can efficiently compute a cake division (with connected pieces) in which the additive envy of any agent is at most 1/3. An efficient algorithm is also known for finding connected cake divisions that are (almost) 1/2-multiplicatively envy-free. Improving the additive approximation guarantee and maintaining the multiplicative one, we develop a polynomial-time algorithm that computes a connected cake division that is both (1/4 + o(1))-additively envy-free and (1/2 - o(1))-multiplicatively envy-free. Our algorithm is based on the ideas of interval growing and envy-cycle elimination.
In addition, we study cake division instances in which the number of distinct valuations across the agents is parametrically bounded. We show that such cake division instances admit a fully polynomial-time approximation scheme for connected envy-free cake division.
Michal Feldman, Federico Fusco, Simon Mauras and Rebecca Reiffenhäuser
Abstract: We study truthful mechanisms for welfare maximization in online bipartite matching. In our (multi-parameter) setting, every buyer is associated with a (possibly private) desired set of items, and has a private value for being assigned an item in her desired set. Unlike most online matching settings, where agents arrive online, in our setting the items arrive online in an adversarial order while the buyers are present for the entire duration of the process. This poses a significant challenge to the design of truthful mechanisms, due to the ability of buyers to strategize over future rounds.
We provide an almost full picture of the competitive ratios in different scenarios, including myopic vs. non-myopic agents, tardy vs. prompt payments, and private vs. public desired sets.
Among other results, we identify the frontier for which the celebrated $e/(e-1)$ competitive ratio for the vertex-weighted online matching of Karp, Vazirani and Vazirani extends to truthful agents and online items.
Ishan Agarwal and Richard Cole
Abstract: To guarantee all agents are matched, the classic Deferred Acceptance algorithm needs complete preference lists. In practice, preference lists are short, yet stable matching still works well. This raises two questions:
Why does it work well?
Which proposals should agents include in their preference lists?
We study these questions in a model, introduced by Lee, with preferences based on correlated cardinal utilities: these utilities are based on common public ratings of each agent together with individual private adjustments. Lee showed that for suitable utility functions, in large markets, with high probability, for most agents, all stable matchings yield similar-valued utilities. By means of a new analysis, we strengthen Lee's result, showing that in large markets, with high probability, for all but the agents with the lowest public ratings, all stable matchings yield similar-valued utilities. We can then deduce that for all but the agents with the lowest public ratings, each agent has an easily identified preference list of length O(log n) that includes all of its stable matches, addressing the second question above. We note that this identification uses an initial communication phase.
We extend these results to settings where the two sides have unequal numbers of agents, to many-to-one settings, e.g. employers and workers, and we also show the existence of an epsilon-Bayes-Nash equilibrium in which every agent makes relatively few proposals. These results all rely on a new technique for sidestepping the conditioning between the tentative matching events that occur over the course of a run of the Deferred Acceptance algorithm. We complement these theoretical results with an experimental study.
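For reference, the classic Deferred Acceptance (Gale-Shapley) algorithm that the paper analyzes, in its textbook complete-list form (a standard sketch, assuming equally many proposers and receivers and complete preference lists):

```python
def deferred_acceptance(proposer_prefs, receiver_prefs):
    """proposer_prefs[p] is p's preference list over receivers (best
    first); receiver_prefs[r] likewise over proposers. Returns the
    proposer-optimal stable matching as a dict receiver -> proposer."""
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in receiver_prefs.items()}
    next_choice = {p: 0 for p in proposer_prefs}
    match = {}                      # receiver -> proposer
    free = list(proposer_prefs)
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if r not in match:
            match[r] = p
        elif rank[r][p] < rank[r][match[r]]:
            free.append(match[r])   # r trades up; old partner is free again
            match[r] = p
        else:
            free.append(p)          # r rejects p; p will propose further
    return match
```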
Jin-Yi Cai and Ben Young
Abstract: Recently, Mančinska and Roberson proved that two graphs $G$ and $G'$ are quantum isomorphic if and only if they admit the same number of homomorphisms from all planar graphs. We extend this result to planar #CSP with any pair of sets $\mathcal{F}$ and $\mathcal{F}'$ of real-valued, arbitrary-arity constraint functions. Graph homomorphism is the special case where each of $\mathcal{F}$ and $\mathcal{F}'$ contains a single symmetric 0-1-valued binary constraint function. Our treatment uses the framework of planar Holant problems. To prove that quantum isomorphic constraint function sets give the same value on any planar #CSP instance, we apply a novel form of holographic transformation of Valiant, using the quantum permutation matrix $\mathcal{U}$ defining the quantum isomorphism. Due to the noncommutativity of $\mathcal{U}$'s entries, it turns out that this form of holographic transformation is only applicable to planar Holant. To prove the converse, we introduce the quantum automorphism group Qut$(\mathcal{F})$ of a set of constraint functions $\mathcal{F}$, and characterize the intertwiners of Qut$(\mathcal{F})$ as the signature matrices of planar Holant$(\mathcal{F}\,|\,\mathcal{EQ})$ quantum gadgets. Then we define a new notion of (projective) connectivity for constraint functions and reduce arity while preserving the quantum automorphism group. Finally, to address the challenges posed by generalizing from 0-1 valued to real-valued constraint functions, we adapt a technique of Lovász in the classical setting for isomorphisms of real-weighted graphs to the setting of quantum isomorphisms.
Minglong Qin and Penghui Yao
Abstract: This paper considers the decidability of fully quantum nonlocal games with noisy maximally entangled states. Fully quantum nonlocal games are a generalization of nonlocal games, where both questions and answers are quantum and the referee performs a binary POVM measurement to decide whether they win the game after receiving the quantum answers from the players. The quantum value of a fully quantum nonlocal game is the supremum of the probability that they win the game, where the supremum is taken over all the possible entangled states shared between the players and all the valid quantum operations performed by the players. The seminal work MIP∗ = RE implies that it is undecidable to approximate the quantum value of a fully quantum nonlocal game. This still holds even if the players are only allowed to share (arbitrarily many copies of) maximally entangled states.
This paper investigates the case that the shared maximally entangled states are noisy. We prove that there is a computable upper bound on the copies of noisy maximally entangled states for the players to win a fully quantum nonlocal game with a probability arbitrarily close to the quantum value. This implies that it is decidable to approximate the quantum values of these games. Hence, the hardness of approximating the quantum value of a fully quantum nonlocal game is not robust against the noise in the shared states. This paper is built on the framework for the decidability of non-interactive simulations of joint distributions and generalizes the analogous result for nonlocal games. We extend the theory of Fourier analysis to the space of super-operators and prove several key results including an invariance principle and a dimension reduction for super-operators. These results are interesting in their own right and are believed to have further applications.
Alexandru Gheorghiu, Tony Metger and Alexander Poremba
Abstract: Quantum mechanical effects have enabled the construction of cryptographic primitives that are impossible classically. For example, quantum copy-protection allows for a program to be encoded in a quantum state in such a way that the program can be evaluated, but not copied. Many of these cryptographic primitives are two-party protocols, where one party, Bob, has full quantum computational capabilities, and the other party, Alice, is only required to send random BB84 states to Bob. In this work, we show how such protocols can generically be converted to ones where Alice is fully classical, assuming that Bob cannot efficiently solve the LWE problem. In particular, this means that all communication between (classical) Alice and (quantum) Bob is classical, yet they can still make use of cryptographic primitives that would be impossible if both parties were classical. We apply this conversion procedure to obtain quantum cryptographic protocols with classical communication for unclonable encryption, copy-protection, computing on encrypted data, and verifiable blind delegated computation. The key technical ingredient for our result is a protocol for classically-instructed parallel remote state preparation of BB84 states. This is a multi-round protocol between (classical) Alice and (quantum polynomial-time) Bob that allows Alice to certify that Bob must have prepared n uniformly random BB84 states (up to a change of basis on his space). Furthermore, Alice knows which specific BB84 states Bob has prepared, while Bob himself does not. Hence, the situation at the end of this protocol is (almost) equivalent to one where Alice sent n random BB84 states to Bob. This allows us to replace the step of preparing and sending BB84 states in existing protocols by our remote-state preparation protocol in a generic and modular way.
Honghao Fu, Daochen Wang and Qi Zhao
Abstract: Self-testing is a fundamental feature of quantum mechanics that allows a classical verifier to force untrusted quantum devices to prepare certain states and perform certain measurements on them. The standard approach assumes at least two spatially separated devices. Recently, Metger and Vidick [Quantum, 2021] showed that a single EPR pair of a single quantum device can be self-tested under computational assumptions. In this work, we generalize their results to give the first parallel self-test of N EPR pairs and measurements on them in the single-device setting under the same computational assumptions. We show that our protocol can be passed with probability negligibly close to 1 by an honest quantum device using poly(N) resources. Moreover, we show that any quantum device that fails our protocol with probability at most ε must be poly(N, ε)-close to being honest in the appropriate sense. In particular, our protocol can test any distribution over tensor products of computational or Hadamard basis measurements, making it suitable for applications such as device-independent quantum key distribution under computational assumptions. Moreover, a simplified version of our protocol is the first that can efficiently certify an arbitrary number of qubits of a single cloud quantum computer using only classical communication.
Peyman Afshani, Pingan Cheng, Aniket Basu Roy and Zhewei Wei
Abstract: We study the query version of the heavy hitter and approximate quantile problems. In the former problem, the input is a parameter $\varepsilon$ and a set $P$ of $n$ points in $\mathbb{R}^d$, where each point is assigned a color from a set $C$, and the goal is to build a structure such that given any geometric range $\gamma$, we can efficiently find a list of heavy hitters in $\gamma\cap P$, i.e., colors that appear at least $\varepsilon |\gamma \cap P|$ times in $\gamma \cap P$, as well as their approximate frequencies with an additive error of $\varepsilon |\gamma \cap P|$. In the latter problem, each point is assigned a weight from a totally ordered universe and the query must output a sequence $S$ of $1/\varepsilon$ weights such that the $i$-th weight in $S$ has approximate rank $i\varepsilon|\gamma\cap P|$, meaning rank $i\varepsilon|\gamma\cap P|$ up to an additive error of $\varepsilon|\gamma\cap P|$. Previously, optimal results were only known for the 1D version of the problem [WY11], but a few sub-optimal methods were available in higher dimensions [AW17, ACH+12].
We study the problems for two important classes of geometric ranges: 3D halfspace and 3D dominance queries. It is known that many other important queries can be reduced to these two, namely, 1D interval stabbing or interval containment, 2D three-sided queries, and 2D circular as well as 2D $k$-nearest-neighbor queries. We consider the real RAM model of computation, where integer registers of size $w$ bits, $w = \Theta(\log n)$, are also available. For dominance queries, we show optimal solutions for both the heavy hitter and approximate quantile problems: using linear space, we can answer both queries in time $O(\log n + 1/\varepsilon)$. Note that as the output size is $\frac{1}{\varepsilon}$, after investing the initial $O(\log n)$ search time, our structure takes on average $O(1)$ time to find a heavy hitter or a quantile! For more general halfspace heavy hitter queries, the same optimal query time can be achieved by increasing the space by an extra $\log_w\frac{1}{\varepsilon}$ (resp. $\log\log_w\frac{1}{\varepsilon}$) factor in 3D (resp. 2D). By spending extra $\log^{O(1)}\frac{1}{\varepsilon}$ factors in both time and space, we can also support quantile queries. We remark that it is hopeless to achieve a similar query bound for dimensions 4 or higher unless significant advances are made on the data structure side of the theory of geometric approximations.
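To fix the definitions, a single heavy-hitter query can be answered naively by an exact scan of the points falling in the range; the paper's data structures exist precisely to avoid this linear per-query cost. A minimal sketch (my own):

```python
from collections import Counter

def heavy_hitters(colors_in_range, eps):
    """colors_in_range: list of the colors of the points lying in the
    query range gamma. Returns every color occurring at least
    eps * |gamma cap P| times, with its exact frequency."""
    counts = Counter(colors_in_range)
    threshold = eps * len(colors_in_range)
    return {c: k for c, k in counts.items() if k >= threshold}
```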
Gramoz Goranci and Monika Henzinger
Abstract: We show a $(1+\epsilon)$-approximation algorithm for maintaining maximum $s$-$t$ flow under $m$ edge insertions in $m^{1/2+o(1)} \epsilon^{-1/2}$ amortized update time for directed, unweighted graphs. This constitutes the first sublinear dynamic maximum flow algorithm in general sparse graphs with arbitrarily good approximation guarantee.
Furthermore, we give an algorithm that maintains an exact maximum $s$-$t$ flow under $m$ edge insertions in an $n$-node graph in $\tilde{O}(n^{5/2})$ total update time. For sufficiently dense graphs, this gives the first exact incremental algorithm with sublinear amortized update time for maintaining maximum flows.
Davide Bilò, Keerti Choudhary, Sarel Cohen, Tobias Friedrich, Simon Krogmann and Martin Schirneck
Abstract: We study the problem of estimating the $ST$-diameter of a graph that is subject to a bounded number of edge failures. An \emph{$f$-edge fault-tolerant $ST$-diameter oracle} ($f$-FDO-$ST$) is a data structure that preprocesses a given graph $G$, two sets of vertices $S,T$, and a positive integer $f$. When queried with a set $F$ of at most $f$ edges, the oracle returns an estimate $\widehat{D}$ of the $ST$-diameter $diam(G-F,S,T)$, the maximum distance between vertices in $S$ and $T$ in $G\,{-}\,F$. The oracle has stretch $\sigma \geq 1$ if $diam(G-F,S,T) \leq \widehat{D} \leq \sigma \cdot diam(G-F,S,T)$. If $S$ and $T$ both contain all vertices, the data structure is called an \emph{$f$-edge fault-tolerant diameter oracle} ($f$-FDO).
We design new $f$-FDOs and $f$-FDO-$ST$s via reductions from both \emph{all-pairs} and \emph{single-source} \emph{$f$-edge fault-tolerant Distance Sensitivity Oracles} ($f$-DSOs), i.e., oracles that can estimate distances even when up to $f$ edges of $G$ fail. We obtain new tradeoffs between the size of the data structure, stretch guarantee, query and preprocessing times for $f$-FDOs and $f$-FDO-$ST$s by combining our reductions with the all-pairs/single-source $f$-DSOs of Weimann and Yuster [FOCS 2010], Chechik et al.~[ESA 2010], Baswana and Khanna [STACS 2010], Chechik et al.~[SODA 2017], Brand and Saranurak [FOCS 2019], Duan and Ren [STOC 2020], Bilò et al.~[Algorithmica 2022], Singh and Gupta [ICALP 2018], and Baswana et al.~[SODA 2018].
We also provide an information-theoretic lower bound on the space requirement of approximate $f$-FDOs. We show that there exists a family of graphs for which any $f$-FDO with sensitivity $f \ge 2$ and stretch less than $5/3$ requires $\Omega(n^{3/2})$ bits of space, regardless of the query time.
Louis Esperet, Nathaniel Harms and Viktor Zamaraev
Abstract: For any hereditary graph class F, we construct optimal adjacency labeling schemes for the classes of subgraphs and induced subgraphs of Cartesian products of graphs in F. As a consequence, we show that, if F admits efficient adjacency labels (or, equivalently, small induced-universal graphs) meeting the information-theoretic minimum, then the classes of subgraphs and induced subgraphs of Cartesian products of graphs in F do too. Our proof uses ideas from randomized communication complexity and hashing, and improves upon recent results of Chepoi, Labourel, and Ratel [Journal of Graph Theory, 2020].
Vladimir Braverman, Robert Krauthgamer, Aditya Krishnan and Shay Sapir
Abstract: Many streaming algorithms provide only a high-probability relative approximation. These two relaxations, of allowing approximation and randomization, seem necessary -- for many streaming problems, both relaxations must be employed simultaneously, to avoid an exponentially larger (and often trivial) space complexity. A common drawback of these randomized approximate algorithms is that independent executions on the same input have different outputs, that depend on their random coins. \emph{Pseudo-deterministic} algorithms combat this issue, and for every input, they output with high probability the same ``canonical'' solution.
We consider perhaps the most basic problem in data streams, of counting the number of items in a stream of length at most $n$. Morris's counter [CACM, 1978] is a randomized approximation algorithm for this problem that uses $O(\log\log n)$ bits of space, for every fixed approximation factor (greater than $1$). Goldwasser, Grossman, Mohanty and Woodruff [ITCS 2020] asked whether pseudo-deterministic approximation algorithms can match this space complexity. Our main result answers their question negatively, and shows that such algorithms must use $\Omega(\sqrt{\log n / \log\log n})$ bits of space.
Our approach is based on a problem that we call \emph{Shift Finding}, and may be of independent interest. In this problem, one has query access to a shifted version of a known string $F\in\{0,1\}^{3n}$, which is guaranteed to start with $n$ zeros and end with $n$ ones, and the goal is to find the unknown shift using a small number of queries. We provide for this problem an algorithm that uses $O(\sqrt{n})$ queries. It remains open whether $poly(\log n)$ queries suffice; if true, then our techniques immediately imply a nearly-tight $\Omega(\log n/\log\log n)$ space bound for pseudo-deterministic approximate counting.
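Morris's counter itself is short enough to state in full; a minimal sketch of the basic variant (the space savings come from storing only the exponent x):

```python
import random

class MorrisCounter:
    """Approximate counter storing only x ~ log2(count): it increments
    x with probability 2^-x, and estimates the count as 2^x - 1,
    which is an unbiased estimate of the number of increments."""
    def __init__(self):
        self.x = 0

    def increment(self):
        if random.random() < 2.0 ** -self.x:
            self.x += 1

    def estimate(self):
        return 2.0 ** self.x - 1.0
```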
Sudatta Bhattacharya and Michal Koucky
Abstract: In this paper we give an algorithm for streaming $k$-edit approximate pattern matching which uses space $\widetilde{O}(k^2)$ and time $\widetilde{O}(k^3)$ per arriving symbol. This improves substantially on the recent algorithm of Kociumaka, Porat and Starikovskaya (2021) which uses space $\widetilde{O}(k^5)$ and time $\widetilde{O}(k^8)$ per arriving symbol. In the $k$-edit approximate pattern matching problem we get a pattern $P$ and text $T$ and we want to identify all substrings of the text $T$ that are at edit distance at most $k$ from $P$. In the streaming version of this problem both the pattern and the text arrive in a streaming fashion symbol by symbol and after each symbol of the text we need to report whether there is a current suffix of the text with edit distance at most $k$ from $P$. We measure the total space needed by the algorithm and time needed per arriving symbol.
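For the problem definition, the offline (non-streaming) semantics can be computed with the standard Sellers dynamic program in O(|P| |T|) time; the streaming algorithm must produce these answers online in $\widetilde{O}(k^3)$ time per symbol. A minimal sketch (my own):

```python
def kedit_occurrences(pattern, text, k):
    """Sellers' dynamic program for approximate pattern matching:
    returns the text positions j such that some substring of `text`
    ending at j is within edit distance k of `pattern`."""
    m = len(pattern)
    prev = list(range(m + 1))      # DP column for the empty text prefix
    hits = []
    for j, t in enumerate(text, start=1):
        cur = [0] * (m + 1)        # a match may start at any position
        for i in range(1, m + 1):
            cur[i] = min(prev[i - 1] + (pattern[i - 1] != t),  # substitute
                         prev[i] + 1,                          # delete
                         cur[i - 1] + 1)                       # insert
        if cur[m] <= k:
            hits.append(j)
        prev = cur
    return hits

# e.g. kedit_occurrences("abc", "xabdc", 1) -> [4, 5]
```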
Sanjeev Khanna, Yu Chen and Zihan Tan
Abstract: We consider the design of sublinear space and query complexity algorithms for estimating the cost of a minimum spanning tree (MST) and the cost of a minimum traveling salesman (TSP) tour in a metric on $n$ points. We start by exploring this estimation task in the regime of $o(n)$ space, when the input is presented as a stream of all $n \choose 2$ entries of the metric in an arbitrary order (a metric stream). For any $\alpha \ge 2$, we show that both MST and TSP cost can be $\alpha$-approximated using $\tilde{O}(n/\alpha)$ space, and moreover, $\Omega(n/\alpha^2)$ space is necessary for this task. We further show that even if the streaming algorithm is allowed $p$ passes over a metric stream, it still requires $\tilde{\Omega}(\sqrt{n/\alpha p^2})$ space.
We next consider the well-studied semi-streaming regime. In this regime, it is straightforward to compute MST cost exactly even in the case where the input stream only contains the edges of a weighted graph that induce the underlying metric (a graph stream), and the main challenging problem is to estimate TSP cost to within a factor that is strictly better than $2$. We show that in graph streams, for any $\epsilon > 0$, any one-pass $(2-\epsilon)$-approximation of TSP cost requires $\Omega(\epsilon^2 n^2)$ space. On the other hand, we show that there is an $\tilde{O}(n)$ space two-pass algorithm that approximates the TSP cost to within a factor of 1.96.
Finally, we consider the query complexity of estimating metric TSP cost to within a factor that is strictly better than $2$ when the algorithm is given access to an $n \times n$ matrix that specifies pairwise distances between $n$ points. The problem of MST cost estimation in this model is well-understood and a $(1+\epsilon)$-approximation is achievable by $\tilde{O}(n/\epsilon^{O(1)})$ queries. However, for estimating TSP cost, it is known that an analogous result requires $\Omega(n^2)$ queries even for $(1,2)$-TSP, and for general metrics, no algorithm that achieves a better than $2$-approximation with $o(n^2)$ queries is known. We make progress on this task by designing an algorithm that performs $\tilde{O}(n^{1.5})$ distance queries and achieves a strictly better than $2$-approximation when either the metric is known to contain a spanning tree supported on weight-$1$ edges or the algorithm is given access to a minimum spanning tree of the graph.
Prior to our work, such results were only known for the special cases of graphic TSP and $(1,2)$-TSP.
In terms of techniques, our algorithms for metric TSP cost estimation in both streaming and query settings rely on estimating the {\em cover advantage} which intuitively measures the cost needed to turn an MST into an Eulerian graph. One of our main algorithmic contributions is to show that this quantity can be meaningfully estimated by a sublinear number of queries in the query model. On one hand, the fact that a metric stream reveals pairwise distances for all pairs of vertices provably helps algorithmically. On the other hand, it also seems to render useless techniques for proving space lower bounds via reductions from well-known hard communication problems.
Our main technical contribution in lower bounds is to identify and characterize the communication complexity of new problems that can serve as canonical starting points for proving metric stream lower bounds.
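As background for the cover advantage: the classic 2-approximation for metric TSP doubles every MST edge to obtain an Eulerian multigraph and then shortcuts an Euler tour, and the paper's estimators quantify how much of this doubling cost can be saved. Below is a minimal sketch of that baseline (our illustration; `dist` is assumed to be a full symmetric metric matrix).

```python
def tsp_2approx(dist):
    """Classic MST-doubling 2-approximation for metric TSP: doubling
    every MST edge makes the graph Eulerian, and shortcutting an Euler
    tour (here: a preorder walk of the MST) gives a tour of cost at
    most 2 * MST <= 2 * OPT."""
    n = len(dist)
    # Prim's algorithm for the MST rooted at vertex 0.
    in_tree, parent = {0}, {}
    best = {v: (dist[0][v], 0) for v in range(1, n)}
    while len(in_tree) < n:
        v = min(best, key=lambda u: best[u][0])
        parent[v] = best[v][1]
        in_tree.add(v)
        del best[v]
        for u in best:
            if dist[v][u] < best[u][0]:
                best[u] = (dist[v][u], v)
    children = {v: [] for v in range(n)}
    for v, p in parent.items():
        children[p].append(v)
    # Preorder walk of the MST = shortcut Euler tour.
    order, stack = [], [0]
    while stack:
        v = stack.pop()
        order.append(v)
        stack.extend(reversed(children[v]))
    return sum(dist[order[i]][order[(i + 1) % n]] for i in range(n))
```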
Rajarshi Bhattacharjee, Gregory Dexter, Petros Drineas, Cameron Musco and Archan Ray
Abstract: We study the problem of approximating the eigenspectrum of a symmetric matrix $\mathbf A \in \mathbb{R}^{n \times n}$ with bounded entries (i.e., $\|\mathbf A\|_{\infty} \leq 1$). We present a simple sublinear time algorithm that approximates all eigenvalues of $\mathbf{A}$ up to additive error $\pm \epsilon n$ using those of a randomly sampled $\tilde {O}\left (\frac{\log^3 n}{\epsilon^3}\right ) \times \tilde O\left (\frac{\log^3 n}{\epsilon^3}\right )$ principal submatrix. Our result can be viewed as a concentration bound on the complete eigenspectrum of a random submatrix, significantly extending known bounds on just the singular values (the magnitudes of the eigenvalues). We give improved error bounds of $\pm \epsilon \sqrt{nnz(\mathbf{A})}$ and $\pm \epsilon \|\mathbf A\|_F$ when the rows of $\mathbf A$ can be sampled with probabilities proportional to their sparsities or their squared $\ell_2$ norms respectively. Here $nnz(\mathbf{A})$ is the number of non-zero entries in $\mathbf{A}$ and $\|\mathbf A\|_F$ is its Frobenius norm. Even for the strictly easier problems of approximating the singular values or testing the existence of large negative eigenvalues (Bakshi, Chepurko, and Jayaram, FOCS '20), our results are the first that take advantage of non-uniform sampling to give improved error bounds. From a technical perspective, our results require several new eigenvalue concentration and perturbation bounds for matrices with bounded entries. Our non-uniform sampling bounds require a new algorithmic approach, which judiciously zeroes out entries of a randomly sampled submatrix to reduce variance, before computing the eigenvalues of that submatrix as estimates for those of $\mathbf A$. We complement our theoretical results with numerical simulations, which demonstrate the effectiveness of our algorithms in practice.
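The uniform-sampling estimator described above is simple enough to state in a few lines. The sketch below (ours) samples an $s \times s$ principal submatrix and rescales its eigenvalues by $n/s$, with the submatrix size $s$ left to the caller; the paper proves $s = \tilde{O}(\log^3 n/\epsilon^3)$ suffices for $\pm \epsilon n$ error.

```python
import numpy as np

def estimate_eigenvalues(A, s, rng=np.random.default_rng(0)):
    """Sample a random s x s principal submatrix of the symmetric
    bounded-entry matrix A and rescale its eigenvalues by n/s.  The s
    rescaled values track the outlying eigenvalues of A; the remaining
    eigenvalues of A lie within the additive error window around 0."""
    n = A.shape[0]
    idx = rng.choice(n, size=s, replace=False)
    sub = A[np.ix_(idx, idx)]
    return (n / s) * np.linalg.eigvalsh(sub)
```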
George Kenison, Joris Nieuwveld, Joël Ouaknine, and James Worrell
Abstract: It is a longstanding open problem whether there is an algorithm to decide the Positivity Problem for linear recurrence sequences (LRS) over the integers, namely, whether all terms of a given such sequence are non-negative. Decidability is known for LRS of order $5$ or less, i.e., for those sequences in which every new term depends linearly on the previous five (or fewer) terms. For \emph{simple} LRS (i.e., those whose characteristic polynomial has no repeated roots), decidability of Positivity is known up to order $9$.
In this paper, we focus on the important subclass of reversible LRS, i.e., those integer LRS $\langle u_n \rangle_{n=0}^\infty$ whose bi-infinite completion $\langle u_n \rangle_{n=-\infty}^\infty$ also takes exclusively integer values; a typical example is the classical Fibonacci (bi-)sequence $\langle \dots, 5, -3, 2, -1, 1, 0, 1, 1, 2, 3, 5, \ldots \rangle$. Our main results are that Positivity is decidable for reversible LRS of order $11$ or less, and for simple reversible LRS of order $17$ or less.
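One direction of Positivity is easy to semi-decide by brute force: enumerate terms until a negative one appears. The sketch below (our illustration, with the recurrence convention $u_n = c_0 u_{n-1} + \dots + c_{d-1} u_{n-d}$) does exactly that; certifying that no negative term ever appears is the hard direction addressed by the decidability results above.

```python
def first_negative_term(coeffs, init, max_terms=10**6):
    """Enumerate an integer LRS u_n = coeffs[0]*u_{n-1} + ... +
    coeffs[d-1]*u_{n-d} with initial terms `init` (length d) and return
    the index of the first negative term, or None if none shows up
    within max_terms terms.  Example: coeffs=[1, 1], init=[0, 1] gives
    the Fibonacci sequence."""
    d = len(coeffs)
    assert len(init) == d
    for n, v in enumerate(init):
        if v < 0:
            return n
    window = list(init)  # the last d terms, exact integer arithmetic
    for n in range(d, max_terms):
        nxt = sum(c * window[-1 - i] for i, c in enumerate(coeffs))
        if nxt < 0:
            return n
        window = window[1:] + [nxt]
    return None
```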
Bartosz Bednarczyk, Ian Pratt-Hartmann, and Daumantas Kojelis
Abstract: We define the adjacent fragment AF of first-order logic, obtained by restricting the sequences of variables occurring as arguments in atomic formulas. The adjacent fragment generalizes (after a routine renaming) two-variable logic as well as the fluted fragment. We show that the adjacent fragment has the finite model property, and that its satisfiability problem is no harder than that for the fluted fragment, and hence Tower-complete. We further show that any relaxation of the adjacency condition on the allowed order of variables in argument sequences yields a logic whose satisfiability and finite satisfiability problems are undecidable. Finally, we study the effect of the adjacency requirement on the well-known guarded fragment GF of first-order logic. We show that the satisfiability problem for the guarded adjacent fragment (GA) remains TwoExpTime-hard, thus strengthening the known lower bound for GF, but that, in contrast to GF, the fragment GA has the Craig Interpolation Property.
Michael Benedikt, Dmitry Chistikov, and Alessio Mansutti
Abstract: We investigate expansions of Presburger arithmetic (PA), i.e., the theory of the integers with addition and order, with additional structure related to exponentiation: either a function that takes a number to the power of 2, or a predicate P for the powers of 2. The latter theory, denoted PA(Pow), was introduced by Büchi as a first attempt at characterising the sets of tuples of numbers that can be expressed using finite automata; Büchi's method does not give an elementary upper bound, and the complexity of this theory has been open. The former theory, denoted PA(Exp), was shown decidable by Semenov; while the decision procedure for this theory differs radically from the automata-based method proposed by Büchi, it is also non-elementary. And in fact, the theory with the power function has a non-elementary lower bound. In this paper, we show that while Semenov's and Büchi's approaches yield non-elementary blow-ups for PA(Pow), the theory is in fact decidable in triply exponential time, similar to the best known quantifier-elimination algorithm for PA. We also provide an NEXPTIME upper bound for the existential fragment of PA(Exp), a step towards a finer-grained analysis of its complexity. Both these results are established by analysing a single parameterized satisfiability algorithm for PA(Exp), which can be specialized to either the setting of PA(Pow) or the existential theory of PA(Exp). Besides the new upper bounds for the existential theory of PA(Exp) and PA(Pow), we believe our algorithm provides new intuition for the decidability of these theories, and for the features that lead to non-elementary blow-ups.
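For illustration (our example, not drawn from the paper): the PA(Pow) sentence $\forall x\,\forall y\,\bigl((P(x)\wedge P(y)\wedge x<y)\rightarrow x+x\le y\bigr)$ says that distinct powers of 2 differ by at least a factor of 2, using only addition, order, and the predicate $P$; in PA(Exp) the same fact can be written with the power function as $\forall n\,\forall m\,(n<m\rightarrow 2^n+2^n\le 2^m)$.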
Markus Lohrey and Andreas Rosowski
Abstract: We prove that it is $\Pi_2^{\mathsf{P}}$-complete to verify whether the diameter of a given permutation group $G = \langle A \rangle$ is bounded by a unary-encoded number $k$. This solves an open problem from a paper of Even and Goldreich, where the problem was shown to be NP-hard. Verifying whether the diameter is exactly $k$ is complete for the class consisting of all intersections of a $\Pi_2^{\mathsf{P}}$-language and a $\Sigma_2^{\mathsf{P}}$-language. A similar result is shown for the length of a given permutation $\pi$, which is the minimal $k$ such that $\pi$ can be written as a product of at most $k$ generators from $A$. Even and Goldreich proved that it is $\mathsf{NP}$-complete to verify whether the length of a given $\pi$ is at most $k$ (with $k$ given in unary encoding). We show that it is DP-complete to verify whether the length is exactly $k$.
Finally, we deduce from our result on the diameter that it is $\Pi_2^{\mathsf{P}}$-complete to check whether a given finite automaton with transitions labelled by permutations from $S_n$ produces all permutations from $S_n$.
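A brute-force reference for the length notion above (our sketch, exponential in general, consistent with the hardness results): breadth-first search over the Cayley graph of $\langle A \rangle$.

```python
from collections import deque

def perm_length(generators, target):
    """Least k such that `target` is a product of at most k generators
    (permutations as tuples p, with p[i] the image of i), or None if
    target is not generated.  The diameter of <A> is the maximum of
    this quantity over all group elements."""
    n = len(target)
    identity = tuple(range(n))
    dist = {identity: 0}
    queue = deque([identity])
    while queue:
        p = queue.popleft()
        if p == target:
            return dist[p]
        for g in generators:
            q = tuple(p[g[i]] for i in range(n))  # compose p after g
            if q not in dist:
                dist[q] = dist[p] + 1
                queue.append(q)
    return None
```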
Dylan Hyatt-Denesik, Afrouz Ameli and Laura Sanita
Abstract: This paper addresses a graph optimization problem, called the Witness Tree problem, which seeks a spanning tree of a graph minimizing a certain non-linear objective function. This problem is of interest because it plays a crucial role in the analysis of the best approximation algorithms for two fundamental network design problems: Steiner Tree and Node-Tree Augmentation. We show how a wiser choice of witness trees leads to an improved approximation for Node-Tree Augmentation, and for Steiner Tree in special classes of graphs.
Mohit Garg, Felix Hommelsheim and Nicole Megow
Abstract: We consider the matching augmentation problem (MAP), where a matching of a graph needs to be extended into a $2$-edge-connected spanning subgraph by adding the minimum number of edges to it. We present a polynomial-time algorithm with an approximation ratio of $13/8 = 1.625$ improving upon an earlier $5/3$-approximation. The improvement builds on a new $\alpha$-approximation preserving reduction for any $\alpha\geq 3/2$ from arbitrary MAP instances to well-structured instances that do not contain certain forbidden structures like parallel edges, small separators, and contractible subgraphs. We further introduce, as key ingredients, the technique of repeated simultaneous contractions and provide improved lower bounds for instances that cannot be contracted.
Ishan Bansal, Joe Cheriyan, Logan Grout and Sharat Ibrahimpur
Abstract: We address long-standing open questions raised by Williamson, Goemans, Vazirani and Mihail pertaining to the design of approximation algorithms for problems in network design via the primal-dual method (Combinatorica 15(3):435-454, 1995). Williamson et al. prove an approximation guarantee of two for connectivity augmentation problems where the connectivity requirements can be specified by so-called uncrossable functions. They state: ``Extending our algorithm to handle non-uncrossable functions remains a challenging open problem. The key feature of uncrossable functions is that there exists an optimal dual solution which is laminar. This property characterizes uncrossable functions \dots\ A larger open issue is to explore further the power of the primal-dual approach for obtaining approximation algorithms for other combinatorial optimization problems.''
Our main result proves a 16-approximation guarantee via the primal-dual method for a class of functions that generalizes the notion of an uncrossable function. We mention that the support of every optimal dual solution could be non-laminar for instances that can be handled by our methods.
We present a few applications of our main result to problems in the area of network design.
(a) A 16-approximation algorithm for augmenting the family of near-minimum cuts of a graph $G$.
The previous best approximation guarantee was $O(\log |V(G)|)$.
(b) A 20-approximation algorithm for the model of $(p,2)$-Flexible Graph Connectivity.
The previous best approximation guarantee was $O(\log |V(G)|)$, where $G$ denotes the input graph.
(c) A $16 \cdot {\lceil k/u_{min} \rceil}$-approximation algorithm for the Cap-$k$-ECSS problem which is as follows:
Given an undirected graph $G = (V,E)$ with edge costs $c$ and edge capacities $u$, find a minimum cost subset of the edges $F\subseteq E$ such that the capacity across any cut in $(V,F)$ is at least $k$;
$u_{min}$ (respectively, $u_{max}$) denote the minimum (respectively, maximum) capacity of an edge in $E$, and w.l.o.g.\ $u_{max} \leq k$.
The previous best approximation guarantee was $\min( O(\log{n}), k, 2u_{max} )$.
Zachary Friggstad and Ramin Mousavi
Abstract: We present an $O(\log k)$-approximation for both the edge-weighted and node-weighted versions of Directed Steiner Tree (DST) in planar graphs, where $k$ is the number of terminals. We extend our approach to Multi-Rooted Directed Steiner Tree (MDST)\footnote{In general graphs MDST and DST are easily seen to be equivalent, but in planar graphs this is not necessarily the case.}, for which we get an $O(R+\log k)$-approximation in planar graphs, where $R$ is the number of roots.
Claire Mathieu and Hang Zhou
Abstract: In the unsplittable capacitated vehicle routing problem (UCVRP) on trees, we are given a rooted tree with edge weights and a subset of vertices of the tree called terminals. Each terminal is associated with a positive demand between 0 and 1. The goal is to find a minimum length collection of tours starting and ending at the root of the tree such that the demand of each terminal is covered by a single tour (i.e., the demand cannot be split), and the total demand of the terminals in each tour does not exceed the capacity of 1.
For the special case when all terminals have equal demands, a long line of research culminated in a quasi-polynomial time approximation scheme [Jayaprakash and Salavatipour, SODA 2022] and a polynomial time approximation scheme [Mathieu and Zhou, TALG 2022].
In this work, we study the general case when the terminals have arbitrary demands. Our main contribution is a polynomial time $(1.5+\epsilon)$-approximation algorithm for the UCVRP on trees. This is the first improvement upon the 2-approximation algorithm devised more than 30 years ago. Our approximation ratio is essentially best possible, since it is NP-hard to approximate the UCVRP on trees to within a factor better than 1.5.
Timothy M. Chan, Qizheng He and Yuancheng Yu
Abstract: We study the time complexity of the discrete $k$-center problem and related (exact) geometric set cover problems when $k$ or the size of the cover is small. We obtain a plethora of new results.
Takehiro Ito, Yuni Iwamasa, Naonori Kakimura, Yusuke Kobayashi, Shunichi Maezawa, Yuta Nozaki, Yoshio Okamoto and Kenta Ozeki
Abstract: In this paper, we consider a transformation of $k$ disjoint paths in a graph. For a graph and a pair of $k$ disjoint paths $\mathcal{P}$ and $\mathcal{Q}$ connecting the same set of terminal pairs, we aim to determine whether $\mathcal{P}$ can be transformed to $\mathcal{Q}$ by repeatedly replacing one path with another path so that the intermediates are also $k$ disjoint paths. The problem is called \textsc{Disjoint Paths Reconfiguration}. We first show that \textsc{Disjoint Paths Reconfiguration} is PSPACE-complete even when $k=2$. On the other hand, we prove that, when the graph is embedded on a plane and all paths in $\mathcal{P}$ and $\mathcal{Q}$ connect the boundaries of two faces, \textsc{Disjoint Paths Reconfiguration} can be solved in polynomial time. The algorithm is based on a topological characterization for rerouting curves on a plane using the algebraic intersection number. We also consider a transformation of disjoint $s$-$t$ paths as a variant. We show that the disjoint $s$-$t$ paths reconfiguration problem in planar graphs can be solved in polynomial time, while the problem is PSPACE-complete in general.
Siu-Wing Cheng and Haoqiang Huang
Abstract: We propose $\kappa$-approximate nearest neighbor (ANN) data structures for $n$ polygonal curves with at most $m$ vertices each under the Fr\'{e}chet distance in $\mathbb{R}^d$, where $\kappa \in \{1+\epsilon,3+\epsilon\}$ and $d \geq 2$. We assume that every query curve has at most $k$ vertices, $k \ll m$, and $k$ is given for preprocessing. The query times are $\tilde{O}(k(mn)^{0.5+\epsilon}/\epsilon^{O(d)} + k(d/\epsilon)^{O(dk)})$ for $(1+\epsilon)$-ANN and $\tilde{O}(k(mn)^{0.5+\epsilon}/\epsilon^{O(d)})$ for $(3+\epsilon)$-ANN. The space and expected preprocessing time are $\tilde{O}(k(mnd^d/\epsilon^d)^{O(k+1/\epsilon^2)})$ in both cases. In two and three dimensions, we improve the query times to $\tilde{O}(k/\epsilon^{O(k)})$ for $(1+\epsilon)$-ANN and $\tilde{O}(k)$ for $(3+\epsilon)$-ANN. The space and expected preprocessing time improve to $\tilde{O}(k(mn/\epsilon)^{O(k)})$ in both cases. For ease of presentation, we suppress factors in our bounds that depend purely on $d$. The hidden polylog factors in the big-$\tilde{O}$ notation have powers dependent on $d$.
Yi-Jun Chang
Abstract: An orthogonal drawing is an embedding of a plane graph into a grid. In a seminal work of Tamassia (SIAM Journal on Computing, 1987), a simple combinatorial characterization of angle assignments that can be realized as bend-free orthogonal drawings was established, thereby allowing an orthogonal drawing to be described combinatorially by listing the angles of all corners. The characterization reduces the need to consider certain geometric aspects, such as edge lengths and vertex coordinates, and simplifies the task of graph drawing algorithm design.
Barth, Niedermann, Rutter, and Wolf (SoCG, 2017) established an analogous combinatorial characterization for ortho-radial drawings, which are a generalization of orthogonal drawings to cylindrical grids. The proof of the characterization is existential and does not result in an efficient algorithm. Niedermann, Rutter, and Wolf (SoCG, 2019) later addressed this issue by developing quadratic-time algorithms for both testing the realizability of a given angle assignment as an ortho-radial drawing without bends and constructing such a drawing.
In this paper, we take it a step further by improving the time complexity of these tasks to near-linear time. We prove a new characterization for ortho-radial drawings based on the concept of a good sequence, which enables us to construct an ortho-radial drawing through a simple greedy algorithm.
Magdalen Dobson and Guy Blelloch
Abstract: We study the connections between sorting and the binary search tree (BST) model, with an aim towards showing that the fields are connected more deeply than is currently appreciated. While any BST can be used to sort by inserting the keys one-by-one, this is a very limited relationship and importantly says nothing about parallel sorting. We show what we believe to be the first formal relationship between the BST model and sorting. Namely, we show that a large class of sorting algorithms, which includes mergesort, quicksort, insertion sort, and almost every instance-optimal sorting algorithm, are equivalent in cost to offline BST algorithms. The main tool we use to bridge the gap between these areas is the geometric interpretation of the BST model introduced by Demaine et al., which finds an equivalence between searches on a BST and point sets in the plane satisfying a certain property. To give an example of the utility of our approach, we introduce the log-interleave bound, a measure of the information-theoretic complexity of a permutation $\pi$, which is within a $\lg \lg n$ multiplicative factor of a known lower bound in the BST model; we also devise a parallel sorting algorithm with polylogarithmic span that sorts a permutation $\pi$ using comparisons proportional to its log-interleave bound. Our aforementioned result on sorting and offline BST algorithms can be used to show existence of an offline BST algorithm whose cost is within a constant factor of the log-interleave bound of any permutation $\pi$.
David Harris and Vladimir Kolmogorov
Abstract: A central problem in computational statistics is to convert a procedure for \emph{sampling} combinatorial objects into a procedure for \emph{counting} those objects, and vice versa. We will consider sampling problems which come from \emph{Gibbs distributions}, which are families of probability distributions over a discrete space $\Omega$ with probability mass function of the form $\mu^\Omega_\beta(\omega) \propto e^{\beta H(\omega)}$ for $\beta$ in an interval $[\beta_{min}, \beta_{max}]$ and $H( \omega ) \in \{0 \} \cup [1, n]$.
The \emph{partition function} is the normalization factor $Z(\beta)=\sum_{\omega \in\Omega}e^{\beta H(\omega)}$, and the \emph{log partition ratio} is defined as $q = \log \frac{Z(\beta_{max})}{Z(\beta_{min})}$.
We develop a number of algorithms to estimate the counts $c_x$ using roughly $\tilde O( \frac{q}{\epsilon^2})$ samples for general Gibbs distributions and $\tilde O( \frac{n^2}{\epsilon^2} )$ samples for integer-valued distributions (ignoring some second-order terms and parameters). We show this is optimal up to logarithmic factors. We illustrate with improved algorithms for counting connected subgraphs and perfect matchings in a graph.
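The sampling-to-counting direction rests on a telescoping identity: for $\beta < \beta'$, $Z(\beta')/Z(\beta) = \mathbb{E}_{\mu_\beta}[e^{(\beta'-\beta)H(\omega)}]$. The sketch below (our illustration, with a fixed schedule and fixed shot count rather than the paper's adaptive choices; `sample_H` is an assumed sampling oracle) estimates $q$ along a schedule of inverse temperatures.

```python
import math

def log_partition_ratio(betas, sample_H, shots=1000):
    """Estimate q = log(Z(betas[-1]) / Z(betas[0])) by summing
    log-estimates of Z(b')/Z(b) = E_{mu_b}[exp((b'-b) * H(omega))]
    along the schedule betas[0] < ... < betas[-1].  sample_H(beta)
    must return H(omega) for omega drawn (approximately) from mu_beta,
    e.g. via an MCMC sampler."""
    q = 0.0
    for b, b_next in zip(betas, betas[1:]):
        est = sum(math.exp((b_next - b) * sample_H(b))
                  for _ in range(shots)) / shots
        q += math.log(est)
    return q
```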
Charilaos Efthymiou and Weiming Feng
Abstract: We study the single-site Glauber dynamics for the Hard-core model with fugacity $\lambda$ on the random graph $G(n, d/n)$. We show that for typical instances of the random graph $G(n,d/n)$ and for fugacity $\lambda < \frac{d^d}{(d-1)^{d+1}}$, the mixing time of Glauber dynamics is $n^{1 + O(1/\log \log n)}$.
Our result improves on the recent elegant algorithm of [Bezáková, Galanis, Goldberg and Štefankovič; ICALP 2022]. The algorithm there is an MCMC-based sampling algorithm, but it is not the Glauber dynamics. Our algorithm here is simpler, as we use the classic Glauber dynamics. Furthermore, the bounds on the mixing time we prove are smaller than those in the Bezáková et al. paper, hence our algorithm is also faster.
The main challenge in our proof is handling vertices of unbounded degree. We provide stronger results with regard to spectral independence via branching values and show that our Gibbs distributions satisfy approximate tensorisation of entropy. We conjecture that the bounds we obtain here are optimal for $G(n,d/n)$.
As a corollary of our analysis of the Hard-core model, we also obtain bounds on the mixing time of the Glauber dynamics for the Monomer-dimer model on $G(n,d/n)$. The bounds we get for this model are slightly better than those for the Hard-core model.
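The dynamics analysed above is easy to state. The following sketch (ours) runs single-site Glauber updates for the hard-core model; the occupied set is always an independent set, and `steps` must be at least the mixing time for the output to be a near-sample.

```python
import random

def hardcore_glauber(adj, lam, steps, rng=random.Random(0)):
    """Single-site Glauber dynamics for the hard-core model with
    fugacity `lam`: each step picks a uniform vertex and resamples its
    occupancy conditional on its neighbourhood.  `adj` maps each
    vertex to a list of its neighbours."""
    occupied = set()
    vertices = list(adj)
    for _ in range(steps):
        v = rng.choice(vertices)
        occupied.discard(v)
        if all(u not in occupied for u in adj[v]):
            # conditional law: occupied with probability lam / (1 + lam)
            if rng.random() < lam / (1 + lam):
                occupied.add(v)
    return occupied
```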
Konstantinos Zampetakis and Charilaos Efthymiou
Abstract: Motivated by the theory of spin glasses in physics, we study the so-called reconstruction problem for the related distributions on the tree, and on the sparse random graph $G(n,d/n)$.
Both cases reduce naturally to studying broadcasting models on the tree, where each edge has its own broadcasting matrix drawn independently from a predefined distribution. In this context, we study the effect of the configuration at the root on that of the vertices at distance $h$, as $h\to\infty$.
We establish the reconstruction threshold for the cases where the broadcasting matrices give rise to symmetric, 2-spin Gibbs distributions. This threshold appears to be a natural extension of the well-known Kesten-Stigum bound that arises in the classic version of the reconstruction problem.
Our results imply, as a special case, the reconstruction threshold for the well-known Edwards-Anderson model of spin glasses on the tree.
Also, we extend our analysis to the setting of the Galton-Watson tree, and the random graph $G(n,d/n)$, where we establish the corresponding thresholds. Interestingly, for the Edwards-Anderson model on the random graph, we show that the replica symmetry breaking phase transition, established in [Guerra and Toninelli:2004], coincides with the reconstruction threshold.
Compared to the classical Gibbs distributions, spin glasses have many unique features. In that respect, their study calls for new ideas; for instance, here we introduce novel estimators for the reconstruction problem. Furthermore, the main technical challenge in the analysis is the presence of (too) many levels of randomness. We manage to circumvent this problem by utilising recent tools coming from the analysis of Markov chains.
David Eppstein and Daniel Frishberg
Abstract: We prove that the well-studied triangulation flip walk on a convex point set mixes in time $O(n^3 \log^3 n)$, the first progress since McShine and Tetali's $O(n^5 \log n)$ bound in 1997. In the process we give lower and upper bounds of respectively $\Omega(1/(\sqrt{n} \log n))$ and $O(1/\sqrt{n})$ (asymptotically tight up to an $O(\log n)$ factor) for the expansion of the associahedron graph $K_n$. The upper bound recovers Molloy, Reed, and Steiger's $\Omega(n^{3/2})$ bound on the mixing time of the walk. To obtain these results, we introduce a framework consisting of a set of sufficient conditions under which a given Markov chain mixes rapidly. This framework is a purely combinatorial analogue that in some circumstances gives better results than the projection-restriction technique of Jerrum, Son, Tetali, and Vigoda. In particular, in addition to the result for triangulations, we show quasipolynomial mixing for the $k$-angulation flip walk on a convex point set, for fixed $k \geq 4$.
Thomas Sauerwald, He Sun and Danny Vagnozzi
Abstract: A closed random walk of length $\ell$ on an undirected and connected graph $G=(V,E)$ is a random walk that returns to the start vertex at step $\ell$, and its properties have been very recently related to problems in different mathematical fields, e.g., geometry and combinatorics (Jiang et al., Annals of Mathematics '21) and spectral graph theory (McKenzie et al., STOC '21). For instance, in the context of analyzing the eigenvalue multiplicity of graph matrices, McKenzie et al. show that, with high probability, the support of a closed random walk of length $\ell\geq 1$ is $\Omega(\ell^{1/5})$ on any bounded-degree graph, and leave as an open problem whether a stronger bound of $\Omega(\ell^{1/2})$ holds for any regular graph.
First, we show that the support of a closed random walk of length $\ell$ is at least $\Omega(\ell^{1/2} / \sqrt{\log n})$ for any regular or bounded-degree graph on $n$ vertices. Secondly, we prove for every $\ell \geq 1$ the existence of a family of bounded-degree graphs, together with a start vertex, such that the support is bounded by $O( \ell^{1/2}/\sqrt{\log n})$. Besides addressing the open problem of McKenzie et al., these two results also establish a subtle separation between closed random walks and open random walks, for which the support on any regular (or bounded-degree) graph is well-known to be $\Omega(\ell^{1/2})$ for all $\ell \geq 1$. For irregular graphs, we prove that even if the start vertex is chosen uniformly at random, the support of a closed random walk may still be $O(\log \ell)$. This rules out a general polynomial lower bound in $\ell$ for all graphs, as suggested in McKenzie et al. Finally, we also apply our results on random walks to obtain new bounds on the multiplicity of the second largest eigenvalue of the adjacency matrices of graphs.
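A simple way to experiment with the support of closed random walks (our illustration, not used in the proofs): rejection-sample walks of length $\ell$ until one returns to its start vertex.

```python
import random

def closed_walk_support(adj, start, length, tries=10**5,
                        rng=random.Random(0)):
    """Run simple random walks of the given length from `start` and
    report the support size (number of distinct vertices) of the first
    walk that returns to `start`.  Conditioning on returning can be
    exponentially costly on some graphs; this is only an experiment
    harness.  `adj` maps each vertex to a list of its neighbours."""
    for _ in range(tries):
        v, support = start, {start}
        for _ in range(length):
            v = rng.choice(adj[v])
            support.add(v)
        if v == start:
            return len(support)
    return None  # no closed walk sampled within the budget
```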
Prashanth Amireddy, Ankit Garg, Neeraj Kayal, Chandan Saha and Bhargav Thankey
Abstract: A recent breakthrough work of Limaye, Srinivasan, and Tavenas [FOCS 2021] proved superpolynomial lower bounds for low-depth arithmetic circuits via a “hardness escalation” approach: they proved lower bounds for low-depth set-multilinear circuits and then lifted the bounds to low-depth general circuits. In this work, we prove superpolynomial lower bounds for low-depth circuits by bypassing the hardness escalation, i.e., the set-multilinearization, step. As set-multilinearization comes with an exponential blow-up in circuit size, our direct proof opens up the possibility of proving an exponential lower bound for low-depth homogeneous circuits by evading a crucial bottleneck. Our bounds hold for the iterated matrix multiplication and the Nisan-Wigderson design polynomials. We also define a subclass of unrestricted depth homogeneous formulas which we call unique parse tree (UPT) formulas, and prove superpolynomial lower bounds for these. This significantly generalizes the superpolynomial lower bounds for regular formulas [Kayal-Saha-Saptharishi, STOC 2014], [Fournier-Limaye-Malod-Srinivasan, STOC 2014].
Lijie Chen, Xin Lyu, Avishay Tal and Hongxun Wu
Abstract: We give the first pseudorandom generators with sub-linear seed length for the following variants of read-once branching programs (roBPs):
First, we show there is an explicit PRG of seed length $O(\log^2(n/\epsilon) \log n)$ that fools unbounded-width unordered permutation branching programs with a single accept state, where $n$ is the length of the program. Previously, [Lee-Pyne-Vadhan RANDOM 2022] gave a PRG with seed length $\Omega(n)$ for this class. For the ordered case, [Hoza-Pyne-Vadhan ITCS 2021] gave a PRG with seed length $\widetilde{O}(\log n \cdot \log(1/\epsilon))$.
Second, we show there is an explicit PRG fooling unbounded-width unordered regular branching programs with a single accept state with seed length $\widetilde{O}(\sqrt{n \log(n/\epsilon)})$. Previously, no non-trivial PRG (with seed length less than $n$) was known for this class (even in the ordered setting). For the ordered case, [Bogdanov-Hoza-Prakriya-Pyne CCC 2022] gave an HSG with seed length $\widetilde{O}(\log n \cdot \log(1/\epsilon))$.
Third, we show there is an explicit PRG fooling width-$w$ adaptive branching programs with seed length $O(\log n \cdot \log^2(nw/\epsilon))$. Here, the branching program can choose an input bit to read depending on its current state, while it is guaranteed that on any input $x \in \{0,1\}^n$, the branching program reads each input bit exactly once. Previously, no PRG with a non-trivial seed length was known for this class. We remark that there are some functions computable by constant-width adaptive branching programs but not by sub-exponential-width unordered branching programs.
In terms of techniques, we indeed show that the Forbes-Kelly PRG (with the right parameters) from [Forbes-Kelly FOCS 2018] already fools all variants of roBPs above. Our proof adds several new ideas to the original analysis of Forbes-Kelly, and we believe it further demonstrates the versatility of the Forbes-Kelly PRG.
Rotem Oshman and Tal Roth
Abstract: We consider a multiparty setting where $k$ parties have private inputs $X_1,\ldots,X_k \subseteq [n]$ and wish to compute the intersection $\bigcap_{\ell=1}^k X_{\ell}$ of their sets, using as little communication as possible. This task generalizes the well-known problem of set disjointness, where the parties are required only to determine whether the intersection is empty or not.
In the worst case, it is known that the communication complexity of finding the intersection is the same as that of solving set disjointness, regardless of the size of the intersection: the cost of both problems is $\Omega\left(n \log k + k\right)$ bits in the shared blackboard model, and $\Omega \left(n k\right)$ bits in the coordinator model.
In this work we consider a realistic setting where the parties' inputs are independent of one another, that is, the input is drawn from a product distribution. We show that this makes finding the intersection significantly easier than in the worst case: only $\tilde{\Theta}( n^{1-1/k} \left(H(S) + 1\right)^{1/k} + k)$ bits of communication are required, where $H(S)$ is the Shannon entropy of the intersection $S$. We also show that the parties do not need to know the underlying input distribution exactly: given in advance $O(n^{1/k})$ samples from the underlying distribution $\mu$, we can learn enough about $\mu$ to compute the intersection of an input drawn from $\mu$ using expected communication $\tilde{\Theta}( n^{1-1/k}\,\mathbb{E}[|S|]^{1/k} + k)$, where $|S|$ is the size of the intersection.
Amir Azarmehr and Soheil Behnezhad
Abstract: We study the robust communication complexity of maximum matching. Edges of an arbitrary $n$-vertex graph $G$ are randomly partitioned between Alice and Bob independently and uniformly. Alice has to send a single message to Bob such that Bob can find an (approximate) maximum matching of the whole graph $G$. We specifically study the best approximation ratio achievable via protocols where Alice communicates only $\widetilde{O}(n)$ bits to Bob.
There has been a growing interest in the robust communication model due to its connections to the random-order streaming model. An algorithm of Assadi and Behnezhad [ICALP'21] implies a $(2/3 + \epsilon_0 \approx 0.667)$-approximation for a small constant $0 < \epsilon_0 < 10^{-18}$, which remains the best-known approximation for general graphs. For bipartite graphs, Assadi and Behnezhad [RANDOM'21] improved the approximation to $0.716$, albeit with a computationally inefficient (i.e., exponential-time) protocol.
In this paper, we study a natural and efficient protocol implied by a random-order streaming algorithm of Bernstein [ICALP'20], which is based on edge-degree constrained subgraphs (EDCS) [Bernstein and Stein; ICALP'15]. The result of Bernstein immediately implies that this protocol achieves an (almost) $(2/3 \approx 0.666)$-approximation in the robust communication model. We present a new analysis, proving that it achieves a much better (almost) $(5/6 \approx 0.833)$-approximation. This significantly improves previous approximations both for general and bipartite graphs. We also prove that our analysis of Bernstein's protocol is tight.
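For concreteness, an EDCS can be obtained by local fixing (our simplified sketch under the standard parameterization of [Bernstein and Stein], with edges given as consistently oriented tuples): repeatedly delete edges of $H$ whose endpoint degrees sum above $\beta$ and insert edges of $G$ whose endpoint degrees sum below $\beta-1$; a potential argument shows this terminates.

```python
from collections import defaultdict

def edcs(edges, beta):
    """Compute an edge-degree constrained subgraph H of G = (V, edges):
    every edge (u, v) of H has deg_H(u) + deg_H(v) <= beta, and every
    edge of G outside H has deg_H(u) + deg_H(v) >= beta - 1.  Each fix
    improves a bounded potential, so the loop terminates."""
    H, deg = set(), defaultdict(int)

    def degsum(u, v):
        return deg[u] + deg[v]

    changed = True
    while changed:
        changed = False
        for (u, v) in list(H):
            if degsum(u, v) > beta:          # overfull edge: delete it
                H.remove((u, v))
                deg[u] -= 1; deg[v] -= 1
                changed = True
        for (u, v) in edges:
            if (u, v) not in H and degsum(u, v) < beta - 1:
                H.add((u, v))                # underfull non-edge: add it
                deg[u] += 1; deg[v] += 1
                changed = True
    return H
```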
Takehiro Ito, Naonori Kakimura, Naoyuki Kamiyama, Yusuke Kobayashi, Shun-Ichi Maezawa, Yuta Nozaki and Yoshio Okamoto
Abstract: We prove that the computation of a combinatorial shortest path between two vertices of a graph associahedron, introduced by Carr and Devadoss, is NP-hard. This resolves an open problem raised by Cardinal. A graph associahedron is a generalization of the well-known associahedron. The associahedron is obtained as the graph associahedron of a path. It is a tantalizing and important open problem in theoretical computer science whether the computation of a combinatorial shortest path between two vertices of the associahedron can be done in polynomial time; this is identical to the computation of the flip distance between two triangulations of a convex polygon, and the rotation distance between two rooted binary trees. Our result shows that a certain generalized approach to tackling this open problem is not promising. As a corollary of our theorem, we prove that the computation of a combinatorial shortest path between two vertices of a polymatroid base polytope cannot be done in polynomial time unless P = NP. Since a combinatorial shortest path on the matroid base polytope can be computed in polynomial time, our result reveals an unexpected contrast between matroids and polymatroids.
Jan Dreier, Nikolas Mählmann, Sebastian Siebertz, and Szymon Toruńczyk
Abstract: Monadically stable and monadically NIP classes of structures were initially studied in the context of model theory and defined in logical terms. They have recently attracted attention in the area of structural graph theory, as they generalize notions such as nowhere denseness, bounded cliquewidth, and bounded twinwidth.
Our main result is, to the best of our knowledge, the first purely combinatorial characterization of monadically stable classes of graphs, in terms of a property dubbed flip-wideness. A class $C$ of graphs is flip-wide if for every fixed radius $r$, every sufficiently large set of vertices of a graph $G \in C$ contains a large subset of vertices with mutual distance larger than $r$, where the distance is measured in some graph $G'$ that can be obtained from $G$ by performing a bounded number of flips that swap edges and non-edges within a subset of vertices. Flip-wideness generalizes the notion of uniform quasi-wideness, which characterizes nowhere dense classes and had a key impact on the combinatorial and algorithmic treatment of nowhere dense classes. To obtain this result, we develop tools that also apply to the more general monadically NIP classes, based on the notion of indiscernible sequences from model theory. We show that in monadically stable and monadically NIP classes indiscernible sequences impose a strong combinatorial structure on their definable neighborhoods. All our proofs are constructive and yield efficient algorithms.
Jakub Gajarský, Nikolas Mählmann, Rose McCarty, Pierre Ohlmann, Michał Pilipczuk, Wojciech Przybyszewski, Sebastian Siebertz, Marek Sokołowski and Szymon Toruńczyk
Abstract: A class of graphs $C$ is monadically stable if for every unary expansion $\widehat{C}$ of $C$, one cannot encode, using first-order transductions, arbitrarily long linear orders in graphs from $\widehat{C}$. It is known that nowhere dense graph classes are monadically stable; these include classes of bounded maximum degree and classes that exclude a fixed topological minor. On the other hand, monadic stability is a property expressed in purely model-theoretic terms that is also suited for capturing structure in dense graphs.
In this work we provide a characterization of monadic stability in terms of the Flipper game: a game on a graph played by Flipper, who in each round can complement the edge relation between any pair of vertex subsets, and Connector, who in each round is forced to localize the game to a ball of bounded radius. This is an analog of the Splitter game, which characterizes nowhere dense classes of graphs (Grohe, Kreutzer, and Siebertz, J.~ACM~'17).
We give two different proofs of our main result. The first proof is based on tools borrowed from model theory, and it exposes an additional property of monadically stable graph classes that is close in spirit to definability of types. Also, as a byproduct, we show that monadic stability for graph classes coincides with monadic stability of existential formulas with two free variables, and we provide another combinatorial characterization of monadic stability via forbidden patterns. The second proof relies on the recently introduced notion of flip-wideness (Dreier, M\"ahlmann, Siebertz, and Toruńczyk, arXiv~2206.13765) and provides an efficient algorithm to compute Flipper's moves in a winning strategy.
Pierre Ohlmann, Michał Pilipczuk, Wojciech Przybyszewski and Szymon Toruńczyk
Abstract: We use model-theoretic tools originating from stability theory to derive a result we call the Finitary Substitute Lemma, which intuitively says the following. Suppose we work in a stable graph class $\mathcal{C}$, and using a first-order formula $\varphi$ with parameters we are able to define, in every graph $G\in \mathcal{C}$, a relation $R$ that satisfies some hereditary first-order assertion $\psi$. Then we are able to find a first-order formula $\varphi'$ that has the same property, but additionally is {\em{finitary}}: there is a finite bound $k\in \mathbb{N}$ such that in every graph $G\in \mathcal{C}$, different choices of parameters give only at most $k$ different relations $R$ that can be defined using $\varphi'$.
We use the Finitary Substitute Lemma to derive two corollaries about the existence of certain canonical decompositions in classes of well-structured graphs.
We prove that in the Splitter game, which characterizes nowhere dense graph classes, and in the Flipper game, which characterizes monadically stable graph classes, there is a winning strategy for Splitter, respectively Flipper, that can be defined in first-order logic from the game history. Thus, the strategy is canonical.
We show that for any fixed graph class $\mathcal{C}$ of bounded shrubdepth, there is an $O(n^2)$-time algorithm that given an $n$-vertex graph $G\in \mathcal{C}$, computes in an isomorphism-invariant way a structure $H$ of bounded treedepth in which $G$ can be interpreted. A corollary of this result is an $O(n^2)$-time isomorphism test and canonization algorithm for any fixed class of bounded~shrubdepth.
Samuel Braunfeld, Anuj Dawar, Ioannis Eleftheriadis and Aris Papadopoulos
Abstract: We prove that for any monotone class of finite relational structures, the first-order theory of the class is NIP in the sense of stability theory if, and only if, the collection of Gaifman graphs of structures in this class is nowhere dense. This generalises to relational structures a result previously known for graphs and answers an open question posed by Adler and Adler (2014). The result is established by the application of Ramsey-theoretic techniques and shows that the property of being NIP is highly robust for monotone classes. We also show that the model-checking problem for first-order logic is intractable on any class of monotone structures that is not (monadically) NIP. This is a contribution towards the conjecture of Bonnet et al.\ that the hereditary classes of structures admitting fixed-parameter tractable model-checking are precisely those that are monadically NIP.
Michael Lampis
Abstract: Courcelle’s celebrated theorem states that all MSO-expressible properties can be decided in linear time on graphs of bounded treewidth. Unfortunately, the hidden constant implied by this theorem is a tower of exponentials whose height increases with each quantifier alternation in the formula. More devastatingly, this cannot be improved, under standard assumptions, even if we consider the much more restricted problem of deciding FO-expressible properties on trees.
In this paper we revisit this well-studied topic and identify a natural special case where the dependence of Courcelle’s theorem can, in fact, be improved. Specifically, we show that all FO expressible properties can be decided with an elementary dependence on the input formula, if the input graph has bounded pathwidth (rather than treewidth). This is a rare example of treewidth and pathwidth having different complexity behaviors. Our result is also in sharp contrast with MSO logic on graphs of bounded pathwidth, where it is known that the dependence has to be non-elementary, under standard assumptions. Our work builds upon, and generalizes, a corresponding meta-theorem by Gajarský and Hliněný for the more restricted class of graphs of bounded tree-depth.
Manuel Cáceres
Best Student Paper Track A:
Abstract: A minimum chain cover (MCC) of a $k$-width directed acyclic graph (DAG) $G = (V, E)$ is a set of $k$ chains (paths in the transitive closure) of $G$ such that every vertex appears in at least one chain in the cover. The state-of-the-art solutions for MCC run in time $\tilde{O}(k(|V|+|E|))$ [M\"akinen et~al., TALG], $O(T_{MF}(|E|) + k|V|)$, $O(k^2|V| + |E|)$ [C\'aceres et~al., SODA 2022], $\tilde{O}(|V|^{3/2} + |E|)$ [Kogan and Parter, ICALP 2022] and $\tilde{O}(T_{MCF}(|E|) + \sqrt{k}|V|)$ [Kogan and Parter, SODA 2023], where $T_{MF}(|E|)$ and $T_{MCF}(|E|)$ are the running times for solving maximum flow (MF) and minimum-cost flow (MCF), respectively.
In this work we present an algorithm running in time $O(T_{MF}(|E|) + (|V|+|E|)\log{k})$. By considering the recent result for solving MF [Li et~al., FOCS 2022] our algorithm is the first running in almost linear time. Moreover, our techniques are deterministic and derive a deterministic near-linear time algorithm for MCC if the same is provided for MF. At the core of our solution we use a modified version of the mergeable dictionaries [Farach and Thorup, Algorithmica], [Iacono and \"Ozkan, ICALP 2010] data structure boosted with the SIZE-SPLIT operation and answering queries in amortized logarithmic time, which can be of independent interest.
Ruiwen Dong
Best Student Paper Track B:
Abstract: We consider semigroup algorithmic problems in the wreath product $\mathbb{Z} \wr \mathbb{Z}$.
Our paper focuses on two decision problems introduced by Choffrut and Karhum\"{a}ki (2005): the \emph{Identity Problem} (does a semigroup contain the neutral element?) and the \emph{Group Problem} (is a semigroup a group?) for finitely generated sub-semigroups of $\mathbb{Z} \wr \mathbb{Z}$.
We show that both problems are decidable.
Our result complements the undecidability of the \emph{Semigroup Membership Problem} (does a semigroup contain a given element?) in $\mathbb{Z} \wr \mathbb{Z}$ shown by Lohrey, Steinberg and Zetzsche (ICALP 2013), and contributes an important step towards solving semigroup algorithmic problems in general metabelian groups.
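To make the setting concrete, elements of $\mathbb{Z} \wr \mathbb{Z}$ can be represented as pairs $(f, t)$ of a finitely supported map $f:\mathbb{Z}\to\mathbb{Z}$ and a shift $t\in\mathbb{Z}$, multiplied by $(f,s)\cdot(g,t) = (f + \sigma^s g,\; s+t)$, where $\sigma^s$ shifts the support of $g$ by $s$. The Identity Problem then asks whether some nonempty product of given generators equals the neutral element $(\mathbf{0}, 0)$. A minimal sketch of the group operation (ours):

```python
def wreath_mul(a, b):
    """Multiply two elements of Z wr Z.  An element is (f, t) with f a
    dict holding a finitely supported map Z -> Z and t the shift:
    (f, s) * (g, t) = (f + shift_s(g), s + t), where shift_s moves the
    value of g at x to position x + s.  The neutral element is ({}, 0)."""
    f, s = a
    g, t = b
    h = dict(f)
    for x, v in g.items():
        h[x + s] = h.get(x + s, 0) + v
        if h[x + s] == 0:
            del h[x + s]  # keep the support finite and canonical
    return (h, s + t)
```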
Tsun-Ming Cheung, Hamed Hatami, Pooya Hatami and Kaave Hosseini
Best Paper Track A:
Abstract: In many practical learning problems, the learning task is tractable because it is only required to predict the labels of the data points with specific properties. For example, in large-margin classifiers, one seeks to classify only the data points bounded away from the decision boundary by a margin.
In a recent article, Alon, Hanneke, Holzman, and Moran (FOCS '21) introduced a unifying framework to study the learnability of classes of such partial concepts. One of the central questions studied in their work is whether the learnability of a partial concept class is always inherited from the learnability of some ``extension'' of it to a total concept class.
They showed that this is not the case for PAC learning but left the problem for the stronger notion of online learnability open.
We answer their question in the negative by constructing a class of partial concepts that is online learnable, but no extension of it to a class of total concepts is online learnable (or even PAC learnable).
Marvin Künnemann, Filip Mazowiecki, Lia Schütze, Henry Sinclair-Banks and Karol Węgrzycki
Best Paper Track B:
Abstract: Seminal results establish that the coverability problem for Vector Addition Systems with States (VASS) is in EXPSPACE (Rackoff, '78) and is EXPSPACE-hard already under unary encodings (Lipton, '76). More precisely, Rosier and Yen later utilise Rackoff's bounding technique to show that if coverability holds then there is a run of length at most $n^{2^{O(d \log d)}}$, where $d$ is the dimension and $n$ is the size of the given unary VASS. Earlier, Lipton showed that there exist instances of coverability in $d$-dimensional unary VASS that are only witnessed by runs of length at least $n^{2^{\Omega(d)}}$. The remaining gap was stated as an open problem in '82 by Mayr and Meyer and remained open until now.
Our first result closes this gap. We improve Rackoff's upper bound, removing the twice-exponentiated $\log d$ factor, thus matching Lipton's lower bound. This closes the corresponding gap for the exact space required to decide coverability. It also yields a deterministic $n^{2^{O(d)}}$-time algorithm for coverability. Our second result is a matching lower bound: there does not exist a deterministic $n^{2^{o(d)}}$-time algorithm, conditioned upon the Exponential Time Hypothesis.
When analysing coverability, a standard proof technique is to consider VASS with bounded counters. Bounded VASS make for an interesting and popular model due to strong connections with timed automata. Accordingly, we study a natural setting where the counter bound is linear in the size of the VASS. Here the trivial exhaustive search algorithm runs in $O(n^{d+1})$ time. We give evidence that this is near-optimal. We prove that in dimension one this trivial algorithm is conditionally optimal, by showing that $n^{2-o(1)}$ time is required under the $k$-cycle hypothesis. In general fixed dimension $d$, we show that $n^{d-2-o(1)}$ time is required under the 3-uniform hyperclique hypothesis.
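The trivial exhaustive search mentioned above is just breadth-first search over the bounded configuration space; a minimal sketch (ours), where a $B$-bounded $d$-dimensional VASS has at most $|Q| \cdot (B+1)^d$ configurations, giving the $O(n^{d+1})$ bound when $B$ is linear in $n$:

```python
from collections import deque

def bounded_vass_coverable(transitions, bound, init, target):
    """Coverability in a B-bounded VASS by exhaustive search.
    `transitions` is a list of (p, delta, q): from control state p,
    add the integer vector delta to the counters and move to q, valid
    only if all counters stay in [0, bound].  `init` and `target` are
    (state, counter-tuple) pairs; returns True iff some reachable
    configuration has the target state and counterwise-larger counters."""
    (q0, v0), (qt, vt) = init, target
    seen = {(q0, v0)}
    queue = deque(seen)
    while queue:
        q, v = queue.popleft()
        if q == qt and all(x >= y for x, y in zip(v, vt)):
            return True
        for p, delta, r in transitions:
            if p != q:
                continue
            w = tuple(x + dx for x, dx in zip(v, delta))
            if all(0 <= x <= bound for x in w) and (r, w) not in seen:
                seen.add((r, w))
                queue.append((r, w))
    return False
```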
Miguel Bosch Calvo, Fabrizio Grandoni and Afrouz Jabal Ameli
Best Paper Track A:
Abstract: The 2-Vertex-Connected Spanning Subgraph problem (2VCSS) is among the most basic NP-hard (Survivable) Network Design problems: we are given an (unweighted) undirected graph $G$. Our goal is to find a subgraph $S$ of $G$ with the minimum number of edges which is $2$-vertex-connected, namely $S$ remains connected after the deletion of an arbitrary node. 2VCSS is well-studied in terms of approximation algorithms, and the current best (polynomial-time) approximation factor is $10/7$ by Heeger and Vygen [SIDMA'17] (improving on earlier results by Khuller and Vishkin [STOC'92] and Garg, Vempala and Singla [SODA'93]).
Here we present an improved $4/3$-approximation. Our main technical ingredient is an approximation-preserving reduction to a conveniently structured subset of instances which are ``almost'' 3-vertex-connected. The latter reduction might be helpful in future work.
Yanlin Chen and Ronald de Wolf
Abstract: Lasso and Ridge are important minimization problems in machine learning and statistics. They are versions of linear regression with squared loss where the vector $\theta\in\mathbb{R}^d$ of coefficients is constrained in either $\ell_1$-norm (for Lasso) or in $\ell_2$-norm (for Ridge). We study the complexity of quantum algorithms for finding $\epsilon$-minimizers for these minimization problems. We show that for Lasso we can get a quadratic quantum speedup in terms of $d$ by speeding up the cost-per-iteration of the Frank-Wolfe algorithm, while for Ridge the best quantum algorithms are linear in $d$, as are the best classical algorithms. As a byproduct of our quantum lower bound for Lasso, we also prove the first classical lower bound for Lasso that is tight up to polylog-factors.
Xiantao Li and Chunhao Wang
Abstract: We present an efficient quantum algorithm for simulating the dynamics of Markovian open quantum systems. The performance of our algorithm is similar to the previous state-of-the-art quantum algorithm, i.e., it scales linearly in evolution time and poly-logarithmically in inverse precision. However, our algorithm is conceptually cleaner, and it only uses simple quantum primitives without compressed encoding. Our approach is based on a novel mathematical treatment of the evolution map, which involves a higher-order series expansion based on Duhamel's principle and approximating multiple integrals using scaled Gaussian quadrature. Our method easily generalizes to simulating quantum dynamics with time-dependent Lindbladians. Furthermore, our method of approximating multiple integrals using scaled Gaussian quadrature could potentially be used to produce a more efficient approximation of time-ordered integrals, and therefore can simplify existing quantum algorithms for simulating time-dependent Hamiltonians based on a truncated Dyson series.
Thiago Bergamaschi
Abstract: The ground state energy and the free energy of Quantum Local Hamiltonians are fundamental quantities in quantum many-body physics, however, it is QMA-Hard to estimate them in general. In this paper, we develop new techniques to find classical, additive error product-state approximations for these quantities on certain families of Quantum $k$-Local Hamiltonians. Namely, those which are either dense, have low threshold rank, or are defined on a sparse graph that excludes a fixed minor, building on the methods and the systems studied by Brandão and Harrow, Gharibian and Kempe, and Bansal, Bravyi and Terhal.
We present two main technical contributions. First, we discuss a connection between product-state approximations of local Hamiltonians and combinatorial graph property testing. We develop a series of \textit{weak Szemer\'edi regularity} lemmas for $k$-local Hamiltonians, built on those of Frieze and Kannan and others. We use them to develop \textit{constant time} sampling algorithms, and to characterize the `vertex sample complexity' of the Local Hamiltonian problem, in an analog to a classical result by Alon, de la Vega, Kannan and Karpinski. Second, we build on the information-theoretic product-state approximation techniques by Brandão and Harrow, extending their results to the free energy and to an asymmetric graph setting. We leverage this structure to define families of algorithms for the free energy at low temperatures, and new algorithms for certain sparse graph families.
Ryu Hayakawa, Jordi Weggemans, Tomoyuki Morimae, Chris Cade, Marten Folkertsma, Sevag Gharibian and Francois Le Gall
Abstract: Estimating the ground state energy of a local Hamiltonian is a central problem in quantum chemistry. In order to further investigate its complexity and the potential of quantum algorithms for quantum chemistry, Gharibian and Le Gall (STOC 2022) recently introduced the guided local Hamiltonian problem (GLH), which is a variant of the local Hamiltonian problem where an approximation of a ground state (which is called a guiding state) is given as an additional input. Gharibian and Le Gall showed quantum advantage (more precisely, BQP-completeness) for GLH with 6-local Hamiltonians when the guiding state has fidelity (inverse-polynomially) close to 1/2 with a ground state.
In this paper, we optimally improve both the locality and the fidelity parameter: we show that the BQP-completeness persists even with 2-local Hamiltonians, and even when the guiding state has fidelity (inverse-polynomially) close to 1 with a ground state. Moreover, we show that the BQP-completeness also holds for 2-local physically motivated Hamiltonians on a 2D square lattice or a 2D triangular lattice. Beyond the hardness of estimating the ground state energy, we also show that BQP-hardness persists when estimating energies of excited states of these Hamiltonians instead. These results make further steps towards establishing practical quantum advantage in quantum chemistry.
Paul Beame and Niels Kornerup
Abstract: Cumulative memory---the sum of space used over the steps of a computation---is a fine-grained measure of time-space complexity that was introduced to analyze cryptographic applications more precisely. It is a more accurate cost measure for algorithms with infrequent spikes in memory usage in the context of technologies such as cloud computing that allow dynamic allocation and de-allocation of resources during their execution.
We give the first lower bounds on cumulative memory complexity that apply to general sequential classical algorithms. We also prove the first such bounds for bounded-error quantum circuits. Moreover, we develop general paradigms for bounding cumulative memory complexity inspired by the standard paradigms for proving time-space tradeoff lower bounds, which only lower bound the maximum space used during an execution.
With these new paradigms we obtain lower bounds on cumulative memory that are essentially as strong as the best time-space tradeoff lower bounds which are known to be tight in many cases. Among many possible applications, we show that any classical sorting algorithm with success probability at least $1/\text{poly}(n)$ requires cumulative memory $\tilde \Omega(n^2)$, any classical matrix multiplication algorithm requires cumulative memory $\Omega(n^6/T)$, any quantum sorting circuit requires cumulative memory $\Omega(n^3/T)$, and any quantum circuit that finds $k$ disjoint collisions in a random function requires cumulative memory $\Omega(k^3 n/T^2)$.
Petr Hlineny and Jan Jedelský
Abstract: Twin-width is a structural width parameter introduced by Bonnet, Kim, Thomassé and Watrigant [FOCS 2020]. Very briefly, its essence is a gradual reduction (a contraction sequence) of the given graph down to a single vertex while maintaining limited difference of neighbourhoods of the vertices, and it can be seen as widely generalizing several other traditional structural parameters. Having such a sequence at hand allows us to solve many otherwise hard problems efficiently. Graph classes of bounded twin-width, in which appropriate contraction sequences are efficiently constructible, are thus of interest in combinatorics and in computer science. However, we currently do not know in general how to obtain a witnessing contraction sequence of low width efficiently, and published upper bounds on the twin-width in non-trivial cases are often “astronomically large”.
We focus on planar graphs, which are known to have bounded twin-width (already since the introduction of twin-width), but the first explicit “non-astronomical” upper bounds on the twin-width of planar graphs appeared just a year ago; namely the bound of at most 183 by Jacob and Pilipczuk [arXiv, January 2022], and 583 by Bonnet, Kwon and Wood [arXiv, February 2022]. A subsequent manuscript of Bekos et al. improved the bound down to 37 [arXiv, April 2022]. We prove that the twin-width of every planar graph is at most 8, and construct a witnessing contraction sequence in linear time. Note that the currently best lower-bound planar example is of twin-width 7, by Král and Lamaison [arXiv, September 2022]. We also prove that the twin-width of every bipartite planar graph is at most 6, and again construct a witnessing sequence in linear time.
Leslie Ann Goldberg and Marc Roth
Abstract: Given a class of graphs C, the problem ParitySub(C) is defined as follows. The input is a graph H in C together with an arbitrary graph G. The problem is to compute, modulo 2, the number of subgraphs of G that are isomorphic to H. The goal of this research is to determine for which classes C the problem ParitySub(C) is fixed-parameter tractable (FPT), i.e., solvable in time f(|H|)·|G|^{O(1)}.
Curticapean, Dell, and Husfeldt (ESA 2021) conjectured that ParitySub(C) is FPT if and only if the class of allowed patterns C is matching splittable, which means that for some fixed B, every H in C can be turned into a matching (a graph in which every vertex has degree at most 1) by removing at most B vertices.
Assuming the randomised Exponential Time Hypothesis, we prove their conjecture for (I) all hereditary pattern classes C, and (II) all tree pattern classes, i.e., all classes C such that every H in C is a tree.
We also establish almost tight fine-grained upper and lower bounds for the case of hereditary patterns (I).
Michał Wlodarczyk
Abstract: In Chordal/Interval Vertex Deletion we ask how many vertices one needs to remove from a graph to make it chordal (respectively: interval). We study these problems under the parameterization by treewidth ($tw$) of the input graph $G$. On the one hand, we present an algorithm for Chordal Vertex Deletion with running time $2^{O(tw)}|V(G)|$, improving upon the running time $2^{O(tw^2)}|V(G)|^{O(1)}$ by Jansen, de Kroon, and Wlodarczyk (STOC'21). When a tree decomposition of width $tw$ is given, the base of the exponent equals $2^{\omega-1}\cdot 3+1$. Our algorithm is based on a novel link between chordal graphs and graphic matroids, which allows us to employ the framework of representative families. On the other hand, we prove that the known $2^{O(tw \log tw)}|V(G)|$-time algorithm for Interval Vertex Deletion cannot be improved assuming the Exponential Time Hypothesis.
Laure Morelle, Ignasi Sau, Giannos Stamoulis and Dimitrios M. Thilikos
Abstract: Let ${\cal G}$ be a minor-closed graph class and let $G$ be an $n$-vertex graph. We say that $G$ is a {\em $k$-apex} of ${\cal G}$ if $G$ contains a set $S$ of at most $k$ vertices such that $G\setminus S$ belongs to ${\cal G}$. Our first result is an algorithm that decides whether $G$ is a $k$-apex of ${\cal G}$ in time $2^{{\sf poly}(k)}\cdot n^2$.
This algorithm improves the previous one, given by Sau, Stamoulis, and Thilikos [ICALP 2020, TALG 2022], whose running time was $2^{{\sf poly}(k)}\cdot n^3$. The {\em elimination distance} of $G$ to ${\cal G}$, denoted by ${\sf ed}_{\cal G}(G)$, is the minimum number of rounds required to reduce each connected component of $G$ to a graph in ${\cal G}$ by removing one vertex from each connected component in each round. Bulian and Dawar [Algorithmica 2017] proved the existence of an {\sf FPT}-algorithm, with parameter $k$, to decide whether ${\sf ed}_{\cal G}(G)\leq k$. This algorithm is based on the computability of the minor-obstructions and its dependence on $k$ is not explicit. We extend the techniques used in the first algorithm to decide whether ${\sf ed}_{\cal G}(G)\leq k$ in time $2^{2^{2^{{\sf poly}(k)}}}\cdot n^2$. This is the first algorithm for this problem with an explicit parametric dependence in $k$. In the special case where ${\cal G}$ excludes some apex-graph as a minor, we give two alternative algorithms, one running in time $2^{{2^{{\cal O}(k^2\log k)}}}\cdot n^2$ and one running in time $2^{{\sf poly}(k)}\cdot n^3$.
As a stepping stone for these algorithms, we provide an algorithm that decides whether ${\sf ed}_{\cal G}(G)\leq k$ in time $2^{{\cal O}({\sf tw}\cdot k + {\sf tw}\log{\sf tw})}\cdot n$, where ${\sf tw}$ is the treewidth of $G$. This algorithm combines the dynamic programming framework of Reidl, Rossmanith, Villaamil, and Sikdar [ICALP 2014] for the particular case where ${\cal G}$ contains only the empty graph (i.e., for treedepth) with the representative-based techniques introduced by Baste, Sau, and Thilikos [SODA 2020]. In all the algorithmic complexities above, {\sf poly} is a polynomial function whose degree depends on ${\cal G}$, while the hidden constants also depend on ${\cal G}$.
Finally, we provide explicit upper bounds on the size of the graphs in the minor-obstruction set of the class of graphs ${\cal E}_k({\cal G})=\{G \mid {\sf ed}_{\cal G}(G)\leq k\}$.
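The recursive definition of elimination distance lends itself to a direct, exponential-time reference implementation. The Python sketch below (ours; the names and the demo class are illustrative, not the paper's algorithm) instantiates the recursion with 𝒢 = edgeless graphs, a simple minor-closed class.

```python
# Exponential-time sketch of the elimination-distance recursion (ours, for
# intuition only): ed(G) = 0 if G is in the class; for a connected G outside
# the class, 1 + min over vertex deletions; otherwise the max over components.

def components(vertices, adj):
    seen, comps = set(), []
    for s in vertices:
        if s in seen:
            continue
        stack, comp = [s], set()
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(adj[v] & vertices)
        seen |= comp
        comps.append(comp)
    return comps

def elim_dist(vertices, adj, in_class):
    if in_class(vertices, adj):
        return 0
    comps = components(vertices, adj)
    if len(comps) > 1:
        return max(elim_dist(c, adj, in_class) for c in comps)
    return 1 + min(elim_dist(vertices - {v}, adj, in_class) for v in vertices)

def edgeless(vertices, adj):
    return all(not (adj[v] & vertices) for v in vertices)

# A 4-cycle: deleting one vertex leaves a path, whose middle vertex is then
# deleted in a second round, so the elimination distance to edgeless is 2.
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(elim_dist(set(adj), adj, edgeless))  # -> 2
```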
Hans L. Bodlaender, Carla Groenland and Michał Pilipczuk
Abstract: We investigate the parameterized complexity of \textsc{Binary CSP} parameterized by the vertex cover number and the treedepth of the constraint graph, as well as by a selection of related modulator-based parameters. The main findings are as follows:
We expect that many of the studied classes will be useful in the future for pinpointing the complexity of various structural parameterizations of graph problems.
Lukas Drexler, Jan Eube, Kelin Luo, Heiko Röglin, Melanie Schmidt and Julian Wargalla
Abstract: Motivated by an application from geodesy, we study the connected k-center problem and the connected k-diameter problem. These problems arise from the classical k-center and k-diameter problems by adding a side constraint. For the side constraint, we are given an undirected connectivity graph G on the input points, and a clustering is now only feasible if every cluster induces a connected subgraph in G. Usually in clustering problems one assumes that the clusters are pairwise disjoint. We study this case but additionally also the case that clusters are allowed to be non-disjoint. This can help to satisfy the connectivity constraints.
Our main result is an O(1)-approximation algorithm for the disjoint connected k-center and k-diameter problem for Euclidean spaces of low dimension (constant d) and for metrics with constant doubling dimension. For general metrics, we get an O(log^2(k))-approximation. Our algorithms work by computing a non-disjoint connected clustering first and transforming it into a disjoint connected clustering.
We complement these upper bounds by several upper and lower bounds for variations and special cases of the model.
Yossi Azar and Danny Vainstein
Abstract: We present a novel technique to cluster points in a metric space. A well-known non-parametric objective is to embed the metric space into a simpler structured metric space such as a line (i.e., Linear Arrangement) or a binary tree (i.e., Hierarchical Clustering). Points which are close in the metric space should be mapped to close points/leaves in the line/tree; similarly, points which are far in the metric space should be far in the line or on the tree. In particular we consider the Maximum Linear Arrangement problem (Hassin and Rubinstein 2001) and the Maximum Hierarchical Clustering problem (Vincent Cohen-Addad et al. 2017) applied to metrics.
We design approximation schemes ($1 - \epsilon$ approximation for any constant $\epsilon > 0$) for these objectives. In particular this shows that by considering metrics one may significantly improve former approximations ($0.5$ for Max Linear Arrangement and $0.74$ for Max Hierarchical Clustering). Our main technique, which is called multi-layer peeling, consists of recursively peeling off points which are far from the "core" of the metric space. The recursion ends once the core becomes a sufficiently densely weighted metric space (i.e. the average distance is at least a constant times the diameter) or once it becomes negligible with respect to its inner contribution to the objective. Interestingly, the algorithm in the Linear Arrangement case is much more involved than that in the Hierarchical Clustering case, and uses a significantly more delicate peeling.
Jun-Ting Hsieh and Pravesh Kothari
Abstract: In this note, we describe an $\alpha_{GW} + \tilde{\Omega}(1/d^2)$-factor approximation algorithm for Max-Cut on weighted graphs of degree at most $d$. Here, $\alpha_{GW} \approx 0.878$ is the worst-case approximation ratio of the Goemans-Williamson rounding for Max-Cut. This improves on previous results for unweighted graphs by Feige, Karpinski, and Langberg, and by Florén. Our guarantee is obtained by a tighter analysis of the solution obtained by applying a natural local improvement procedure to the Goemans-Williamson rounding of the basic SDP strengthened with triangle inequalities.
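For intuition, here is a minimal Python sketch (ours; it shows only the generic 1-flip local search, not the paper's analysis or the GW rounding itself) of the kind of local improvement step referred to above: starting from any side assignment, repeatedly move a vertex whose switch strictly increases the cut weight.

```python
# Ours, for illustration only: the generic 1-flip local improvement for
# weighted Max-Cut. Starting from any side assignment (for instance one
# produced by Goemans-Williamson hyperplane rounding), repeatedly move any
# vertex whose switch strictly increases the total cut weight.

def local_improve(n, weighted_edges, side):
    adj = {v: [] for v in range(n)}
    for u, v, w in weighted_edges:
        adj[u].append((v, w))
        adj[v].append((u, w))
    improved = True
    while improved:
        improved = False
        for v in range(n):
            same = sum(w for u, w in adj[v] if side[u] == side[v])
            cross = sum(w for u, w in adj[v] if side[u] != side[v])
            if same > cross:  # flipping v gains (same - cross) > 0
                side[v] ^= 1
                improved = True
    return side

edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 0, 1.0)]     # a triangle
print(local_improve(3, edges, {0: 0, 1: 0, 2: 0}))  # reaches a cut of weight 2
```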
Ilan Doron, Ariel Kulik and Hadas Shachnai
Abstract: We study the budgeted versions of the well known matching and matroid intersection problems. While both problems admit a polynomial-time approximation scheme (PTAS) [Berger et al. (Math. Programming, 2011), Chekuri, Vondrak and Zenklusen (SODA 2011)], it has been an intriguing open question whether they admit a fully polynomial-time approximation scheme (FPTAS), or even an efficient PTAS (EPTAS).
In this paper we answer the second part of this question affirmatively, by presenting an EPTAS for budgeted matching and budgeted matroid intersection. A main component of our scheme is a novel construction of representative sets for desired solutions, whose cardinality depends only on $\varepsilon$, the accuracy parameter. Thus, enumerating over solutions within a representative set leads to an EPTAS. This crucially distinguishes our algorithms from previous approaches, which rely on exhaustive enumeration over the solution set. Our ideas for constructing representative sets may find use in tackling other budgeted optimization problems, and are thus of independent interest.
David Stalfa, Rajmohan Rajaraman and Sheng Yang
Abstract: We study the problem of scheduling precedence-constrained jobs on heterogeneous machines in the presence of non-uniform job and machine communication delays. We are given a set of $n$ unit size precedence-ordered jobs, and a set of $m$ related machines each with size $m_i$ (machine $i$ can execute at most $m_i$ jobs at any time). Each machine $i$ has an associated in-delay $\rho^{\mathrm{in}}_i$ and out-delay $\rho^{\mathrm{out}}_i$. Each job $v$ also has an associated in-delay $\rho^{\mathrm{in}}_v$ and out-delay $\rho^{\mathrm{out}}_v$.
In a schedule, job $v$ may be executed on machine $i$ at time $t$ if each predecessor $u$ of $v$ is completed on $i$ before time $t$ or on any machine $j$ before time $t - (\rho^{\mathrm{in}}_i + \rho^{\mathrm{out}}_j + \rho^{\mathrm{out}}_u + \rho^{\mathrm{in}}_v)$. The objective is to construct a schedule that minimizes makespan, which is the maximum completion time over all jobs.
We consider schedules which allow duplication of jobs as well as schedules which do not. When duplication is allowed, we provide an asymptotic $\mathrm{polylog}(n)$-approximation algorithm. This approximation is further improved in the setting with uniform machine speeds and sizes. Our best approximation for non-uniform delays is provided for the setting with uniform speeds, uniform sizes, and no job delays. For schedules with no duplication, we obtain an asymptotic $\mathrm{polylog}(n)$-approximation for the above model, and a true $\mathrm{polylog}(n)$-approximation for symmetric machine and job delays. These results represent the first polylogarithmic approximation algorithms for scheduling with non-uniform communication delays.
Finally, we consider a more general model, where the delay can be an arbitrary function of the job and the machine executing it: job $v$ can be executed on machine $i$ at time $t$ if all of $v$'s predecessors are executed on $i$ by time $t-1$ or on any machine by time $t - \rho_{v,i}$. We present an approximation-preserving reduction from the Unique Machines Precedence-constrained Scheduling (\textsc{umps}) problem, first defined in [13], to this job-machine delay model. The reduction entails logarithmic hardness for this delay setting, as well as polynomial hardness if the conjectured hardness of \textsc{umps} holds.
This set of results is among the first steps toward cataloging the rich landscape of problems in non-uniform delay scheduling.
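Since the feasibility rule above is fully explicit, one can write a small validator for it. The Python sketch below (ours; the variable names and the convention that a unit job started at time t completes at t+1 are our assumptions) checks a duplication-free schedule against the stated delay constraint, ignoring machine capacities for brevity.

```python
# Hedged sketch (ours): check the stated delay constraint for a
# duplication-free schedule. start[v] and mach[v] give each unit job's start
# time and machine; a job started at time t completes at t + 1. Machine
# capacity constraints (at most m_i jobs at a time) are omitted for brevity.

def delays_respected(prec, start, mach, rho_in_m, rho_out_m, rho_in, rho_out):
    for u, v in prec:  # u must precede v
        t, i, j = start[v], mach[v], mach[u]
        finish_u = start[u] + 1
        if j == i and finish_u <= t:
            continue  # same machine: no communication delay
        delay = rho_in_m[i] + rho_out_m[j] + rho_out[u] + rho_in[v]
        if finish_u <= t - delay:
            continue  # cross-machine: all in/out delays have elapsed
        return False
    return True

# Two machines with unit in/out delays, jobs with zero delays:
prec = [("u", "v")]
zero = {"u": 0, "v": 0}
print(delays_respected(prec, {"u": 0, "v": 1}, {"u": 0, "v": 0},
                       [1, 1], [1, 1], zero, zero))  # True (same machine)
print(delays_respected(prec, {"u": 0, "v": 2}, {"u": 0, "v": 1},
                       [1, 1], [1, 1], zero, zero))  # False (needs t >= 3)
```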
Andrzej Dorobisz and Jakub Kozik
Abstract: We investigate local computation algorithms (LCA) for two-coloring of k-uniform hypergraphs. We focus on hypergraph instances that satisfy a strengthened assumption of the Lovasz Local Lemma of the form 2^{1−αk}(Δ+1)e<1, where Δ is the bound on the maximum edge degree. The main question which arises here is for how large α there exists an LCA that is able to properly color such hypergraphs in polylogarithmic time per query. We describe briefly how upgrading the classical sequential procedure of Beck from 1991 with Moser and Tardos' RESAMPLE yields a polylogarithmic LCA that works for α up to 1/4. Then, we present an improved procedure that solves a wider range of instances by allowing α up to 1/3.
Kazusato Oko, Shinsaku Sakaue and Shin-ichi Tanigawa
Abstract: Spectral hypergraph sparsification, an attempt to extend well-known spectral graph sparsification to hypergraphs, has been extensively studied over the past few years. For undirected hypergraphs, Kapralov, Krauthgamer, Tardos, and Yoshida~(2022) have proved an $\varepsilon$-spectral sparsifier of the optimal $O^*(n)$ size, where $n$ is the number of vertices and $O^*$ suppresses the $\varepsilon^{-1}$ and $\log n$ factors. For directed hypergraphs, however, the optimal sparsifier size has not been known. Our main contribution is the first algorithm that constructs an $O^*(n^2)$-size $\varepsilon$-spectral sparsifier for a weighted directed hypergraph. Our result is optimal up to the $\varepsilon^{-1}$ and $\log n$ factors since there is a lower bound of $\Omega(n^2)$ even for directed graphs. We also show the first non-trivial lower bound of $\Omega(n^2/\varepsilon)$ for general directed hypergraphs. The basic idea of our algorithm is borrowed from the spanner-based sparsification for ordinary graphs by Koutis and Xu~(2016). Their iterative sampling approach is indeed useful for designing sparsification algorithms in various circumstances. To demonstrate this, we also present a similar iterative sampling algorithm for undirected hypergraphs that attains one of the best size bounds, enjoys parallel implementation, and can be transformed to be fault-tolerant.
Daniel Agassy, Dani Dorfman and Haim Kaplan
Abstract: A $(\phi,\epsilon)$-expander-decomposition of a graph $G$ (with $n$ vertices and $m$ edges) is a partition of $V$ into clusters $V_1,\ldots,V_k$ with conductance $\Phi(G[V_i]) \ge \phi$, such that there are at most $\epsilon m$ inter-cluster edges. Such a decomposition plays a crucial role in many graph algorithms. We give a randomized $\tilde{O}(m/\phi)$ time algorithm for computing a $(\phi, \phi\log^2 {n})$-expander decomposition. This improves upon the $(\phi, \phi\log^3 {n})$-expander decomposition also obtained in $\tilde{O}(m/\phi)$ time by [Saranurak and Wang, SODA 2019] (SW) and brings the number of inter-cluster edges within a logarithmic factor of optimal.
One crucial component of SW's algorithm is a non-stop version of the cut-matching game of [Khandekar, Rao, Vazirani, JACM 2009] (KRV): the cut player does not stop when it gets from the matching player an unbalanced sparse cut, but continues to play on a trimmed part of the large side. The crux of our improvement is the design of a non-stop version of the cleverer cut player of [Orecchia, Schulman, Vazirani, Vishnoi, STOC 2008] (OSVV). The cut player of OSVV uses a more sophisticated random walk, a subtle potential function, and spectral arguments. Designing and analysing a non-stop version of this game was an explicit open question asked by SW.
Ishay Haviv
Abstract: A subset of $[n] = \{1,2,\ldots,n\}$ is called stable if it forms an independent set in the cycle on the vertex set $[n]$. In 1978, Schrijver proved via a topological argument that for all integers $n$ and $k$ with $n \geq 2k$, the family of stable $k$-subsets of $[n]$ cannot be covered by $n-2k+1$ intersecting families. We study two total search problems whose totality relies on this result.
In the first problem, denoted by Schrijver(n,k,m), we are given access to a coloring of the stable $k$-subsets of $[n]$ with $m = m(n,k)$ colors, where $m \leq n-2k+1$, and the goal is to find a pair of disjoint subsets that are assigned the same color. While for $m = n-2k+1$ the problem is known to be PPA-complete, we prove that for $m \leq \alpha \cdot n/k$, with $\alpha$ being any fixed constant, the problem admits an efficient algorithm. For $m = \lfloor n/2 \rfloor-2k+1$, we prove that the problem is efficiently reducible to the Kneser problem, for which no hardness result is known. Motivated by the relation between the problems, we investigate the family of unstable $k$-subsets of $[n]$, which might be of independent interest.
In the second problem, called Unfair Independent Set in Cycle, we are given $\ell$ subsets $V_1, \ldots, V_\ell$ of $[n]$, where $\ell \leq n-2k+1$ and $|V_i| \geq 2$ for all $i \in [\ell]$, and the goal is to find a stable $k$-subset $S$ of $[n]$ satisfying the constraints $|S \cap V_i| \leq |V_i|/2$ for $i \in [\ell]$. We prove that the problem is PPA-complete and that its restriction to instances with $n=3k$ is at least as hard as the Cycle plus Triangles problem, for which no efficient algorithm is known. In contrast, we prove that there exists a constant $c$ for which the restriction of the problem to instances with $n \geq c \cdot k$ can be solved in polynomial time.
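To illustrate the objects in Schrijver(n,k,m), the following brute-force Python sketch (ours; exponential, unlike the efficient algorithms discussed above) enumerates the stable k-subsets of [n], i.e., the independent sets of size k in the cycle, and searches a given coloring for two disjoint sets with the same color.

```python
from itertools import combinations

# Brute force (ours, exponential): find two disjoint stable k-subsets of [n]
# receiving the same color. Stable = independent in the cycle on {0,...,n-1}.

def stable_subsets(n, k):
    for S in combinations(range(n), k):
        if all((v + 1) % n not in S for v in S):
            yield frozenset(S)

def monochromatic_disjoint_pair(n, k, coloring):
    seen = {}  # color -> stable sets seen so far with that color
    for S in stable_subsets(n, k):
        for T in seen.get(coloring(S), []):
            if S.isdisjoint(T):
                return S, T
        seen.setdefault(coloring(S), []).append(S)
    return None

# Toy coloring of the stable 2-subsets of [8] with 3 colors:
print(monochromatic_disjoint_pair(8, 2, lambda S: min(S) % 3))
```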
Eric Rivals, Michelle Sweering and Pengfei Wang
Abstract: Consider words of length n. The set of all periods of a word of length $n$ is a subset of {0,1,2,...,n-1}. However, not every subset of {0,1,2,...,n-1} is a valid set of periods. In a seminal paper in 1981, Guibas and Odlyzko proposed to encode the set of periods of a word into a binary string of length n, called an autocorrelation, where a one at position i denotes a period of i. They considered the question of recognizing a valid period set, and also studied the number of valid period sets for length n, denoted $\kappa_n$. They conjectured that $\ln(\kappa_n)$ asymptotically converges to a constant times $\ln^2(n)$. Although improved lower bounds for $\ln(\kappa_n)/\ln^2(n)$ were proposed in 2001, the question of a tight upper bound has remained open since Guibas and Odlyzko's paper. Here, we exhibit an upper bound for this fraction, which implies its convergence and settles this long-standing conjecture. Moreover, we extend our result to find similar bounds for the number of correlations: a generalization of autocorrelations which encodes the overlaps between two strings.
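The autocorrelation itself is straightforward to compute. The Python sketch below (ours, for illustration) builds it directly from the definition and brute-forces $\kappa_n$ for tiny n over a binary alphabet (by a result of Guibas and Odlyzko, alphabets of size at least two realize the same autocorrelations).

```python
from itertools import product

# Ours, for illustration: the autocorrelation of a word is the n-bit string
# with a 1 at position p iff p is a period, i.e. word[i] == word[i+p] for all i.

def autocorrelation(word):
    n = len(word)
    return "".join(
        "1" if all(word[i] == word[i + p] for i in range(n - p)) else "0"
        for p in range(n))

def kappa(n):
    # brute-force count of distinct valid period sets of length-n words
    return len({autocorrelation("".join(w)) for w in product("ab", repeat=n)})

print(autocorrelation("abaab"))         # -> 10010 (periods 0 and 3)
print([kappa(n) for n in range(1, 8)])  # grows much more slowly than 2^n
```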
Frits Vaandrager and Thorsten Wißmann
Abstract: We provide a new perspective on the problem of how high-level state machine models with abstract actions can be related to low-level models in which these actions are refined by sequences of concrete actions. We describe the connection between high-level and low-level actions using action codes, a variation of the prefix codes known from coding theory. For each action code R, we introduce a contraction operator α_R that turns a low-level model M into a high-level model, and a refinement operator ϱ_R that transforms a high-level model N into a low-level model. We establish a Galois connection ϱ_R(N) ⊑ M ⇔ N ⊑ α_R(M), where ⊑ is the well-known simulation preorder. For conformance, we typically want to obtain an overapproximation of model M. To this end, we also introduce a concretization operator γ_R, which behaves like the refinement operator but adds arbitrary behavior at intermediate points, giving us a second Galois connection α_R(M) ⊑ N ⇔ M ⊑ γ_R(N). Action codes may be used to construct adaptors that translate between concrete and abstract actions during learning and testing of Mealy machines. If Mealy machine M models a black-box system then α_R(M) describes the behavior that can be observed by a learner/tester that interacts with this system via an adaptor derived from code R. Whenever α_R(M) implements (or conforms to) N, we may conclude that M implements (or conforms to) γ_R(N).
Wojciech Rozowski, Tobias Kappé, Dexter Kozen, Todd Schmid and Alexandra Silva
Abstract: We introduce Probabilistic Guarded Kleene Algebra with Tests (ProbGKAT), an extension of GKAT that allows reasoning about uninterpreted imperative programs with probabilistic branching. We give its operational semantics in terms of a special class of probabilistic automata. We give a sound and complete Salomaa-style axiomatisation of bisimilarity of ProbGKAT expressions. Finally, we show that bisimilarity of ProbGKAT expressions can be decided in O(n^3 log n) time via a generic partition refinement algorithm.
Michael Blondin and François Ladouceur
Abstract: Population protocols form a well-established model of computation of passively mobile anonymous agents with constant-size memory. It is well known that population protocols compute Presburger-definable predicates, such as absolute majority and counting predicates. In this work, we initiate the study of population protocols operating over arbitrarily large data domains. More precisely, we introduce population protocols with unordered data as a formalism to reason about anonymous crowd computing over unordered sequences of data. We first show that it is possible to determine whether an unordered sequence from an infinite data domain has a datum with absolute majority. We then establish the expressive power of the immediate observation restriction of our model, namely where, in each interaction, an agent observes another agent who is unaware of the interaction.
Fabian Birkmann, Stefan Milius, and Henning Urbat
Abstract: We propose a novel topological perspective on data languages recognizable by orbit-finite nominal monoids. For this purpose we introduce pro-orbit-finite nominal topological spaces, which for bounded support size coincide with nominal Stone spaces and are shown to be dually equivalent to a subcategory of nominal boolean algebras. Recognizable data languages are characterized as topologically clopen sets of pro-orbit-finite words. In addition, we explore the expressive power of pro-orbit-finite equations by establishing a nominal version of Reiterman’s pseudovariety theorem.
Titouan Carette, Etienne Moutot, Thomas Perez, and Renaud Vilmart
Abstract: We exhibit a strong connection between the matchgate formalism introduced by Valiant and the ZW-calculus of Coecke and Kissinger. This connection provides a natural compositional framework for matchgate theory as well as a direct combinatorial interpretation of the diagrams of ZW-calculus through the perfect matchings of their underlying graphs.
We identify a precise fragment of ZW-calculus, the planar W-calculus, which we prove to be complete and universal for matchgates, i.e., linear maps satisfying the matchgate identities. Computing scalars of the planar W-calculus corresponds to counting perfect matchings of planar graphs, and so can be carried out in polynomial time using the FKT algorithm, making the planar W-calculus an efficiently simulable fragment of the ZW-calculus, in a similar way that the Clifford fragment is for ZX-calculus. This work opens new directions for the investigation of the combinatorial properties of ZW-calculus as well as the study of perfect matching counting through compositional diagrammatic techniques.
Anders Aamand, Adam Karczmarz, Jakub Łącki, Nikos Parotsidis, Peter Rasmussen and Mikkel Thorup
Abstract: We present a dynamic algorithm for maintaining the connected and 2-edge-connected components in an undirected graph subject to edge deletions. The algorithm is Monte-Carlo randomized and processes any sequence of edge deletions in $O(m + n~poly(\log n))$ total time. Interspersed with the deletions, it can answer queries to whether any two given vertices currently belong to the same (2-edge-)connected component in constant time. Our result is based on a general Monte-Carlo randomized reduction from decremental $c$-edge-connectivity to a variant of fully-dynamic $c$-edge-connectivity on a sparse graph.
For non-sparse graphs with $\Omega(n~poly(\log n))$ edges, our connectivity and $2$-edge-connectivity algorithms handle all deletions in optimal linear total time, using existing algorithms for the respective fully-dynamic problems. This improves upon an $O(m \log(n^2/m) + n~poly(\log n))$-time algorithm of Thorup [J. Alg. 1999], which runs in linear time only for graphs with $\Omega(n^2)$ edges.
Our constant amortized cost for edge deletions in decremental connectivity in non-sparse graphs should be contrasted with an $\Omega(\log n/\log\log n)$ worst-case time lower bound in the decremental setting [Alstrup, Husfeldt, and Rauhe FOCS'98] as well as an $\Omega(\log n)$ amortized time lower-bound in the fully-dynamic setting [Patrascu and Demaine STOC'04].
Petra Berenbrink, Lukas Hintze, Hamed Hosseinpour, Dominik Kaaser and Malin Rau
Abstract: In this paper we study dynamic averaging load balancing on general graphs.
We consider infinite time and dynamic processes, where in every step new load items are assigned to randomly chosen nodes.
A matching is chosen, and the load is averaged over the edges of that matching.
We analyze the discrete case where load items are indivisible; moreover, our results also carry over to the continuous case where load items can be split arbitrarily.
For the choice of the matchings we consider three different models: random matchings of linear size, random matchings consisting of a single edge, and deterministic sequences of matchings covering the whole graph.
We bound the discrepancy, which is defined as the difference between the maximum and the minimum load.
Our results cover a broad range of graph classes and, to the best of our knowledge, our analysis is the first result for discrete and dynamic averaging load balancing processes.
As our main technical contribution we develop a drift result that allows us to apply techniques based on the effective resistance in an electrical network to the setting of dynamic load balancing.
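As a toy illustration of the process (ours, purely for intuition; it implements the single-edge-matching model on a cycle, not the paper's analysis), the following Python simulation interleaves random arrivals with discrete averaging over one random edge and tracks the discrepancy.

```python
import random

# Toy simulation (ours): dynamic averaging load balancing on a cycle. Each
# step, a new unit item arrives at a random node; then one random cycle edge
# is matched and its endpoints balance their loads as evenly as integrality
# allows (the discrete, indivisible-items case).

def step(load, n):
    load[random.randrange(n)] += 1   # dynamic arrival
    u = random.randrange(n)          # random single-edge matching
    v = (u + 1) % n
    total = load[u] + load[v]
    load[u], load[v] = total // 2, total - total // 2

def discrepancy(load):
    return max(load) - min(load)

random.seed(0)
load = [0] * 16
for _ in range(10000):
    step(load, 16)
print(discrepancy(load))  # stays far below the total load of 10000
```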
Adam Karczmarz and Piotr Sankowski
Abstract: We study the exact fully dynamic shortest paths problem. For real-weighted directed graphs, we show a deterministic fully dynamic data structure with O~(mn^{4/5}) worst-case update time processing arbitrary s,t-distance queries in O~(n^{4/5}) time. This constitutes the first non-trivial update/query tradeoff for this problem in the regime of sparse weighted directed graphs.
Moreover, we give a Monte Carlo randomized fully dynamic reachability data structure processing single-edge updates in O~(n\sqrt{m}) worst-case time and queries in O(\sqrt{m}) time. For sparse digraphs, such a tradeoff has only been previously described with amortized update time~[Roditty and Zwick, SIAM J. Comp. 2008].
Sixue Liu, Zhao Song, Hengjie Zhang, Lichen Zhang and Tianyi Zhou
Abstract: We study the problem of solving linear programs in the streaming model. Given a constraint matrix $A\in \mathbb{R}^{m\times n}$ and vectors $b\in \mathbb{R}^m, c\in \mathbb{R}^n$, we develop a space-efficient interior point method (IPM) that optimizes solely over the dual program, and we obtain efficient algorithms for several related problems.
In addition to our space-efficient IPM, we also give algorithms for solving SDD systems and for the isolation lemma in $\widetilde O(n)$ space, which are the cornerstones for our graph results.
David E. Roberson and Tim Seppelt
Abstract: We show that feasibility of the t-th level of the Lasserre semidefinite programming hierarchy for graph isomorphism can be expressed as a homomorphism indistinguishability relation. In other words, we define a class L_t of graphs such that graphs G and H are not distinguished by the t-th level of the Lasserre hierarchy if and only if they admit the same number of homomorphisms from any graph in L_t. By analysing the treewidth of graphs in L_t we prove that the 3t-th level of Sherali--Adams linear programming hierarchy is as strong as the t-th level of Lasserre. Moreover, we show that this is best possible in the sense that 3t cannot be lowered to 3t-1 for any t. The same result holds for the Lasserre hierarchy with non-negativity constraints, which we similarly characterise in terms of homomorphism indistinguishability over a family L_t^+ of graphs. Additionally, we give characterisations of level-t Lasserre with non-negativity constraints in terms of logical equivalence and via a graph colouring algorithm akin to the Weisfeiler--Leman algorithm. This provides a polynomial time algorithm for determining if two given graphs are distinguished by the t-th level of the Lasserre hierarchy with non-negativity constraints.
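The quantity underlying these characterizations is the homomorphism count hom(F, G). A small brute-force Python sketch (ours, exponential in |V(F)|, for illustration only) makes the notion concrete: homomorphism indistinguishability over a class compares these counts for every F in the class.

```python
from itertools import product

# Brute force (ours, exponential): hom(F, G) = number of maps V(F) -> V(G)
# that send every edge of F to an edge of G. Homomorphism indistinguishability
# over a class compares hom(F, G) and hom(F, H) for every F in the class.

def hom(F_vertices, F_edges, G_vertices, G_adj):
    count = 0
    for image in product(G_vertices, repeat=len(F_vertices)):
        phi = dict(zip(F_vertices, image))
        if all(phi[v] in G_adj[phi[u]] for u, v in F_edges):
            count += 1
    return count

# hom(triangle, K_4): each of the 4 triangles in K_4 is hit by 6 maps -> 24.
K4_adj = {v: {u for u in range(4) if u != v} for v in range(4)}
print(hom([0, 1, 2], [(0, 1), (1, 2), (2, 0)], list(range(4)), K4_adj))
```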
Jun-Ting Hsieh, Pravesh Kothari, Aaron Potechin and Jeff Xu
Abstract: Saunderson, Parrilo and Willsky asked the following elegant geometric question: what is the largest m = m(d) such that there is an ellipsoid in $\mathbb{R}^d$ that passes through $v_1, v_2, \ldots, v_m$ with high probability when the $v_i$'s are chosen independently from the standard Gaussian distribution $N(0, I_d)$? The existence of such an ellipsoid is equivalent to the existence of a positive semidefinite matrix $X$ such that $v_i^\top X v_i = 1$ for every $1 \leq i \leq m$ — a natural example of a random semidefinite program. SPW conjectured that $m = (1 − o(1))d^2/4$ with high probability. Very recently, Potechin, Turner, Venkat and Wein [10] and Kane and Diakonikolas [8] proved that $m \gtrsim d^2 / \log^{O(1)}(d)$ via a certain natural, explicit construction.
In this work, we give a substantially tighter analysis of their construction to prove that m ≳ d^2/C for an absolute constant C > 0. This resolves one direction of the SPW conjecture up to a constant. Our analysis proceeds via the method of Graphical Matrix Decomposition that has recently been used to analyze correlated random matrices arising in various areas. Our key new technical tool is a refined method to prove singular value upper bounds on certain correlated random matrices that are tight up to absolute dimension-independent constants. In contrast, all previous methods that analyze such matrices lose logarithmic factors in the dimension.
Hadley Black, Iden Kalemaj and Sofya Raskhodnikova
Abstract: We generalize the celebrated isoperimetric inequality of Khot, Minzer, and Safra~(SICOMP 2018) for Boolean functions to the case of real-valued functions $f:\{0,1\}^d\to\mathbb{R}$. Our main tool in the proof of the generalized inequality is a new Boolean decomposition that represents every real-valued function $f$ over an arbitrary partially ordered domain as a collection of Boolean functions over the same domain, roughly capturing the distance of $f$ to monotonicity and the structure of violations of $f$ to monotonicity.
We apply our generalized isoperimetric inequality to improve algorithms for testing monotonicity and approximating the distance to monotonicity for real-valued functions. Our tester for monotonicity has query complexity $\widetilde{O}(\min(r \sqrt{d},d))$, where $r$ is the size of the image of the input function. (The best previously known tester makes $O(d)$ queries, as shown by Chakrabarty and Seshadhri (STOC 2013).) Our tester is nonadaptive and has 1-sided error. We prove a matching lower bound for nonadaptive, 1-sided error testers for monotonicity. We also show that the distance to monotonicity of real-valued functions that are $\alpha$-far from monotone can be approximated nonadaptively within a factor of $O(\sqrt{d\log d})$ with query complexity polynomial in $1/\alpha$ and the dimension $d$. This query complexity is known to be nearly optimal for nonadaptive algorithms even for the special case of Boolean functions. (The best previously known distance approximation algorithm for real-valued functions, by Fattal and Ron (TALG 2010) achieves $O(d\log r)$-approximation.)
Pan Peng and Yuyang Wang
Abstract: We revisit the relation between two fundamental property testing models for bounded-degree \emph{directed} graphs: the \emph{bidirectional} model in which the algorithms are allowed to query both the outgoing edges and incoming edges of a vertex, and the \emph{unidirectional} model in which only queries to the outgoing edges are allowed. Czumaj, Peng and Sohler [STOC 2016] showed that for directed graphs with both maximum indegree and maximum outdegree upper bounded by $d$, any property that can be tested with query complexity $O_{\varepsilon,d}(1)$ in the bidirectional model can be tested with $n^{1-\Omega_{\varepsilon,d}(1)}$ queries in the unidirectional model. In particular, if the proximity parameter $\varepsilon$ approaches $0$, then the query complexity of the transformed tester in the unidirectional model approaches $n$. It was left open whether this transformation can be further improved, or whether there exists any property that exhibits such an extreme separation.
We prove that testing \emph{subgraph-freeness}, where the subgraph contains $k$ source components, requires $\Omega(n^{1-\frac{1}{k}})$ queries in the unidirectional model. This directly gives the first explicit properties that exhibit an $O_{\varepsilon,d}(1)$ vs $\Omega(n^{1-f(\varepsilon,d)})$ separation of the query complexities between the bidirectional model and unidirectional model, where $f(\varepsilon,d)$ is a function that approaches $0$ as $\varepsilon$ approaches $0$. Furthermore, our lower bound also resolves a conjecture by Hellweg and Sohler [ESA 2012] on the query complexity of testing $k$-star-freeness.
Dana Ron and Omer Cohen Sidon
Abstract: In this work, we study the problem of approximating the distance to subsequence-freeness in the sample-based distribution-free model. For a given subsequence (word) $w = w_1 \dots w_k$, a sequence (text) $T = t_1 \dots t_n$ is said to contain $w$ if there exist indices $1 \leq i_1 < \dots < i_k \leq n$ such that $t_{i_{j}} = w_j$ for every $1 \leq j \leq k$.
Otherwise, $T$ is $w$-free. Ron and Rosin (ACM TOCT 2022) showed that the number of samples both necessary and sufficient for one-sided error testing of subsequence-freeness in the sample-based distribution-free model is $\Theta(k/\epsilon)$.
Denoting by $Dist(T,w,p)$ the distance of $T$ to $w$-freeness under a distribution $p :[n]\to [0,1]$, we are interested in obtaining an estimate $wDist$, such that $|wDist - Dist(T,w,p)| \leq \delta$ with probability at least $2/3$, for a given distance parameter $\delta$. Our main result is an algorithm whose sample complexity is $\tilde{O}(k^2/\delta^2)$. We first present an algorithm that works when the underlying distribution $p$ is uniform, and then show how it can be modified to work for any (unknown) distribution $p$. We also show that a quadratic dependence on $1/\delta$ is necessary.
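For concreteness, containment of w as a subsequence is decided by the standard greedy two-pointer scan; T is w-free exactly when the scan fails. A minimal Python sketch (ours, for illustration):

```python
# Greedy two-pointer test (ours, for illustration): does text T contain the
# word w as a subsequence? T is w-free iff this returns False.

def contains_subsequence(T, w):
    j = 0
    for c in T:
        if j < len(w) and c == w[j]:
            j += 1
    return j == len(w)

print(contains_subsequence("babbac", "abc"))  # True (indices 1, 2, 5)
print(contains_subsequence("ccbbaa", "abc"))  # False: the text is abc-free
```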
Pascal Baumann, Moses Ganardi, Rupak Majumdar, Ramanathan Thinniyam Srinivasan, and Georg Zetzsche
Abstract: In the language-theoretic approach to refinement verification, we check that every trace of an implementation belongs to the language of a specification. We consider the refinement verification problem for asynchronous programs against specifications given by a Dyck language. We show that this problem is EXPSPACE-complete---the same complexity as that of language emptiness and of refinement verification against a regular specification. Our algorithm uses several novel technical ingredients. First, we show that checking whether the coverability language of a succinctly described vector addition system with states (VASS) is contained in a Dyck language is EXPSPACE-complete. Second, in the more technical part of the proof, we define an ordering on words and show a downward closure construction that allows replacing the (context-free) language of each task in an asynchronous program by a regular language. Unlike the downward closure operations usually considered in infinite-state verification, our ordering is not a well-quasi-ordering, and we have to construct the regular language ab initio. Once the tasks have been replaced, we show a reduction to an appropriate VASS and use our first ingredient. In addition to its inherent theoretical interest, refinement verification with Dyck specifications captures common practical resource-usage patterns based on reference counting, for which few algorithmic techniques were known.
Thomas Henzinger, Pavol Kebis, Nicolas Mazzocchi, and N. Ege Saraç
Abstract: The operator precedence languages (OPLs) represent the largest known subclass of the context-free languages which enjoys all desirable closure and decidability properties. This includes the decidability of language inclusion, which is the ultimate verification problem. Operator precedence grammars, automata, and logics have been investigated and used, for example, to verify programs with arithmetic expressions and exceptions (both of which are deterministic pushdown but lie outside the scope of the visibly pushdown languages). In this paper, we complete the picture and give, for the first time, an algebraic characterization of the class of OPLs in the form of a syntactic congruence that has finitely many equivalence classes exactly for the operator precedence languages. This is a generalization of the celebrated Myhill-Nerode theorem for the regular languages to OPLs. As one of the consequences, we show that universality and language inclusion for nondeterministic operator precedence automata can be solved by an antichain algorithm. Antichain algorithms avoid determinization and complementation through an explicit subset construction, by leveraging a quasi-order on words, which allows the pruning of the search space for counterexample words without sacrificing completeness. Antichain algorithms can be implemented symbolically, and these implementations are today the best-performing algorithms in practice for the inclusion of finite automata. We give a generic construction of the quasi-order needed for antichain algorithms from a finite syntactic congruence. This yields the first antichain algorithm for OPLs, an algorithm that solves the ExpTime-hard language inclusion problem for OPLs in exponential time.
Javier Esparza and Vincent Grande
Abstract: We study black-box testing for stochastic systems and arbitrary omega-regular specifications, explicitly including liveness properties. We are given a finite-state probabilistic system that we can only execute from the initial state. We have no information on the number of reachable states, or on the probabilities; further, we can only partially observe the states. The only action we can take is to restart the system. We design restart strategies guaranteeing that, if the specification is violated with nonzero probability, then w.p.1 the number of restarts is finite, and the infinite run executed after the last restart violates the specification. This improves on previous work that required full observability. We obtain asymptotically optimal upper bounds on the expected number of steps until the last restart. We conduct experiments on a number of benchmarks, and show that our strategies allow one to find violations in Markov chains much larger than the ones considered in previous work.
Abstract: ICALP had a decisive influence on the development of Theoretical Computer Science in Europe and also on my career. I had my first ICALP paper at ICALP 74 in Saarbruecken and I presented it a few days after joining Saarland University.
Abstract: Over the past 35 years, one branch of formal methods has pushed the envelope from discrete to continuous state spaces, to handle the verification and control of cyber-physical systems.
At the heart of these methods lie correctness certificates that are functions on continuous state spaces, such as abstraction, invariant, barrier, and progress functions.
Entirely independently, a particular data structure for representing continuous nonlinear functions has gained recent prominence: the neural network.
By using neural networks to jointly learn both controllers and correctness certificates for continuous and hybrid state spaces, the complexity of the residual verification tasks can be reduced.
We survey the resulting learner-verifier feedback architecture and explain why we believe that it offers great potential for a systematic symbiosis of deep learning and formal methods, to provide formal guarantees for an important class of AI systems such as robots and autopilots.
Benjamin Aram Berendsohn, Ishay Golinsky, Haim Kaplan and Laszlo Kozma
Abstract: Search trees on trees (STTs) generalize the fundamental binary search tree (BST) data structure: in STTs the underlying search space is an arbitrary tree, whereas in BSTs it is a path. An optimal BST of size $n$ can be computed for a given distribution of queries in $O(n^2)$ time [Knuth, Acta Inf. 1971] and centroid BSTs provide a nearly-optimal alternative, computable in $O(n)$ time [Mehlhorn, SICOMP 1977].
By contrast, optimal STTs are not known to be computable in polynomial time, and the fastest constant-approximation algorithm runs in $O(n^3)$ time [Berendsohn, Kozma, SODA 2022]. Centroid trees can be defined for STTs analogously to BSTs, and they have been used in a wide range of algorithmic applications. In the unweighted case (i.e., for a uniform distribution of queries), the centroid tree can be computed in $O(n)$ time [Brodal, Fagerberg, Pedersen, Östlin, ICALP 2001; Della Giustina, Prezza, Venturini, SPIRE 2019]. These algorithms, however, do not readily extend to the weighted case. Moreover, no approximation guarantees were previously known for centroid trees in either the unweighted or weighted cases.
In this paper we revisit centroid trees in a general, weighted setting, and we settle both the algorithmic complexity of constructing them, and the quality of their approximation. For constructing a weighted centroid tree, we give an output-sensitive $O(n \log{h}) \subseteq O(n \log{n})$ time algorithm, where $h$ is the height of the resulting centroid tree. If the weights are of polynomial complexity, the running time is $O(n \log\log{n})$. We show these bounds to be optimal, in a general decision tree model of computation. For approximation, we prove that the cost of a centroid tree is at most twice the optimum, and this guarantee is best possible, both in the weighted and unweighted cases. We also give tight, fine-grained bounds on the approximation-ratio for bounded-degree trees and on the approximation-ratio of more general $\alpha$-centroid trees.
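For the unweighted case mentioned above, the classical construction is short enough to sketch. The Python code below (ours; a simple O(n log n) version, not the paper's weighted algorithm) picks a centroid, i.e., a vertex whose removal leaves components of size at most n/2, makes it the root, and recurses.

```python
# Ours, for illustration: the unweighted centroid tree of a tree, built by
# recursive centroid decomposition in O(n log n). `adj` is an adjacency dict.

def centroid_tree(adj):
    removed = set()
    children = {v: [] for v in adj}

    def sizes_and_parents(root):
        parent, order = {root: None}, []
        stack = [root]
        while stack:
            v = stack.pop()
            order.append(v)
            for u in adj[v]:
                if u not in parent and u not in removed:
                    parent[u] = v
                    stack.append(u)
        size = {}
        for v in reversed(order):  # children are summed before their parent
            size[v] = 1 + sum(size[u] for u in adj[v]
                              if u not in removed and parent.get(u) == v)
        return size, parent

    def centroid(root):
        size, parent = sizes_and_parents(root)
        n, v = size[root], root
        while True:  # walk into any child subtree of size > n/2
            heavy = [u for u in adj[v] if u not in removed
                     and parent.get(u) == v and size[u] > n // 2]
            if not heavy:
                return v
            v = heavy[0]

    def build(root):
        c = centroid(root)
        removed.add(c)
        for u in adj[c]:
            if u not in removed:
                children[c].append(build(u))
        return c

    root = build(next(iter(adj)))
    return root, children

# A path 0-1-2-3-4-5-6 yields a balanced centroid tree rooted at 3.
adj = {v: [u for u in (v - 1, v + 1) if 0 <= u <= 6] for v in range(7)}
print(centroid_tree(adj))
```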
Tatsuya Terao
Abstract: In the matroid partitioning problem, we are given $k$ matroids $\mathcal{M}_1 = (V, \mathcal{I}_1), \dots , \mathcal{M}_k = (V, \mathcal{I}_k)$ defined over a common ground set $V$ of $n$ elements, and we need to find a partitionable set $S \subseteq V$ of largest possible cardinality, denoted by $p$. Here, a set $S \subseteq V$ is called partitionable if there exists a partition $(S_1, \dots , S_k)$ of $S$ with $S_i \in \mathcal{I}_i$ for $i = 1, \ldots, k$. In 1986, Cunningham presented a matroid partition algorithm that uses $O(n p^{3/2} + k n)$ independence oracle queries, which was the previously known best algorithm. This query complexity is $O(n^{5/2})$ when $k \leq n$.
Our main result is to present a matroid partition algorithm that uses $\tilde{O}(k^{1/3} n p + k n)$ independence oracle queries, which is $\tilde{O}(n^{7/3})$ when $k \leq n$. This improves upon Cunningham's algorithm. To obtain this, we present a new approach, \emph{edge recycling augmentation}, which rests on two new ideas: an efficient utilization of the binary search technique of Nguyen and of Chakrabarty-Lee-Sidford-Singla-Wong, and a careful analysis of the number of independence oracle queries. Our analysis differs significantly from the one for matroid intersection algorithms, because of the parameter $k$. We also present a matroid partition algorithm that uses $\tilde{O}((n + k) \sqrt{p})$ rank oracle queries.
Monika Henzinger, Paul Liu, Jan Vondrák and Da Wei Zheng
Abstract: The maximization of submodular functions has found widespread application in areas such as machine learning, combinatorial optimization, and economics, where practitioners often wish to enforce various constraints; the matroid constraint has been investigated extensively due to its algorithmic properties and expressive power. Though tight approximation algorithms for general matroid constraints exist in theory, the running times of such algorithms typically scale quadratically, and are not practical for truly large scale settings. Recent progress has focused on fast algorithms for important classes of matroids given in explicit form. Currently, nearly-linear time algorithms only exist for graphic and partition matroids [EN19]. In this work, we develop algorithms for submodular maximization constrained by graphic matroids, transversal matroids on dense graphs, or laminar matroids, in time near-linear in the size of their representation. Our algorithms achieve an optimal approximation of $1-1/e-\epsilon$ and both generalize and accelerate the results of Ene and Nguyen [EN19]. In fact, the running time of our algorithm cannot be improved without improving the classic continuous greedy algorithm of Badanidiyuru and Vondrák [BV14].
To achieve near-linear running time, we make use of dynamic data structures that maintain bases with approximate maximum cardinality and weight under certain element updates. These data structures need to support a weight decrease operation and a novel FREEZE operation that allows the algorithm to freeze elements (i.e. force them to be contained) in its basis regardless of future data structure operations. For the laminar matroid, we present a new dynamic data structure using the top tree interface of Alstrup et al. [AHLT05] that maintains the maximum weight basis under insertions and deletions of elements in $O(\log n)$ time. This data structure needs to support certain subtree query and path update operations, performed at every insertion and deletion, that are non-trivial to handle in conjunction. For the transversal matroid the FREEZE operation corresponds to requiring the data structure to keep a certain set $S$ of vertices matched, a property that we call $S$-stability. While there is a large body of work on dynamic matching algorithms, none is $S$-stable while maintaining an approximate maximum weight matching under vertex updates. We give the first such algorithm for bipartite graphs with total running time linear (up to log factors) in the number of edges.
Karthik Murali and Therese Biedl
Abstract: A graph is called 1-plane if it has an embedding in the plane where each edge is crossed at most once by another edge. A crossing of a 1-plane graph is called an X-crossing if the endpoints of the crossing pair of edges induce a matching. In this paper, we show how to compute the vertex connectivity of a 1-plane graph G without X-crossings in linear time.
To do so, we show that for any two vertices u,v in a minimum separating set S, the distance between u and v in an auxiliary graph \Lambda(G) (obtained by planarizing G and then inserting into each face a new vertex adjacent to all vertices of the face) is small. It hence suffices to search for a minimum separating set in various subgraphs \Lambda_i of \Lambda(G) with small diameter. Since \Lambda(G) is planar, the subgraphs \Lambda_i have small treewidth. Each minimum separating set S then gives rise to a partition of \Lambda_i into three vertex sets with special properties; such a partition can be found via Courcelle's theorem in linear time.
Shyan Akmal and Ce Jin
Abstract: Our work concerns algorithms for a variant of \textsf{Maximum Flow} in unweighted graphs. In the All-Pairs Connectivity (APC) problem, we are given a graph G on n vertices and m edges, and are tasked with computing the size of a minimum st-cut in G for all pairs of vertices (s,t). Significant algorithmic breakthroughs have recently shown that over undirected graphs, APC can be solved in n^{2+o(1)} time, which is essentially optimal. In contrast, the true time complexity of APC over directed graphs remains open: this problem can be solved in O(m^ω) time, where ω ∈ (2, 2.373) is the exponent of matrix multiplication, but no matching conditional lower bound is known.
Following [Abboud et al., ICALP 2019], we study a bounded version of APC called the k-Bounded All Pairs Connectivity (k-APC) problem. In this variant of APC, we are given an integer k in addition to the graph G, and are now tasked with reporting the size of a minimum st-cut only for pairs (s,t) of vertices with min-cut value less than k (if the minimum st-cut has size at least k, we can just report that it is "large" instead of computing the exact value).
Our main result is an O((kn)^ω) time algorithm for k-APC in directed graphs.
This is the first algorithm which solves k-APC faster than the more general APC for all k ≥ 3. This runtime is O(n^ω) for all k ≤ poly(log n), which essentially matches the optimal runtime for the k=1 case of k-APC, under popular conjectures from fine-grained complexity. Previously, this runtime was only known for k ≤ 2 in general [Georgiadis et al., ICALP 2017], and for k ≤ o((log n)^{1/2}) in the special case of directed acyclic graphs [Abboud et al., ICALP 2019]. Our result employs the same algebraic framework used in previous work, introduced by [Cheung, Lau, and Leung, FOCS 2011]. A direct implementation of this framework involves inverting a large random matrix. Our faster algorithm for k-APC is based on the insight that it suffices to invert a low-rank random matrix instead of a generic random matrix, which yields a speed-up.
We also obtain a new algorithm for a variant of k-APC, the k-Bounded All-Pairs Vertex Connectivity (k-APVC) problem, where we now report the sizes of minimum vertex-cuts instead of edge-cuts. We show how to solve k-APVC in O(k^2n^ω) time. Previous work gave an O((kn)^ω) algorithm for this problem [Abboud et al., ICALP 2019], so our algorithm is faster if ω > 2.
Ilan Cohen and Debmalya Panigrahi
Abstract: Online allocation is a broad class of problems where items arriving online have to be allocated to agents who have a fixed utility/cost for each assigned item, so as to maximize/minimize some objective. This framework captures a broad range of fundamental problems such as the Santa Claus problem (maximizing minimum utility), Nash welfare maximization (maximizing geometric mean of utilities), makespan minimization (minimizing maximum cost), minimization of $\ell_p$-norms, and so on. We focus on divisible items (i.e., fractional allocations) in this paper. Even for divisible items, these problems are characterized by strong super-constant lower bounds in the classical worst-case online model.
In this paper, we study online allocations in the learning-augmented setting, i.e., where the algorithm has access to some additional (machine-learned) information about the problem instance. We introduce a universal algorithmic framework for learning-augmented online allocation that produces nearly optimal solutions for this broad range of maximization and minimization objectives using only a single learned parameter for every agent. As corollaries of our general framework, we improve prior results of Lattanzi et al. (SODA 2020) and Li and Xian (ICML 2021) for learning-augmented makespan minimization, and obtain the first learning-augmented nearly-optimal algorithms for the other objectives such as Santa Claus, Nash welfare, $\ell_p$-minimization, etc. We also give tight bounds on the resilience of our algorithms to errors in the learned parameters, and study the learnability of these parameters.
Konstantina Mellou, Marco Molinaro and Rudy Zhou
Abstract: Motivated by cloud computing applications, we study the problem of how to optimally deploy new hardware subject to both power and robustness constraints. To model the situation observed in large-scale data centers, we introduce the \emph{Online Demand Scheduling with Failover} problem. There are $m$ identical devices with capacity constraints. Demands come one-by-one and, to be robust against a device failure, need to be assigned to a pair of devices. When a device fails (in a failover scenario), each demand assigned to it is rerouted to its paired device (which may now run at increased capacity). The goal is to assign demands to the devices to maximize the total utilization subject to both the normal capacity constraints as well as these novel failover constraints. These latter constraints introduce new decision tradeoffs not present in classic assignment problems such as the Multiple Knapsack problem and AdWords.
In the worst-case model, we design a deterministic $\approx \frac{1}{2}$-competitive algorithm, and show this is essentially tight. To circumvent this constant-factor loss, which in the context of big cloud providers represents substantial capital losses, we consider the stochastic arrival model, where all demands come i.i.d. from an unknown distribution. In this model we design an algorithm that achieves a sub-linear additive regret (i.e. as $opt$ or $m$ increases, the multiplicative competitive ratio goes to $1$). This requires a combination of different techniques, including a configuration LP with a non-trivial post-processing step and an online monotone matching procedure introduced by Rhee and Talagrand.
Amirreza Akbari, Navid Eslami, Henrik Lievonen, Darya Melnyk, Joona Särkijärvi and Jukka Suomela
Abstract: In this work, we give a unifying view of locality in four settings: distributed algorithms, sequential greedy algorithms, dynamic algorithms, and online algorithms.
We introduce a new model of computing, called the online-LOCAL model: the adversary reveals the nodes of the input graph one by one, in the same way as in classical online algorithms, but for each new node we get to see its radius-T neighborhood before choosing the output. Instead of looking ahead in time, we have the power of looking around in space.
We compare the online-LOCAL model with three other models: the LOCAL model of distributed computing, where each node produces its output based on its radius-T neighborhood, its sequential counterpart SLOCAL, and the dynamic-LOCAL model, where changes in the dynamic input graph only influence the radius-T neighborhood of the point of change.
The SLOCAL and dynamic-LOCAL models are sandwiched between the LOCAL and online-LOCAL models, with LOCAL being the weakest and online-LOCAL the strongest model. In general, all four models are distinct, but we study in particular locally checkable labeling problems (LCLs), which is a family of graph problems extensively studied in the context of distributed graph algorithms.
We prove that for LCL problems in paths, cycles, and rooted trees, all four models are roughly equivalent: the locality of any LCL problem falls in the same broad class - O(log* n), Theta(log n), or n^{Theta(1)} - in all four models. In particular, this result enables one to generalize prior lower-bound results from the LOCAL model to all four models, and it also allows one to simulate e.g. dynamic-LOCAL algorithms efficiently in the LOCAL model.
We also show that this equivalence does not hold in two-dimensional grids or general bipartite graphs. We provide an online-LOCAL algorithm with locality O(log n) for the 3-coloring problem in bipartite graphs - this is a problem with locality Omega(n^{1/2}) in the LOCAL model and Omega(n^{1/10}) in the SLOCAL model.
Yann Disser, Max Klimm, Kevin Schewior and David Weckbecker
Abstract: We consider the problem of finding an incremental solution to a cardinality-constrained maximization problem that not only captures the solution for a fixed cardinality, but describes how to gradually grow the solution as the cardinality bound increases.
The goal is to find an incremental solution that guarantees a good competitive ratio against the optimum solution for all cardinalities simultaneously.
The central challenge is to characterize maximization problems where this is possible, and to determine the best-possible competitive ratio that can be attained.
A lower bound of $2.18$ and an upper bound of $\varphi + 1 \approx 2.618$ are known on the competitive ratio for monotone and accountable objectives [Bernstein et al., Math.~Prog., 2022], which capture a wide range of maximization problems.
We introduce a continuization technique and identify an optimal incremental algorithm that provides strong evidence that $\varphi + 1$ is the best-possible competitive ratio.
Using this continuization, we obtain an improved lower bound of $2.246$ by studying a particular recurrence relation whose characteristic polynomial has complex roots exactly beyond the lower bound.
Based on the optimal continuous algorithm combined with a scaling approach, we also provide a $1.772$-competitive randomized algorithm.
We complement this by a randomized lower bound of $1.447$ via Yao's principle.
Sharat Ibrahimpur, Manish Purohit, Zoya Svitkina, Erik Vee and Joshua Wang
Abstract: Online caching is among the most fundamental and well-studied problems in the area of online algorithms. Innovative algorithmic ideas and analysis --- including potential functions and primal-dual techniques --- give insight into this still-growing area.
Here, we introduce a novel potential function to upper bound the cost of an online algorithm paired with a new dual-fitting technique to lower bound the cost of an offline optimal algorithm. We apply these techniques to the Caching with Reserves problem recently introduced by Ibrahimpur et al.~\cite{ibrahimpur2022caching} and give an $O(\log k)$-competitive fractional online algorithm via a marking strategy. We also design a new online rounding algorithm that runs in polynomial time to obtain an $O(\log k)$-competitive randomized integral algorithm. Additionally, we provide a new, simple proof for randomized marking for the classical unweighted paging problem.
Nicolas Resch, Chen Yuan and Yihan Zhang
Abstract: In this work we consider the list-decodability and list-recoverability of arbitrary $q$-ary codes, for all integer values of $q\geq 2$. A code is called $(p,L)_q$-list-decodable if every radius $pn$ Hamming ball contains less than $L$ codewords; $(p,\ell,L)_q$-list-recoverability is a generalization where we place radius $pn$ Hamming balls on every point of a combinatorial rectangle with side length $\ell$ and again stipulate that there be less than $L$ codewords.
Our main contribution is to precisely calculate the maximum value of $p$ for which there exist infinite families of positive rate $(p,\ell,L)_q$-list-recoverable codes, the quantity we call the \emph{zero-rate threshold}. Denoting this value by $p_*$, we in fact show that codes correcting a $p_*+\varepsilon$ fraction of errors must have size $O_{\varepsilon}(1)$, i.e., independent of $n$. Such a result is typically referred to as a ``Plotkin bound.'' To complement this, a standard random code with expurgation construction shows that there exist positive rate codes correcting a $p_*-\varepsilon$ fraction of errors. We also follow a classical proof template (typically attributed to Elias and Bassalygo) to derive from the zero-rate threshold other tradeoffs between rate and decoding radius for list-decoding and list-recovery.
Technically, proving the Plotkin bound boils down to demonstrating the Schur convexity of a certain function defined on the $q$-simplex as well as the convexity of a univariate function derived from it. We remark that an earlier argument claimed similar results for $q$-ary list-decoding; however, we point out that this earlier proof is flawed.
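The definitions above can be checked directly by brute force on small codes; a minimal sketch (exponential in $n$, purely illustrative):

    from itertools import product

    def hamming(x, y):
        return sum(a != b for a, b in zip(x, y))

    def is_list_decodable(code, q, p, L):
        # (p, L)_q-list-decodability by definition: every center z in [q]^n
        # has fewer than L codewords within Hamming distance p*n.
        n = len(next(iter(code)))
        radius = p * n
        return all(sum(hamming(z, c) <= radius for c in code) < L
                   for z in product(range(q), repeat=n))

    code = [(0, 0, 0, 0), (1, 1, 1, 1)]               # binary repetition code, n = 4
    print(is_list_decodable(code, q=2, p=0.24, L=2))  # True
    print(is_list_decodable(code, q=2, p=0.5,  L=2))  # False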
Kuan Cheng, Zhengzhong Jin, Xin Li, Zhide Wei and Yu Zheng
Abstract: This work continues the study of linear error correcting codes against adversarial insertion-deletion errors (insdel errors). Previously, the work of Cheng, Guruswami, Haeupler, and Li \cite{CGHL21} showed the existence of asymptotically good linear insdel codes that can correct a fraction of errors arbitrarily close to $1$ over some constant-size alphabet, or achieve rate arbitrarily close to $1/2$ even over the binary alphabet. As shown in \cite{CGHL21}, these bounds are also the best possible. However, known explicit constructions in \cite{CGHL21}, and subsequent improved constructions by Con, Shpilka, and Tamo \cite{9770830}, all fall short of meeting these bounds. Over any constant-size alphabet, they can only achieve rate $< 1/8$ or correct $< 1/4$ fraction of errors; over the binary alphabet, they can only achieve rate $< 1/1216$ or correct $< 1/54$ fraction of errors. Previous techniques appear to face inherent barriers to achieving rate better than $1/4$ or correcting more than a $1/2$ fraction of errors.
In this work we give new constructions of such codes that meet these bounds, namely, asymptotically good linear insdel codes that can correct a fraction of errors arbitrarily close to $1$ over some constant-size alphabet, and binary asymptotically good linear insdel codes that can achieve rate arbitrarily close to $1/2$. All our constructions are efficiently encodable and decodable. Our constructions are based on a novel approach of code concatenation, which embeds the index information implicitly into codewords. This significantly differs from previous techniques and may be of independent interest. Finally, we also prove the existence of linear concatenated insdel codes with parameters that match random linear codes, and propose a conjecture about linear insdel codes.
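The underlying error model can be made concrete with a generic (inefficient) nearest-codeword decoder in the insdel metric, which counts deletions plus insertions and is computable from the longest common subsequence; this sketch is ours and is not the paper's decoder.

    from functools import lru_cache

    def insdel_distance(x, y):
        # Minimum number of insertions + deletions turning x into y,
        # i.e. len(x) + len(y) - 2 * LCS(x, y).
        @lru_cache(maxsize=None)
        def lcs(i, j):
            if i == 0 or j == 0:
                return 0
            if x[i - 1] == y[j - 1]:
                return lcs(i - 1, j - 1) + 1
            return max(lcs(i - 1, j), lcs(i, j - 1))
        return len(x) + len(y) - 2 * lcs(len(x), len(y))

    def nearest_codeword(received, code):
        # Brute-force decoding; exponential in general, illustration only.
        return min(code, key=lambda c: insdel_distance(received, c))

    print(insdel_distance("ACGT", "AGT"))   # 1 (one deletion)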
Shu Liu, Chaoping Xing and Chen Yuan
Abstract: Despite numerous results on the list decoding of Hamming-metric codes, the development of list decoding for rank-metric codes has not been as rapid as that of its counterpart. The limits of list decoding obey the Gilbert-Varshamov bound in both metrics. In the Hamming metric, the Gilbert-Varshamov bound is a trade-off among rate, decoding radius and alphabet size, while in the rank metric it is a trade-off among rate, decoding radius and column-to-row ratio (i.e., the ratio between the numbers of columns and rows). Hence, alphabet size and column-to-row ratio play a similar role for list decodability in each metric. In the Hamming metric, it is more challenging to list decode codes over smaller alphabets. In contrast, in the rank metric, it is more difficult to list decode codes with large column-to-row ratio. In particular, it is extremely difficult to list decode square-matrix rank-metric codes (i.e., those with column-to-row ratio equal to $1$).
The main purpose of this paper is to explicitly construct a class of rank-metric codes $\mathcal{C}$ of rate $R$ with column-to-row ratio up to $2/3$, and to efficiently list decode these codes with decoding radius beyond $(1-R)/2$ (note that $(1-R)/2$ is at least half of the relative minimum distance $\delta$). In the literature, the largest column-to-row ratio of rank-metric codes that can be efficiently list decoded beyond half the minimum distance is $1/2$. Thus, it is highly desirable to design efficient list decoding algorithms for rank-metric codes with column-to-row ratio bigger than $1/2$ or even close to $1$.
Our key idea is to compress an element of the field $F_{q^n}$ into a smaller $F_q$-subspace via a linearized polynomial. Thus, the column-to-row ratio is increased at the price of reducing the code rate. Our result shows that this compression technique is powerful; it has not previously been employed in list decoding in either the Hamming or the rank metric.
Apart from the above algebraic technique, we follow some standard techniques to prune down the list. The algebraic idea enables us to pin down the messages into a structured subspace of dimension linear in the number $n$ of columns. This ``periodic'' structure allows us to pre-encode the messages to prune down the list.
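For readers unfamiliar with the rank metric, the distance itself is easy to make concrete; a minimal sketch over $F_2$ (the paper's codes live over extension fields, with codewords viewed as matrices over $F_q$):

    def rank_gf2(rows):
        # Rank over F_2 of a matrix whose rows are given as integer bitmasks,
        # via Gaussian elimination on leading bits.
        pivots = {}
        for r in rows:
            while r:
                top = r.bit_length() - 1
                if top not in pivots:
                    pivots[top] = r
                    break
                r ^= pivots[top]
        return len(pivots)

    def rank_distance(A, B):
        # d(A, B) = rank(A - B); over F_2, subtraction is bitwise XOR.
        return rank_gf2(a ^ b for a, b in zip(A, B))

    print(rank_distance([0b111, 0b000, 0b000], [0b000] * 3))  # 1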
Badih Ghazi, Pritish Kamath, Ravi Kumar, Pasin Manurangsi and Kewen Wu
Abstract: We study the problem of performing counting queries at different levels in hierarchical structures while preserving individuals' privacy. Motivated by applications, we propose a new error measure for this problem by considering a combination of multiplicative and additive approximation to the query results.
We examine known mechanisms in differential privacy (DP) and prove their optimality, under this measure, in the pure-DP setting. In the approximate-DP setting, we design new algorithms achieving significant improvements over known ones.
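As a baseline for the pure-DP setting, the following sketch shows the generic tree-based Laplace mechanism for hierarchical counts (a textbook construction under a power-of-two-leaves assumption, not the specific mechanisms analyzed in the paper):

    import numpy as np

    def private_hierarchy_counts(leaf_counts, epsilon):
        # Build all levels of a binary hierarchy over the leaves, then add
        # Laplace noise. One individual changes one leaf count by 1, hence
        # one count per level; splitting epsilon across levels gives
        # epsilon-DP in total by basic composition.
        levels = [np.asarray(leaf_counts, dtype=float)]
        while len(levels[-1]) > 1:
            levels.append(levels[-1].reshape(-1, 2).sum(axis=1))
        eps_per_level = epsilon / len(levels)
        return [lvl + np.random.laplace(scale=1.0 / eps_per_level, size=lvl.shape)
                for lvl in levels]

    print(private_hierarchy_counts([3, 1, 4, 1, 5, 9, 2, 6], epsilon=1.0))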
Talya Eden, Sofya Raskhodnikova, Quanquan C. Liu and Adam Smith
Abstract: Many deployments of differential privacy in industry are in the local model, where each party releases its private information via a differentially private randomizer. We study triangle counting in the noninteractive and interactive local model with edge differential privacy (which, intuitively, requires that the outputs of the algorithm on graphs that differ in one edge be indistinguishable). In this model, each party's local view consists of the adjacency list of one vertex.
In the noninteractive model, we prove that additive $\Omega(n^2)$ error is necessary, where $n$ is the number of nodes. This lower bound is our main technical contribution. It uses a reconstruction attack with a new class of linear queries and a novel mix-and-match strategy of running the local randomizers with different completions of their adjacency lists. It matches the additive error of the algorithm based on Randomized Response, proposed by Imola, Murakami and Chaudhuri (USENIX 2021) and analyzed by Imola, Murakami and Chaudhuri (CCS 2022) for constant $\varepsilon$. We use a different postprocessing of Randomized Response and provide tight bounds on the variance of the resulting algorithm.
In the interactive setting, we prove a lower bound of $\Omega(n^{3/2})$ on the additive error. Previously, no hardness results were known for interactive, edge-private algorithms in the local model, except for those that follow trivially from the results for the central model. Our work significantly improves on the state of the art in differentially private graph analysis in the local model.
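The local randomizer behind these results is plain randomized response on adjacency bits; a minimal sketch of the primitive and its debiasing (the paper's contributions are the postprocessing and the lower bounds, not this primitive):

    import math, random

    def randomized_response(bit, eps):
        # epsilon-LDP randomizer for one adjacency bit:
        # keep the bit with probability e^eps / (1 + e^eps), else flip it.
        keep = math.exp(eps) / (1.0 + math.exp(eps))
        return bit if random.random() < keep else 1 - bit

    def debias(noisy_bit, eps):
        # Unbiased estimate of the original bit:
        # E[noisy] = (1 - p) + bit * (2p - 1) with p = e^eps / (1 + e^eps),
        # so subtracting (1 - p) and dividing by (2p - 1) recovers bit.
        p = math.exp(eps) / (1.0 + math.exp(eps))
        return (noisy_bit - (1 - p)) / (2 * p - 1)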
Harry Vinall-Smeeth and Christoph Berkholz
Abstract: The task of computing homomorphisms between two finite relational structures A and B is a well-studied question with numerous applications. Since the set Hom(A,B) of all homomorphisms may be very large, having a method of representing it in a succinct way, especially one that enables efficient enumeration and counting, could be extremely useful. One simple yet powerful way of doing so is to decompose Hom(A,B) using union and Cartesian product. Such data structures, called d-representations, were introduced by Olteanu and Zavodny in the context of database theory. Their results also imply that if the treewidth of the left-hand side structure A is bounded, then a d-representation of polynomial size can be found in polynomial time.
We show that for structures of bounded arity this is optimal: if the treewidth is unbounded, then there are instances where the size of any d-representation is superpolynomial. Along the way we develop tools for proving lower bounds on the size of d-representations; in particular, we define a notion of reduction suitable for this context and prove an almost tight lower bound on the size of d-representations of all k-cliques in a graph.
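To see why bounded treewidth helps, consider the simplest case where A is a path: Hom(A,B) factorizes layer by layer, so one can count it without ever materializing the set. A minimal sketch (ours, illustrating the factorization idea rather than d-representations themselves):

    def count_path_homs(adj, n_edges):
        # Number of homomorphisms from the path with n_edges edges into the
        # graph given by adjacency dict adj, via dynamic programming:
        # counts[v] = number of partial homomorphisms ending at v.
        counts = {v: 1 for v in adj}
        for _ in range(n_edges):
            counts = {v: sum(counts[u] for u in adj[v]) for v in adj}
        return sum(counts.values())

    triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
    print(count_path_homs(triangle, 2))   # 12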
Moritz Lichter
Abstract: At the core of the quest for a logic for PTime is a mismatch between algorithms making arbitrary choices and isomorphism-invariant logics. One approach to tackle this problem is witnessed symmetric choice. It allows for choices from definable orbits certified by definable witnessing automorphisms.
We consider the extension of fixed-point logic with counting (IFPC) with witnessed symmetric choice (IFPC+WSC) and a further extension with an interpretation operator (IFPC+WSC+I). The latter operator evaluates a subformula in the structure defined by an interpretation.
When pure fixed-point logic (IFP) is extended in the same way, IFP+WSC+I simulates counting, which IFP+WSC fails to do. For IFPC+WSC, it is unknown whether the interpretation operator increases expressiveness; this question thus allows studying the relation between WSC and interpretations beyond counting.
In this paper, we separate IFPC+WSC from IFPC+WSC+I by showing that IFPC+WSC is not closed under FO-interpretations. By the same argument, we answer an open question of Dawar and Richerby regarding non-witnessed symmetric choice in IFP.
Additionally, we prove that nesting WSC-operators increases expressiveness, using the so-called CFI graphs. We show that if IFPC+WSC+I canonizes a particular class of base graphs, then it also canonizes the corresponding CFI graphs.
This differs from various other logics, where CFI graphs provide difficult instances.
Diptarka Chakraborty, Sourav Chakraborty, Gunjan Kumar and Kuldeep Meel
Abstract: Given a Boolean formula $\phi$ over $n$ variables, the problem of model counting is to compute the number of solutions of $\phi$. Model counting is a fundamental problem in computer science with wide-ranging applications in domains such as quantified information leakage, probabilistic reasoning, network reliability, neural network verification, and more. Owing to the #P-hardness of the problem, Stockmeyer initiated the study of the complexity of approximate counting, and showed that $\log n$ calls to an NP oracle are necessary and sufficient to achieve $(\varepsilon,\delta)$ guarantees. The hashing-based framework proposed by Stockmeyer has been very influential in designing practical counters over the past decade, wherein the NP oracle calls are substituted by SAT solver calls in practice. It is well known that an NP oracle does not fully capture the behavior of SAT solvers, as SAT solvers are also designed to provide satisfying assignments when a formula is satisfiable, without additional overhead. Accordingly, the notion of a SAT oracle has been proposed to capture the behavior of SAT solvers: given a Boolean formula, a SAT oracle returns a satisfying assignment if the formula is satisfiable, and returns unsatisfiable otherwise. Since practical state-of-the-art approximate counting techniques use SAT solvers, a natural question is whether a SAT oracle is more powerful than an NP oracle in the context of approximate model counting.
The primary contribution of this work is to study the relative power of the NP oracle and the SAT oracle in the context of approximate model counting. The techniques previously proposed in the context of the NP oracle are too weak to provide strong bounds in the context of the SAT oracle: in contrast to an NP oracle, which provides only 1 bit of information per call, a SAT oracle can provide $n$ bits. We therefore develop a new methodology to achieve our main result: a SAT oracle is no more powerful than an NP oracle in the context of approximate model counting.
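The hashing-based framework can be illustrated with a toy, brute-force version (our sketch, correct only up to constant factors; real counters such as ApproxMC are considerably more careful):

    import random

    def sat_oracle(f, n, xors):
        # Toy stand-in for a SAT oracle: brute force over all assignments;
        # returns a model of f satisfying all parity constraints, or None.
        for m in range(2 ** n):
            x = [(m >> i) & 1 for i in range(n)]
            if f(x) and all(sum(x[i] for i in s) % 2 == b for s, b in xors):
                return x
        return None

    def approx_count(f, n, trials=9):
        # Stockmeyer-style estimate: add random XOR constraints until the
        # formula becomes unsatisfiable; if m constraints are needed, the
        # solution count is roughly 2^m.
        estimates = []
        for _ in range(trials):
            xors = []
            while sat_oracle(f, n, xors) is not None:
                s = [i for i in range(n) if random.random() < 0.5]
                xors.append((s, random.randint(0, 1)))
            m = len(xors)
            estimates.append(2 ** (m - 1) if m else 0)
        return sorted(estimates)[len(estimates) // 2]   # median over trials

    print(approx_count(lambda x: x[0] | x[1], 4))       # true count is 12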
Manuel Bodirsky and Simon Knäuer
Abstract: We show that the problem of deciding for a given finite relation algebra A whether the network satisfaction problem for A can be solved by the k-consistency procedure, for some k, is undecidable. For the important class of finite relation algebras A with a normal representation, however, the decidability of this problem remains open. We show that if A is symmetric and has a flexible atom, then the question whether NSP(A) can be solved by k-consistency, for some k, is decidable (even in polynomial time in the number of atoms of A). This result follows from a more general sufficient condition for the correctness of the k-consistency procedure for finite symmetric relation algebras. In our proof we make use of a result of Alexandr Kazda about finite binary conservative structures.
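For intuition, the k = 2 case of the k-consistency procedure is arc consistency; a minimal sketch for binary constraint networks (ours, for illustration):

    def arc_consistency(domains, constraints):
        # domains: variable -> set of values;
        # constraints: (x, y) -> set of allowed (value_x, value_y) pairs
        # (list each constraint in both directions).
        # Repeatedly delete unsupported values until a fixpoint is reached;
        # an empty domain certifies inconsistency.
        changed = True
        while changed:
            changed = False
            for (x, y), allowed in constraints.items():
                for a in set(domains[x]):
                    if not any((a, b) in allowed for b in domains[y]):
                        domains[x].discard(a)
                        changed = True
        return all(domains.values())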
Austen Z. Fan, Paraschos Koutris and Hangdong Zhao
Abstract: We study the fine-grained complexity of evaluating Boolean Conjunctive Queries and their generalization to sum-of-product problems over an arbitrary semiring. For these problems, we present a general \emph{semiring-oblivious} reduction from the $k$-clique problem to any query structure (hypergraph). Our reduction uses the notion of {\em embedding} a graph to a hypergraph, first introduced by Marx~\cite{Marx13}. As a consequence of our reduction, we can show tight conditional lower bounds for many classes of hypergraphs, including cycles, Loomis-Whitney joins, some bipartite graphs, and chordal graphs. These lower bounds have a dependence on what we call the {\em clique embedding power} of a hypergraph $H$, which we believe is a quantity of independent interest. We show that the clique embedding power is always less than the submodular width of the hypergraph, and present an algorithm for computing it. We conclude with many open problems for future research.
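The sum-of-product framework can be made concrete with a brute-force evaluator over an arbitrary semiring (our sketch; its exponential running time in the number of variables is exactly what the fine-grained bounds above constrain):

    from itertools import product

    def sum_of_product(relations, variables, domain, plus, times, zero, one):
        # relations: list of (vars, weights), where weights maps value
        # tuples to semiring elements. Sums, over all assignments, the
        # product of the relation weights.
        total = zero
        for vals in product(domain, repeat=len(variables)):
            a = dict(zip(variables, vals))
            term = one
            for vs, w in relations:
                term = times(term, w.get(tuple(a[v] for v in vs), zero))
            total = plus(total, term)
        return total

    # Triangle query on K_3 over the counting semiring (N, +, *): 6 homomorphisms.
    E = {(u, v): 1 for u in range(3) for v in range(3) if u != v}
    rels = [(("x", "y"), E), (("y", "z"), E), (("x", "z"), E)]
    print(sum_of_product(rels, ["x", "y", "z"], range(3),
                         lambda a, b: a + b, lambda a, b: a * b, 0, 1))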
Kurt Mehlhorn
Abstract: A set of indivisible goods (a car, a toothbrush, ...) has to be allocated to a set of agents. Each agent has its own preferences over sets of goods. What constitutes a fair allocation? When can we find a fair allocation efficiently? Can we approximate fair allocations?