Combinatorics and Optimization (http://hdl.handle.net/10012/9928)

Chromatic Number of Random Signed Graphs (http://hdl.handle.net/10012/20537, 2024-05-03)
Yuan, Dao Chen
We naturally extend Bollobás's classical method and his result on the chromatic number of random graphs, chi(G(n,p)) ~ n/log_b(n) (for constant p and b = 1/(1-p)), to the chromatic number of random signed graphs, obtaining chi(G(n,p,q)) ~ n/log_b(n) (for constant p, b = 1/(1-p), and q = o(1)). We also give a sufficient bound on q under which, asymptotically almost surely, the chromatic number of G(n,p,q) is unchanged by the addition of negative edges.
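The scale n/log_b(n) can be explored numerically: a greedy colouring of a sample of G(n,p) yields an upper bound on chi(G(n,p)) that is within a constant factor of this quantity. A minimal Python sketch (illustrative only, not the method of the thesis):

```python
import math
import random

def greedy_chromatic(n, p, seed=0):
    """Sample G(n,p) and color it greedily; return the number of colors used."""
    rng = random.Random(seed)
    adj = [set() for _ in range(n)]
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    color = {}
    for v in range(n):
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return max(color.values()) + 1

n, p = 400, 0.5
b = 1 / (1 - p)
# Compare the greedy upper bound with the scale n/log_b(n).
print(greedy_chromatic(n, p), round(n / math.log(n, b), 1))
```

For p = 1/2 and n = 400 the comparison scale n/log_2(n) is about 46; the greedy bound lands in the same ballpark.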
Routing, Scheduling, and Sorting in Consolidated Networks (http://hdl.handle.net/10012/20501, 2024-04-25)
Van Dyk, Madison
Modern parcel logistics networks are designed to ship demand between given origin-destination pairs of nodes in an underlying directed network. Efficiency dictates that volume be consolidated at intermediate nodes in typical hub-and-spoke fashion. In practice, such consolidation requires tracking packages in both space and time (temporal network design), as well as parcel sortation.
In the first half of the thesis, we study solution methods for temporal problems arising in consolidated networks. Many time-dependent network design problems admit time-indexed formulations, but the size of such a formulation depends on the discretization of the time horizon and can become prohibitively large. The recently developed dynamic discretization discovery (DDD) method makes many time-dependent problems more tractable by iteratively solving instances of the problem on smaller networks in which each node has its own discrete set of departure times.
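The core relaxation step of DDD can be sketched concretely. In a partially time-expanded network, an arc departing node u at time t leads to the copy of v at the latest timepoint of v not exceeding the true arrival time; travel times are thus underestimated, so every schedule feasible in the fully time-expanded network remains feasible in the small one. A minimal sketch following the standard construction (hypothetical names, not a formulation specific to this thesis):

```python
def partial_network(arcs, timepoints):
    """Build the arcs of a partially time-expanded network (a DDD relaxation).

    arcs:       dict (u, v) -> travel time
    timepoints: dict v -> sorted list of departure times available at v

    Each expanded arc (u, t) -> (v, t') rounds the true arrival time t + travel
    DOWN to the latest timepoint of v not exceeding it, yielding a relaxation.
    """
    expanded = []
    for (u, v), travel in arcs.items():
        for t in timepoints[u]:
            arrival = t + travel
            candidates = [s for s in timepoints[v] if s <= arrival]
            if candidates:
                expanded.append(((u, t), (v, max(candidates))))
    return expanded

# Tiny instance: path a -> b -> c with coarse timepoint sets.
arcs = {("a", "b"): 3, ("b", "c"): 2}
T = {"a": [0, 5], "b": [0, 4], "c": [0]}
for e in partial_network(arcs, T):
    print(e)
```

Note how the arc departing a at time 5 arrives at b at true time 8 but is rounded down to the copy (b, 4); refining the timepoint sets over iterations is what removes such underestimates.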
Two design elements of existing DDD algorithms make them problematic for region-based hub-and-spoke networks. First, in each iteration, all arcs departing a common node share the same set of departure times. This renders DDD ineffective for problems in which every near-optimal solution requires many distinct departure times at most of the high-degree nodes in the network, a characteristic of region-based networks. Second, static storage constraints are difficult to handle without producing a weak relaxation in each iteration.
To mitigate these limitations, an arc-based DDD framework is proposed in Chapter 3, where departure times are determined at the arc level instead of the node level. We apply this arc-based DDD method to instances of the service network design problem (SND). We show that an arc-based approach is particularly advantageous when instances arise from region-based networks, and when candidate paths are fixed in the base graph for each commodity. Moreover, our algorithm builds upon the existing DDD framework and achieves these improvements with only benign modifications to the original implementation. Additionally, Chapter 4 introduces bounds on additional storage required in each iteration, expanding the applicability of DDD to problems with bounded node storage, such as the universal packet routing problem. Our arguments rely solely on the structure of the standard map, μ, from the original formulation to the smaller relaxed formulations.
In order to maintain consistent operations, some models stipulate that the implemented transportation schedule must be repeated each day. In Chapter 5 we present a DDD model for solving a version of SND with cyclic constraints. We show that these cyclic constraints require new conditions on the time discretization at each node, leading to larger partial networks. We then highlight challenges in reducing the size of partial networks as they grow over each iteration of DDD. We demonstrate that certain policies for removing departure times in each iteration can cause the iterations in DDD to repeat, preventing termination.
In the second half of this thesis, we study parcel sortation, an aspect of routing that has previously received little attention from a theoretical perspective. Warehouses have a limited number of sort points, the physical devices tasked with handling packages destined for a particular downstream warehouse. We propose a mathematical model for the physical requirements and limitations of parcel sortation, and we show that it is NP-hard to determine whether a feasible sortation plan exists. We consider two natural objectives: minimizing the maximum number of sort points required at any warehouse, and minimizing the total number of sort points required in the network.
In Chapter 6, we consider the problem of minimizing the maximum number of sort points required at a warehouse. We discuss several settings in which (near-)optimality of a given sortation instance can be determined efficiently. The algorithms we propose are fast and build on combinatorial witness-set-type lower bounds that are reminiscent of, and extend, those used in earlier work on degree-bounded spanning trees and arborescences. In Chapter 7, we present algorithms for minimizing the total number of sort points required in a network. In contrast to the min-degree setting, it is not known whether this min-cardinality setting is NP-hard. As progress toward answering this question, we present fast combinatorial algorithms for solving in-tree, out-tree, and spider instances. Our algorithms are based on reduction, decomposition, and uncrossing techniques that simplify instances.
Analytic Methods and Combinatorial Plants (http://hdl.handle.net/10012/20425, 2024-04-08)
Chizewer, Jeremy
Combinatorial structures have broad applications in computer science, from error-correcting codes to matrix multiplication. Many analytic tools have been developed for studying these structures. In this thesis, we examine three applications of these tools to problems in combinatorics. By coincidence, each problem involves a combinatorial structure named for a plant: AVL trees, cactus graphs, and sunflowers. We refer to these structures collectively as combinatorial plants.
In our first result, we use a novel decomposition to create a succinct encoding for tree classes satisfying certain properties, extending results of Munro, Nicholson, Benkner, and Wild. This has applications to the study of data structures in computer science, and our encoding supports a wide range of operations in constant time.
To analyze our encoding, we derive asymptotics for the information-theoretic lower bound on the number of bits needed to store these trees. Our method characterizes the exponential growth for the counting sequence of combinatorial classes whose generating functions satisfy certain functional equations, and may be of independent interest. Our analysis applies to AVL trees (a commonly studied self-balancing binary search tree in computer science) as a special case, and we show that about $0.938$ bits per node are necessary and sufficient to encode AVL trees.
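The figure of about 0.938 bits per node can be reproduced numerically from the standard size-and-height recurrence for AVL shapes: the information-theoretic lower bound is log2 of the number of shapes on n nodes, and log2(T_n)/n tends to roughly 0.938. A sketch of the counting dynamic program (a standard recurrence, not the thesis's encoding):

```python
import math
from collections import defaultdict

def avl_counts(max_n):
    """Count AVL tree shapes by number of nodes.

    Convention: the empty tree has height 0; the two subtree heights of any
    node differ by at most 1. Returns total[n] = number of AVL shapes with
    n nodes, for 0 <= n <= max_n.
    """
    by_height = [defaultdict(int)]      # by_height[h][n] = number of shapes
    by_height[0][0] = 1                 # the empty tree
    max_h = int(1.45 * math.log2(max_n + 2)) + 3   # safe height cap
    for h in range(1, max_h + 1):
        cur = defaultdict(int)
        # Root of height h: subtree heights (h-1, h-1), (h-1, h-2), (h-2, h-1).
        for h1, h2 in [(h - 1, h - 1), (h - 1, h - 2), (h - 2, h - 1)]:
            if min(h1, h2) < 0:
                continue
            for n1, c1 in by_height[h1].items():
                for n2, c2 in by_height[h2].items():
                    if n1 + n2 + 1 <= max_n:
                        cur[n1 + n2 + 1] += c1 * c2
        by_height.append(cur)
    total = defaultdict(int)
    for table in by_height:
        for n, c in table.items():
            total[n] += c
    return total

total = avl_counts(200)
print(total[4], total[5], total[7])        # small counts: 4, 6, 17
print(math.log2(total[200]) / 200)         # approaches ~0.938 as n grows
```

The small values 1, 1, 2, 1, 4, 6, 4, 17, ... are the counts of AVL shapes on 0, 1, 2, ... nodes, and the bits-per-node estimate already sits near 0.94 at n = 200.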
Next, we study the hat guessing game on graphs. In this game, a player is placed on each vertex $v$ of a graph $G$ and assigned a colored hat from $h(v)$ possible colors. Each player makes a deterministic guess on their hat color based on the colors assigned to the players on neighboring vertices, and the players win if at least one player correctly guesses their assigned color. If there exists a strategy that ensures at least one player guesses correctly for every possible assignment of colors, the game defined by $\langle G,h\rangle$ is called winning. The hat guessing number of $G$ is the largest integer $q$ such that if $h(v)=q$ for all $v\in V(G)$, then $\langle G,h\rangle$ is winning. We determine whether $\langle G,h\rangle$ is winning for every $h$ whenever $G$ is a cycle, resolving a conjecture of Kokhas and Latyshev in the affirmative and extending it. We then use this result to determine the hat guessing number of every cactus graph, a graph in which every pair of cycles shares at most one vertex.
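For very small instances, whether $\langle G,h\rangle$ is winning can be checked by brute force over all deterministic strategy profiles. A sketch (hypothetical helper names; exponential in the instance size, so illustrative only):

```python
from itertools import product

def is_winning(neighbors, h):
    """Brute-force check whether the hat game <G, h> is winning.

    neighbors: dict v -> tuple of neighbors of v
    h:         dict v -> number of hat colors available at v
    A strategy for player v is any function from the tuple of colors seen
    on v's neighbors to a guess in {0, ..., h(v)-1}.
    """
    verts = sorted(neighbors)
    idx = {v: i for i, v in enumerate(verts)}

    def strategies(v):
        inputs = list(product(*(range(h[u]) for u in neighbors[v])))
        for outs in product(range(h[v]), repeat=len(inputs)):
            yield dict(zip(inputs, outs))

    assignments = list(product(*(range(h[v]) for v in verts)))
    for profile in product(*(list(strategies(v)) for v in verts)):
        strat = dict(zip(verts, profile))
        if all(any(strat[v][tuple(colors[idx[u]] for u in neighbors[v])]
                   == colors[idx[v]] for v in verts)
               for colors in assignments):
            return True
    return False

K2 = {"x": ("y",), "y": ("x",)}
print(is_winning(K2, {"x": 2, "y": 2}))  # True: one player guesses "same", the other "different"
print(is_winning(K2, {"x": 3, "y": 3}))  # False, so the hat guessing number of K2 is 2
```

The two-color strategy on $K_2$ works because the two guesses cover the cases "colors equal" and "colors different", one of which always holds.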
Finally, we study the sunflower problem. A sunflower with $r$ petals is a collection of $r$ sets over a ground set $X$ such that every element in $X$ is in no set, every set, or exactly one set. Erd\H{o}s and Rado~\cite{er} showed that a family of sets of size $n$ contains a sunflower if there are more than $n!(r-1)^n$ sets in the family. Alweiss et al.~\cite{alwz}, and subsequently Rao~\cite{rao} and Bell et al.~\cite{bcw}, improved this bound to $O(r \log n)^n$. We study the case where the pairwise intersections of the set family are restricted. In particular, we improve the best-known bound for set families when the size of the pairwise intersection of any two sets lies in a set $L$. We also present a new bound for the special case when $L$ is the set of nonnegative integers at most $d$, using techniques of Alweiss et al.~\cite{alwz}.
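The definition can be verified directly on small families: $r$ sets form a sunflower exactly when all pairwise intersections equal the common core (the petals are then disjoint outside the core). A brute-force sketch (illustrative only; exponential in the family size):

```python
from itertools import combinations

def find_sunflower(family, r):
    """Return r sets forming a sunflower (all pairwise intersections equal
    the common core), or None if the family contains no such r sets."""
    sets = [frozenset(s) for s in family]
    for combo in combinations(sets, r):
        core = frozenset.intersection(*combo)
        if all(a & b == core for a, b in combinations(combo, 2)):
            return [set(s) for s in combo]
    return None

family = [{1, 2, 3}, {1, 2, 4}, {1, 2, 5}, {3, 4, 5}]
print(find_sunflower(family, 3))  # the three sets containing {1, 2}; core is {1, 2}
```

Pairwise-disjoint sets also qualify (empty core), which matches the "no set, every set, or exactly one set" formulation above.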
Graph-Theoretic Techniques for Optimizing NISQ Algorithms (http://hdl.handle.net/10012/20343, 2024-02-15)
Jena, Andrew
Entering the NISQ era, the search for useful yet simple quantum algorithms is perhaps more important now than it will ever be again. In place of quantum walks, the quantum Fourier transform, and asymptotic results about far-term advantages of quantum computation, today's real-world applications involve nitty-gritty details and optimizations that make the most of our limited resources. These priorities pervade the research presented in this thesis, which focuses on combinatorial techniques for optimizing NISQ algorithms.
With variational algorithms directing the discussion, we investigate methods for reducing Hamiltonians, reducing measurement errors, and reducing entangling gates. All three of these reductions bring us closer to demonstrating the utility of quantum devices by improving some of the major bottlenecks of the NISQ era, and all of them do so while rarely leaving the combinatorial framework provided by stabilizer states. The qubit-tapering approach to Hamiltonian simplification that we present was developed independently of the work by Bravyi et al., who discovered how to reduce qubit counts by parallelizing the computation of the ground state. The measurement scheme we describe, AEQuO, is built upon years of research and dozens of articles detailing, comparing, and contrasting a plethora of schemes. The circuit optimization technique we introduce answers a question posed by Adcock et al., and our ideas and proofs are fundamentally grounded in the literature of isotropic systems and the graph-based results that have followed from it.
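A flavour of the measurement-grouping problem: Pauli terms that commute qubit-wise can be estimated with a single measurement circuit, and even a greedy partition of a Hamiltonian's terms into such groups reduces the number of circuits. A minimal sketch of this baseline idea only; AEQuO itself is far more sophisticated than this:

```python
def qubitwise_commute(p, q):
    """Two Pauli strings qubit-wise commute if, on every qubit, their
    single-qubit factors are equal or at least one is the identity."""
    return all(a == b or "I" in (a, b) for a, b in zip(p, q))

def greedy_groups(paulis):
    """Greedily partition Pauli strings into qubit-wise commuting groups;
    each group can then be measured in one circuit."""
    groups = []
    for p in paulis:
        for g in groups:
            if all(qubitwise_commute(p, q) for q in g):
                g.append(p)
                break
        else:
            groups.append([p])
    return groups

# Six three-qubit terms collapse into two measurement groups.
terms = ["ZZI", "ZIZ", "IZZ", "XXI", "XIX", "IXX"]
print(greedy_groups(terms))  # [['ZZI', 'ZIZ', 'IZZ'], ['XXI', 'XIX', 'IXX']]
```

Greedy grouping is order-dependent and ignores error statistics, which is precisely the kind of shortcoming that motivates more careful, graph-based grouping schemes.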