
Journal Information

  • Journal title: Journal of the Association for Computing Machinery
  • Chinese title: 计算机器协会杂志
  • Impact factor: 2.717
  • ISSN: 0004-5411
  • Publisher: -
  • The PCL Theorem: Transactions Cannot Be Parallel, Consistent, and Live
    Abstract: We establish a theorem called the PCL theorem, which states that it is impossible to design a transactional memory algorithm that ensures (1) parallelism, i.e., transactions do not need to synchronize unless they access the same application objects; (2) very little consistency, i.e., a consistency condition, called weak adaptive consistency, introduced here, that is weaker than snapshot isolation, processor consistency, and any other consistency condition stronger than them (such as opacity, serializability, causal serializability, etc.); and (3) very little liveness, i.e., that transactions eventually commit if they run solo.
  • Fast Learning Requires Good Memory: A Time-Space Lower Bound for Parity Learning
    • Author: Raz, Ran
    • Journal: Journal of the Association for Computing Machinery
    • 2019, Issue 1
    Abstract: We prove that any algorithm for learning parities requires either a memory of quadratic size or an exponential number of samples. This proves a recent conjecture of Steinhardt et al. (2016) and shows that for some learning problems, a large storage space is crucial. More formally, in the problem of parity learning, an unknown string x ∈ {0,1}^n was chosen uniformly at random. A learner tries to learn x from a stream of samples (a_1, b_1), (a_2, b_2), ..., where each a_t is uniformly distributed over {0,1}^n and b_t is the inner product of a_t and x, modulo 2. We show that any algorithm for parity learning that uses less than n^2/25 bits of memory requires an exponential number of samples. Previously, there was no non-trivial lower bound on the number of samples needed for any learning problem, even if the allowed memory size is O(n) (where n is the space needed to store one sample). We also give an application of our result in the field of bounded-storage cryptography. We show an encryption scheme that requires a private key of length n, as well as time complexity of n per encryption/decryption of each bit, and is provably and unconditionally secure as long as the attacker uses less than n^2/25 memory bits and the scheme is used at most an exponential number of times. Previous works on bounded-storage cryptography assumed that the memory size used by the attacker is at most linear in the time needed for encryption/decryption.
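To make the streaming model in this abstract concrete, here is a minimal Python sketch of the parity-learning sample stream it describes: a hidden x drawn uniformly from {0,1}^n, and samples (a_t, b_t) with a_t uniform and b_t = <a_t, x> mod 2. The function name and the brute-force check are illustrative only, not taken from the article.

```python
import numpy as np
from itertools import product

def parity_sample_stream(n, num_samples, seed=None):
    """Sample model from the abstract: hidden x ~ Uniform({0,1}^n), then
    samples (a_t, b_t) with a_t ~ Uniform({0,1}^n) and b_t = <a_t, x> mod 2."""
    rng = np.random.default_rng(seed)
    x = rng.integers(0, 2, size=n)            # hidden parity vector
    for _ in range(num_samples):
        a = rng.integers(0, 2, size=n)        # fresh uniformly random query
        yield a, int(a @ x % 2)               # noiseless parity label

# For tiny n, a memory-unbounded learner can store all samples and brute-force x;
# the theorem says a streaming learner with fewer than n^2/25 bits of memory
# cannot succeed with only polynomially many samples.
samples = list(parity_sample_stream(n=8, num_samples=32, seed=0))
survivors = [c for c in product([0, 1], repeat=8)
             if all(int(np.dot(c, a)) % 2 == b for a, b in samples)]
print(len(survivors))   # with high probability, only the hidden x remains
```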
  • Computing the Homology of Basic Semialgebraic Sets in Weak Exponential Time
    Abstract: We describe and analyze an algorithm for computing the homology (Betti numbers and torsion coefficients) of basic semialgebraic sets that works in weak exponential time. That is, outside a set of exponentially small measure in the space of data, the cost of the algorithm is single exponential in the size of the data. All algorithms previously proposed for this problem have a complexity that is doubly exponential (and this is so for almost all data).
  • Engineering with Logic: Rigorous Test-Oracle Specification and Validation for TCP/IP and the Sockets API
    Abstract: Conventional computer engineering relies on test-and-debug development processes, with the behavior of common interfaces described (at best) with prose specification documents. But prose specifications cannot be used in test-and-debug development in any automated way, and prose is a poor medium for expressing complex (and loose) specifications. The TCP/IP protocols and Sockets API are a good example of this: they play a vital role in modern communication and computation, and interoperability between implementations is essential. But what exactly they are is surprisingly obscure: their original development focused on "rough consensus and running code," augmented by prose RFC specifications that do not precisely define what it means for an implementation to be correct. Ultimately, the actual standard is the de facto one of the common implementations, including, for example, the 15,000 to 20,000 lines of the BSD implementation: optimized and multithreaded C code, time dependent, with asynchronous event handlers, intertwined with the operating system, and security critical. This article reports on work done in the Netsem project to develop lightweight mathematically rigorous techniques that can be applied to such systems: to specify their behavior precisely (but loosely enough to permit the required implementation variation) and to test whether the specifications and the implementations correspond, with specifications that are executable as test oracles. We developed post hoc specifications of TCP, UDP, and the Sockets API, both of the service that they provide to applications (in terms of TCP bidirectional stream connections) and of the internal operation of the protocol (in terms of TCP segments and UDP datagrams), together with a testable abstraction function relating the two. These specifications are rigorous, detailed, readable, with broad coverage, and rather accurate. Working within a general-purpose proof assistant (HOL4), we developed language idioms (within higher-order logic) in which to write the specifications: operational semantics with nondeterminism, time, system calls, monadic relational programming, and so forth. We followed an experimental semantics approach, validating the specifications against several thousand traces captured from three implementations (FreeBSD, Linux, and WinXP). Many differences between these were identified, as were a number of bugs. Validation was done using a special-purpose symbolic model checker programmed above HOL4. Having demonstrated that our logic-based engineering techniques suffice for handling real-world protocols, we argue that similar techniques could be applied to future critical software infrastructure at design time, leading to cleaner designs and (via specification-based testing) more robust and predictable implementations. In cases where specification looseness can be controlled, this should be possible with lightweight techniques, without the need for a general-purpose proof assistant, at relatively little cost.
  • Near-Optimal Online Algorithms and Fast Approximation Algorithms for Resource Allocation Problems
    Abstract: We present prior robust algorithms for a large class of resource allocation problems where requests arrive one-by-one (online), drawn independently from an unknown distribution at every step. We design a single algorithm that, for every possible underlying distribution, obtains a 1 − ε fraction of the profit obtained by an algorithm that knows the entire request sequence ahead of time. The factor ε approaches 0 when no single request consumes/contributes a significant fraction of the global consumption/contribution by all requests together. We show that the tradeoff we obtain here, which determines how fast ε approaches 0, is near optimal: we give a nearly matching lower bound showing that the tradeoff cannot be improved much beyond what we obtain. Going beyond the model of a static underlying distribution, we introduce the adversarial stochastic input model, where an adversary, possibly in an adaptive manner, controls the distributions from which the requests are drawn at each step. Placing no restriction on the adversary, we design an algorithm that obtains a 1 − ε fraction of the optimal profit obtainable w.r.t. the worst distribution in the adversarial sequence. Further, if the algorithm is given one number per distribution, namely the optimal profit possible for each of the adversary's distributions, then we design an algorithm that achieves a 1 − ε fraction of the weighted average of the optimal profits of the distributions the adversary picks. In the offline setting, we give a fast algorithm to solve very large linear programs (LPs) with both packing and covering constraints. We give algorithms to approximately solve (within a factor of 1 + ε) the mixed packing-covering problem with O(γ m log(n/δ)/ε^2) oracle calls, where the constraint matrix of this LP has dimension n × m, the success probability of the algorithm is 1 − δ, and γ quantifies how significant a single request is when compared to the sum total of all requests. We discuss implications of our results for several special cases including online combinatorial auctions, network routing, and the adwords problem.
  • Deterministic Edge Connectivity in Near-Linear Time
    Abstract: We present a deterministic algorithm that computes the edge-connectivity of a graph in near-linear time. This is for a simple undirected unweighted graph G with n vertices and m edges. This is the first o(mn)-time deterministic algorithm for the problem. Our algorithm is easily extended to find a concrete minimum edge-cut. In fact, we can construct the classic cactus representation of all minimum cuts in near-linear time. The previous fastest deterministic algorithm, by Gabow from STOC'91, took Õ(m + λ^2 n) time, where λ is the edge connectivity, but λ can be as big as n − 1. Karger presented a randomized near-linear time Monte Carlo algorithm for the minimum cut problem at STOC'96, but the returned cut is only minimum with high probability. Our main technical contribution is a near-linear time algorithm that contracts vertex sets of a simple input graph G with minimum degree δ, producing a multigraph G̃ with Õ(m/δ) edges, which preserves all minimum cuts of G with at least two vertices on each side. In our deterministic near-linear time algorithm, we decompose the problem via low-conductance cuts found using PageRank à la Brin and Page (1998), as analyzed by Andersen, Chung, and Lang at FOCS'06. Normally, such algorithms for low-conductance cuts are randomized Monte Carlo algorithms, because they rely on guessing a good start vertex. However, in our case, we have so much structure that no guessing is needed.
  • Scaling Exponential Backoff: Constant Throughput, Polylogarithmic Channel-Access Attempts, and Robustness
    Abstract: Randomized exponential backoff is a widely deployed technique for coordinating access to a shared resource. A good backoff protocol should, arguably, satisfy three natural properties: (1) it should provide constant throughput, wasting as little time as possible; (2) it should require few failed access attempts, minimizing the amount of wasted effort; and (3) it should be robust, continuing to work efficiently even if some of the access attempts fail for spurious reasons. Unfortunately, exponential backoff has some well-known limitations in two of these areas: it can suffer subconstant throughput under bursty traffic, and it is not robust to adversarial disruption. The goal of this article is to "fix" exponential backoff by making it scalable, particularly focusing on the case where processes arrive in an online, worst-case fashion. We present a relatively simple backoff protocol, RE-BACKOFF, that has, at its heart, a version of exponential backoff. It guarantees expected constant throughput with dynamic process arrivals and requires only an expected polylogarithmic number of access attempts per process. RE-BACKOFF is also robust to periods during which the shared resource is unavailable. If it is unavailable for D time slots, RE-BACKOFF provides the following guarantees. For n packets, the expected number of access attempts for successfully sending a packet is O(log^2(n + D)). For the case of an infinite number of packets, we provide a similar result in terms of the maximum number of processes that are ever in the system concurrently.
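As context for the protocol above, here is a minimal toy simulation of classic randomized exponential backoff on a slotted shared channel, the baseline mechanism the article builds on. It is a sketch only: the slot model, parameter choices, and function names are assumptions of this note, and it does not implement RE-BACKOFF itself.

```python
import random

def simulate_backoff(n_processes, horizon=1 << 14, seed=0):
    """Toy slotted-channel model of classic randomized exponential backoff:
    a slot succeeds only if exactly one process transmits in it; on a
    collision, each colliding process doubles its contention window and
    retries at a uniformly random slot inside the new window."""
    rng = random.Random(seed)
    next_try = [0] * n_processes     # absolute slot of each process's next attempt
    window = [1] * n_processes       # current contention window sizes
    done = [False] * n_processes
    for slot in range(horizon):
        senders = [p for p in range(n_processes) if not done[p] and next_try[p] == slot]
        if len(senders) == 1:
            done[senders[0]] = True                  # lone sender gets the slot
        else:
            for p in senders:                        # collision: back off
                window[p] *= 2
                next_try[p] = slot + 1 + rng.randrange(window[p])
    return sum(done)                                 # packets delivered within the horizon

print(simulate_backoff(n_processes=100))
```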
  • Approximate Counting, the Lovász Local Lemma, and Inference in Graphical Models
    • Author: Moitra, Ankur
    • Journal: Journal of the Association for Computing Machinery
    • 2019, Issue 2
    Abstract: In this article, we introduce a new approach to approximate counting in bounded-degree systems with higher-order constraints. Our main result is an algorithm to approximately count the number of solutions to a CNF formula Φ when the width is logarithmic in the maximum degree. This closes an exponential gap between the known upper and lower bounds. Moreover, our algorithm extends straightforwardly to approximate sampling, which shows that under Lovász Local Lemma-like conditions it is not only possible to find a satisfying assignment, it is also possible to generate one approximately uniformly at random from the set of all satisfying assignments. Our approach is a significant departure from earlier techniques in approximate counting, and is based on a framework to bootstrap an oracle for computing marginal probabilities on individual variables. Finally, we give an application of our results to show that it is algorithmically possible to sample from the posterior distribution in an interesting class of graphical models.
  • Going Higher in First-Order Quantifier Alternation Hierarchies on Words
    Abstract: We investigate quantifier alternation hierarchies in first-order logic on finite words. Levels in these hierarchies are defined by counting the number of quantifier alternations in formulas. We prove that one can decide membership of a regular language in the levels BΣ_2 (finite Boolean combinations of formulas having only one alternation) and Σ_3 (formulas having only two alternations and beginning with an existential block). Our proofs work by considering a deeper problem, called separation, which, once solved for lower levels, allows us to solve membership for higher levels.
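For readers unfamiliar with the hierarchy, the following LaTeX snippet illustrates the shape of formulas at the levels named in the abstract; the concrete example sentence is ours, not taken from the article.

```latex
% A \Sigma_2 sentence over words: one quantifier alternation, starting with an
% existential block ("some position carries an a and every later position carries a b"):
\exists x \,\bigl( a(x) \wedge \forall y\, ( y > x \rightarrow b(y) ) \bigr)
% \Sigma_3 sentences allow one more alternation,
%   \exists \bar{x}\, \forall \bar{y}\, \exists \bar{z}\;\, \varphi(\bar{x},\bar{y},\bar{z})
% with \varphi quantifier-free, and B\Sigma_2 consists of finite Boolean
% combinations of \Sigma_2 sentences.
```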
  • Bar Induction is Compatible with Constructive Type Theory
    Abstract: Powerful yet effective induction principles play an important role in computing, being a paramount component of programming languages, automated reasoning, and program verification systems. The Bar Induction (BI) principle is a fundamental concept of intuitionism, which is equivalent to the standard principle of transfinite induction. In this work, we investigate the compatibility of several variants of BI with Constructive Type Theory (CTT), a dependent type theory in the spirit of Martin-Löf's extensional theory. We first show that CTT is compatible with a BI principle for sequences of numbers. Then, we establish the compatibility of CTT with a more general BI principle for sequences of name-free closed terms. The formalization of the latter principle within the theory involved enriching CTT's term syntax with a limit constructor and showing that consistency is preserved. Furthermore, we provide novel insights regarding BI, such as the non-truncated version of BI on monotone bars being intuitionistically false. These enhancements are carried out formally using the Nuprl proof assistant that implements CTT and the formalization of CTT within the Coq proof assistant presented in previous works.
  • Approaching 3/2 for the s-t-path TSP
    Abstract: We show that there is a polynomial-time algorithm with approximation guarantee 3/2 + ε for the s-t-path TSP, for any fixed ε > 0. It is well known that Wolsey's analysis of Christofides' algorithm also works for the s-t-path TSP with its natural LP relaxation, except for the narrow cuts (in which the LP solution has a value less than two). A fixed optimum tour has either a single edge in a narrow cut (then call the edge and the cut lonely) or at least three (then call the cut busy). Our algorithm "guesses" (by dynamic programming) lonely cuts and edges. Then, we partition the instance into smaller instances and strengthen the LP, requiring a value of at least three for busy cuts. By setting up a k-stage recursive dynamic program, we can compute a spanning tree (V, S) and an LP solution y such that (1/2 + O(2^-k)) · y is in the T-join polyhedron, where T is the set of vertices whose degree in S has the wrong parity.
  • Capacity Upper Bounds for Deletion-type Channels
    • Author: Cheraghchi, Mahdi
    • Journal: Journal of the Association for Computing Machinery
    • 2019, Issue 2
    Abstract: We develop a systematic approach, based on convex programming and real analysis, for obtaining upper bounds on the capacity of the binary deletion channel and, more generally, channels with i.i.d. insertions and deletions. Other than the classical deletion channel, we give special attention to the Poisson-repeat channel introduced by Mitzenmacher and Drinea (IEEE Transactions on Information Theory, 2006). Our framework can be applied to obtain capacity upper bounds for any repetition distribution (the deletion and Poisson-repeat channels corresponding to the special cases of Bernoulli and Poisson distributions). Our techniques essentially reduce the task of proving capacity upper bounds to maximizing a univariate, real-valued, and often concave function over a bounded interval. The corresponding univariate function is carefully designed according to the underlying distribution of repetitions, and the choices vary depending on the desired strength of the upper bounds as well as the desired simplicity of the function (e.g., being only efficiently computable versus having an explicit closed-form expression in terms of elementary, or common special, functions). Among our results, we show the following: (1) The capacity of the binary deletion channel with deletion probability d is at most (1 − d) log φ for d ≥ 1/2 and, assuming that the capacity function is convex, is at most 1 − d log(4/φ) for d < 1/2, where φ = (1 + √5)/2 is the golden ratio. This is the first nontrivial capacity upper bound for any value of d outside the limiting case d → 0 that is fully explicit and proved without computer assistance. (2) We derive the first set of capacity upper bounds for the Poisson-repeat channel. Our results uncover further striking connections between this channel and the deletion channel and suggest, somewhat counter-intuitively, that the Poisson-repeat channel is actually analytically simpler than the deletion channel and may be of key importance to a complete understanding of the deletion channel. (3) We derive several novel upper bounds on the capacity of the deletion channel. All upper bounds are maximums of efficiently computable, and concave, univariate real functions over a bounded domain. In turn, we upper bound these functions in terms of explicit elementary and standard special functions, whose maximums can be found even more efficiently (and sometimes analytically, for example, for d = 1/2).
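The two explicit bounds stated in item (1) of the abstract are easy to evaluate numerically. The short sketch below assumes logarithms taken base 2 (capacity in bits per channel use), which the abstract does not state explicitly; the function name is ours.

```python
import math

PHI = (1 + math.sqrt(5)) / 2   # golden ratio, as in the abstract

def deletion_capacity_upper_bound(d):
    """Evaluate the explicit bounds from item (1) of the abstract:
    C(d) <= (1 - d) * log2(phi)      for d >= 1/2,
    C(d) <= 1 - d * log2(4 / phi)    for d <  1/2 (assumes convexity of C)."""
    if not 0 <= d <= 1:
        raise ValueError("deletion probability must lie in [0, 1]")
    if d >= 0.5:
        return (1 - d) * math.log2(PHI)
    return 1 - d * math.log2(4 / PHI)

# The two expressions agree at d = 1/2 (about 0.347 bits per channel use):
print(deletion_capacity_upper_bound(0.5))
```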
  • Pseudorandomness from Shrinkage
    Abstract: One powerful theme in complexity theory and pseudorandomness in the past few decades has been the use of lower bounds to give pseudorandom generators (PRGs). However, the general results using this hardness vs. randomness paradigm suffer from a quantitative loss in parameters, and hence do not give nontrivial implications for models where we don't know super-polynomial lower bounds but do know lower bounds of a fixed polynomial. We show that when such lower bounds are proved using random restrictions, we can construct PRGs that are essentially best possible without in turn improving the lower bounds. More specifically, say that a circuit family has shrinkage exponent Γ if a random restriction leaving a p fraction of variables unset shrinks the size of any circuit in the family by a factor of p^{Γ+o(1)}. Our PRG uses a seed of length s^{1/(Γ+1)+o(1)} to fool circuits in the family of size s. By using this generic construction, we get PRGs with polynomially small error for the following classes of circuits of size s and with the following seed lengths: (1) for de Morgan formulas, seed length s^{1/3+o(1)}; (2) for formulas over an arbitrary basis, seed length s^{1/2+o(1)}; (3) for read-once de Morgan formulas, seed length s^{0.234...}; (4) for branching programs of size s, seed length s^{1/2+o(1)}. The previous best PRGs known for these classes used seeds of length bigger than n/2 to output n bits, and worked only for size s = O(n) [8].
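The listed seed lengths follow from the generic bound by plugging in the relevant shrinkage exponents; the exponent values shown below are the classically known ones and are our addition, not stated in the abstract.

```latex
% Seed length s^{1/(\Gamma+1)+o(1)} instantiated with known shrinkage exponents:
\text{de Morgan formulas: } \Gamma = 2 \;\Rightarrow\; s^{1/3 + o(1)}, \qquad
\text{formulas over an arbitrary basis, branching programs: } \Gamma = 1 \;\Rightarrow\; s^{1/2 + o(1)}.
```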
  • Exact Algorithms via Monotone Local Search
    Abstract: We give a new general approach for designing exact exponential-time algorithms for subset problems. In a subset problem the input implicitly describes a family of sets over a universe of size n and the task is to determine whether the family contains at least one set. A typical example of a subset problem is Weighted d-SAT. Here, the input is a CNF formula with clauses of size at most d, and an integer W. The universe is the set of variables and the variables have integer weights. The family contains all the subsets S of variables such that the total weight of the variables in S does not exceed W and setting the variables in S to 1 and the remaining variables to 0 satisfies the formula. Our approach is based on "monotone local search," where the goal is to extend a partial solution to a solution by adding as few elements as possible. More formally, in the extension problem, we are also given as input a subset X of the universe and an integer k. The task is to determine whether one can add at most k elements to X to obtain a set in the (implicitly defined) family. Our main result is that a c^k · n^{O(1)}-time algorithm for the extension problem immediately yields a randomized algorithm for finding a solution of any size with running time O((2 − 1/c)^n). In many cases, the extension problem can be reduced to simply finding a solution of size at most k. Furthermore, efficient algorithms for finding small solutions have been extensively studied in the field of parameterized algorithms. Directly applying these algorithms, our theorem yields in one stroke significant improvements over the best known exponential-time algorithms for several well-studied problems, including d-HITTING SET, FEEDBACK VERTEX SET, NODE UNIQUE LABEL COVER, and WEIGHTED d-SAT. Our results demonstrate an interesting and very concrete connection between parameterized algorithms and exact exponential-time algorithms. We also show how to derandomize our algorithms at the cost of a subexponential multiplicative factor in the running time. Our derandomization is based on an efficient construction of a new pseudo-random object that might be of independent interest. Finally, we extend our methods to establish new combinatorial upper bounds and develop enumeration algorithms.
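The randomized reduction behind the main result can be sketched in a few lines: for a target solution size q, sample a random t-subset X of the universe, ask the extension oracle to complete it with k = q − t more elements, and repeat enough times. The code below is a sketch under our own reading of the abstract: the oracle `extends`, the choice of t by balancing repetitions against c^(q−t), and all names are assumptions, and the bookkeeping needed for the exact O((2 − 1/c)^n) bound is omitted.

```python
import math
import random

def monotone_local_search(universe, extends, c, q, rng=random):
    """Sketch of the randomized reduction described in the abstract.
    `extends(X, k)` is an assumed user-supplied oracle for the extension
    problem (can at most k elements be added to X to reach a set in the
    implicit family?), running in c^k * poly(n) time.  If a solution S of
    size q exists, a uniformly random t-subset X lands inside S with
    probability C(q, t) / C(n, t), so about the reciprocal number of
    repetitions succeeds with constant probability."""
    n = len(universe)
    # Pick t to balance repetitions against the oracle's c^(q - t) cost.
    best_t, best_cost = 0, float("inf")
    for t in range(q + 1):
        cost = (math.comb(n, t) / math.comb(q, t)) * (c ** (q - t))
        if cost < best_cost:
            best_t, best_cost = t, cost
    t = best_t
    repetitions = math.ceil(math.comb(n, t) / math.comb(q, t))
    for _ in range(repetitions):
        X = set(rng.sample(list(universe), t))   # random partial solution
        if extends(X, q - t):
            return True
    return False
```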
  • On the Complexity of Hazard-free Circuits
    Abstract: The problem of constructing hazard-free Boolean circuits dates back to the 1940s and is an important problem in circuit design. Our main lower-bound result unconditionally shows the existence of functions whose circuit complexity is polynomially bounded while every hazard-free implementation is provably of exponential size. Previous lower bounds on the hazard-free complexity were only valid for depth-2 circuits. The same proof method yields that every subcubic implementation of Boolean matrix multiplication must have hazards. These results follow from a crucial structural insight: hazard-free complexity is a natural generalization of monotone complexity to all (not necessarily monotone) Boolean functions. Thus, we can apply known monotone complexity lower bounds to find lower bounds on the hazard-free complexity. We also lift these methods from the monotone setting to prove exponential hazard-free complexity lower bounds for non-monotone functions. As our main upper-bound result, we show how to efficiently convert a Boolean circuit into a bounded-bit hazard-free circuit with only a polynomially large blow-up in the number of gates. Previously, the best known method yielded exponentially large circuits in the worst case, so our algorithm gives an exponential improvement. As a side result, we establish the NP-completeness of several hazard detection problems.
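To illustrate what a hazard is, here is the textbook multiplexer example in Kleene's three-valued logic (our example, not taken from the article): with both data inputs at 1 and the select input unstable, every Boolean resolution of the circuit outputs 1, yet the naive circuit's ternary evaluation is unstable, so it has a hazard; adding a redundant consensus term removes it.

```python
# Kleene three-valued logic: 0, 1, and 'u' (unstable/unknown).
U = 'u'

def t_and(x, y):
    if x == 0 or y == 0: return 0
    if x == 1 and y == 1: return 1
    return U

def t_or(x, y):
    if x == 1 or y == 1: return 1
    if x == 0 and y == 0: return 0
    return U

def t_not(x):
    return U if x == U else 1 - x

# Standard 2:1 multiplexer: (s AND a) OR (NOT s AND b).
def mux(s, a, b):
    return t_or(t_and(s, a), t_and(t_not(s), b))

# Hazard-free version: add the redundant consensus term a AND b.
def mux_hazard_free(s, a, b):
    return t_or(mux(s, a, b), t_and(a, b))

print(mux(U, 1, 1))              # 'u' -> the naive circuit has a hazard here
print(mux_hazard_free(U, 1, 1))  # 1   -> the hazard-free version stays stable
```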
  • A Theory of NP-completeness and Ill-conditioning for Approximate Real Computations
    Abstract: We develop a complexity theory for approximate real computations. We first produce a theory for exact computations but with condition numbers. The input size depends on a condition number, which is not assumed known by the machine. The theory admits deterministic and nondeterministic polynomial-time recognizable problems. We prove that P is not NP in this theory if and only if P is not NP in the BSS theory over the reals. Then we develop a theory with weak and strong approximate computations. This theory is intended to model actual numerical computations that are usually performed in floating-point arithmetic. It admits classes P and NP and also an NP-complete problem. We relate the P vs. NP question in this new theory to the classical P vs. NP problem.
  • Bandits and Experts in Metric Spaces
    Abstract: In a multi-armed bandit problem, an online algorithm chooses from a set of strategies in a sequence of trials to maximize the total payoff of the chosen strategies. While the performance of bandit algorithms with a small finite strategy set is well understood, bandit problems with large strategy sets are still a topic of active investigation, motivated by practical applications such as online auctions and web advertisement. The goal of such research is to identify broad and natural classes of strategy sets and payoff functions that enable the design of efficient solutions. In this work, we study a general setting for the multi-armed bandit problem, in which the strategies form a metric space and the payoff function satisfies a Lipschitz condition with respect to the metric. We refer to this problem as the Lipschitz MAB problem. We present a solution for the multi-armed bandit problem in this setting. That is, for every metric space, we define an isometry invariant that bounds from below the performance of Lipschitz MAB algorithms for this metric space, and we present an algorithm that comes arbitrarily close to meeting this bound. Furthermore, our technique gives even better results for benign payoff functions. We also address the full-feedback ("best expert") version of the problem, where after every round the payoffs from all arms are revealed.
  • Infinite-Duration Bidding Games
    Abstract: Two-player games on graphs are widely studied in formal methods, as they model the interaction between a system and its environment. The game is played by moving a token throughout a graph to produce an infinite path. There are several common modes that determine how the players move the token through the graph; e.g., in turn-based games the players alternate turns in moving the token. We study the bidding mode of moving the token, which, to the best of our knowledge, has never been studied in infinite-duration games. The following bidding rule was previously defined and called Richman bidding. Both players have separate budgets, which sum up to 1. In each turn, a bidding takes place: both players submit bids simultaneously, where a bid is legal if it does not exceed the available budget, and the higher bidder pays his bid to the other player and moves the token. The central question studied in bidding games is a necessary and sufficient initial budget for winning the game: a threshold budget in a vertex is a value t ∈ [0, 1] such that if Player 1's budget exceeds t, he can win the game, and if Player 2's budget exceeds 1 − t, he can win the game. Threshold budgets were previously shown to exist in every vertex of a reachability game; they have an interesting connection with random-turn games, a sub-class of simple stochastic games in which the player who moves is chosen randomly. We show the existence of threshold budgets for a qualitative class of infinite-duration games, namely parity games, and a quantitative class, namely mean-payoff games. The key component of the proof is a quantitative solution to strongly connected mean-payoff bidding games, in which we extend the connection with random-turn games to these games and construct explicit optimal strategies for both players.
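For intuition, the previously known reachability case mentioned in the abstract admits a simple characterization from the Richman-games literature: the threshold of a vertex is the average of the smallest and largest thresholds among its successors, with threshold 0 at Player 1's target and 1 at a losing dead end. The sketch below is ours (the recurrence is an assumption drawn from that literature, not restated in the abstract) and does not cover the parity or mean-payoff games the article studies.

```python
def reachability_thresholds(succ, target, num_iters=2000):
    """Fixed-point iteration for threshold budgets in a reachability bidding
    game: Thr(target) = 0, Thr(dead end) = 1, and otherwise
    Thr(v) = (min_{u in succ(v)} Thr(u) + max_{u in succ(v)} Thr(u)) / 2."""
    thr = {v: (0.0 if v == target else 1.0) for v in succ}
    for _ in range(num_iters):
        for v in succ:
            if v != target and succ[v]:
                lo = min(thr[u] for u in succ[v])
                hi = max(thr[u] for u in succ[v])
                thr[v] = (lo + hi) / 2.0
    return thr

# Example: from u the token can go to v or to a losing sink; from v it can go
# to Player 1's target or to the sink.  The classic values come out as
# Thr(v) = 1/2 and Thr(u) = 3/4.
graph = {"target": [], "sink": [], "v": ["target", "sink"], "u": ["v", "sink"]}
print(reachability_thresholds(graph, target="target"))
```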
  • Hierarchical Clustering: Objective Functions and Algorithms
    Abstract: Hierarchical clustering is a recursive partitioning of a dataset into clusters at an increasingly finer granularity. Motivated by the fact that most work on hierarchical clustering was based on providing algorithms, rather than optimizing a specific objective, Dasgupta framed similarity-based hierarchical clustering as a combinatorial optimization problem, where a "good" hierarchical clustering is one that minimizes a particular cost function [23]. He showed that this cost function has certain desirable properties: to achieve optimal cost, disconnected components (namely, dissimilar elements) must be separated at higher levels of the hierarchy, and when the similarity between data elements is identical, all clusterings achieve the same cost. We take an axiomatic approach to defining "good" objective functions for both similarity- and dissimilarity-based hierarchical clustering. We characterize a set of admissible objective functions having the property that when the input admits a "natural" ground-truth hierarchical clustering, the ground-truth clustering has an optimal value. We show that this set includes the objective function introduced by Dasgupta. Equipped with a suitable objective function, we analyze the performance of practical algorithms, as well as develop better and faster algorithms for hierarchical clustering. We also initiate a beyond-worst-case analysis of the complexity of the problem and design algorithms for this scenario.
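For reference, the cost function of Dasgupta that the abstract refers to (cited there as [23]) has the following form; we restate it here from that cited work, so the notation is ours rather than the article's.

```latex
% Dasgupta's similarity-based cost of a hierarchical clustering given by a
% rooted tree T over the data points, with pairwise similarities w_{ij} and
% T[i \vee j] the subtree rooted at the least common ancestor of i and j:
\mathrm{cost}(T) \;=\; \sum_{\{i,j\}} w_{ij}\,\bigl|\,\mathrm{leaves}\!\bigl(T[i \vee j]\bigr)\bigr|,
% and a ``good'' hierarchical clustering is one that minimizes \mathrm{cost}(T).
```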