
Journal Information

  • Journal title:

    IPSJ Transactions

  • Chinese title: IPSJ交易
  • Frequency: 0.867
  • ISSN: 0387-5806
  • Publisher: -
803 results (page 1 of 20)
  • Consistent Subtyping for All
    Abstract: Consistent subtyping is employed in some gradual type systems to validate type conversions. The original definition by Siek and Taha serves as a guideline for designing gradual type systems with subtyping. Polymorphic types à la System F also induce a subtyping relation that relates polymorphic types to their instantiations. However, Siek and Taha's definition is not adequate for polymorphic subtyping. The first goal of this article is to propose a generalization of consistent subtyping that is adequate for polymorphic subtyping and subsumes the original definition by Siek and Taha. The new definition of consistent subtyping provides novel insights with respect to previous polymorphic gradual type systems, which did not employ consistent subtyping. The second goal of this article is to present a gradually typed calculus for implicit (higher-rank) polymorphism that uses our new notion of consistent subtyping. We develop both declarative and (bidirectional) algorithmic versions of the type system. The algorithmic version employs techniques developed by Dunfield and Krishnaswami for higher-rank polymorphism to deal with instantiation. We prove that the new calculus satisfies all static aspects of the refined criteria for gradual typing. We also study an extension of the type system with static and gradual type parameters, in an attempt to support a variant of the dynamic criterion for gradual typing. Assuming a coherence conjecture for the extended calculus, we show that the dynamic gradual guarantee of our source language can be reduced to that of λB, which, at the time of writing, is still an open question. Most of the metatheory of this article, except some manual proofs for the algorithmic type system and extensions, has been mechanically formalized using the Coq proof assistant.
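    A minimal sketch of the Siek–Taha-style structural relation that the article generalizes; the paper's point is precisely that this formulation must be extended for polymorphic types. The type grammar and the Int <: Real base lattice below are illustrative assumptions, not taken from the paper.

```python
from dataclasses import dataclass

class Type: pass

@dataclass(frozen=True)
class Base(Type):
    name: str                           # e.g. "Int", "Real"

@dataclass(frozen=True)
class Dyn(Type): pass                   # the unknown type ?

@dataclass(frozen=True)
class Fun(Type):
    arg: Type
    res: Type

# Illustrative base-type lattice: Int <: Real.
BASE_SUB = {("Int", "Int"), ("Real", "Real"), ("Int", "Real")}

def consistent_subtype(a, b):
    """Structural consistent subtyping: ? relates to everything, and
    functions are contravariant in arguments, covariant in results."""
    if isinstance(a, Dyn) or isinstance(b, Dyn):
        return True
    if isinstance(a, Base) and isinstance(b, Base):
        return (a.name, b.name) in BASE_SUB
    if isinstance(a, Fun) and isinstance(b, Fun):
        return consistent_subtype(b.arg, a.arg) and \
               consistent_subtype(a.res, b.res)
    return False

assert consistent_subtype(Fun(Base("Int"), Dyn()), Fun(Base("Int"), Base("Real")))
assert not consistent_subtype(Fun(Base("Int"), Base("Int")),
                              Fun(Base("Real"), Base("Int")))  # contravariance fails
```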
  • Modular Product Programs
    Abstract: Many interesting program properties like determinism or information flow security are hyperproperties, that is, they relate multiple executions of the same program. Hyperproperties can be verified using relational logics, but these logics require dedicated tool support and are difficult to automate. Alternatively, constructions such as self-composition represent multiple executions of a program by one product program, thereby reducing hyperproperties of the original program to trace properties of the product. However, existing constructions do not fully support procedure specifications, for instance, to derive the determinism of a caller from the determinism of a callee, making verification non-modular. We present modular product programs, a novel kind of product program that permits hyperproperties in procedure specifications and, thus, can reason about calls modularly. We provide a general formalization of our product construction and prove it sound and complete. We demonstrate its expressiveness by applying it to information flow security with advanced features such as declassification and termination-sensitivity. Modular product programs can be verified using off-the-shelf verifiers; we have implemented our approach for both secure information flow and general hyperproperties using the Viper verification infrastructure. Our evaluation demonstrates that modular product programs can be used to prove hyperproperties for challenging examples in reasonable time.
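    The following toy sketch shows plain self-composition, the construction that modular product programs refine: two renamed copies of a hypothetical `program` are composed into one product, turning the 2-run noninterference hyperproperty into an ordinary assertion. The paper's contribution, not shown here, is making procedure calls in such products modular via relational specifications.

```python
# Self-composition: compose two renamed copies of the same program and
# assert that runs agreeing on public (low) inputs agree on public
# outputs. The 2-run hyperproperty (noninterference) becomes an ordinary
# assertion of the single product program, checkable by a standard
# verifier. `program` is a hypothetical example.

def program(low, high):
    # Toy program under analysis; `high` is the secret input.
    return 2 * low + (high - high)

def product_program(low, high1, high2):
    out1 = program(low, high1)          # first copy, variables renamed
    out2 = program(low, high2)          # second copy
    assert out1 == out2, "information flows from high to low output"

product_program(low=3, high1=7, high2=-5)   # passes: output ignores high
```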
  • Behavioural Equivalence via Modalities for Algebraic Effects
    Abstract: The article investigates behavioural equivalence between programs in a call-by-value functional language extended with a signature of (algebraic) effect-triggering operations. Two programs are considered behaviourally equivalent if they enjoy the same behavioural properties. To formulate this, we define a logic whose formulas specify behavioural properties. A crucial ingredient is a collection of modalities expressing effect-specific aspects of behaviour. We give a general theory of such modalities. If two conditions, openness and decomposability, are satisfied by the modalities, then the logically specified behavioural equivalence coincides with a modality-defined notion of applicative bisimilarity, which can be proven to be a congruence by a generalisation of Howe's method. We show that the openness and decomposability conditions hold for several examples of algebraic effects: nondeterminism, probabilistic choice, global store, and input/output.
  • Introduction to the ESOP 2018 Special Issue
    • Author: Ahmed, Amal;
    • Journal: 情報処理学会論文誌
    • 2020, Issue 1
  • Reasoning About a Machine with Local Capabilities: Provably Safe Stack and Return Pointer Management
    Abstract: Capability machines provide security guarantees at the machine level, which makes them an interesting target for secure compilation schemes that provably enforce properties such as control-flow correctness and encapsulation of local state. We provide a formalization of a representative capability machine with local capabilities and study a novel calling convention. We provide a logical relation that semantically captures the guarantees provided by the hardware (a form of capability safety) and use it to prove control-flow correctness and encapsulation of local state. The logical relation is not specific to our calling convention and can be used to reason about arbitrary programs.
  • Feature-Specific Profiling
    Abstract: While high-level languages come with significant readability and maintainability benefits, their performance remains difficult to predict. For example, programmers may unknowingly use language features inappropriately, which causes their programs to run slower than expected. To address this issue, we introduce feature-specific profiling, a technique that reports performance costs in terms of linguistic constructs. Feature-specific profilers help programmers find expensive uses of specific features of their language. We describe the architecture of a profiler that implements our approach, explain prototypes of the profiler for two languages with different characteristics and implementation strategies, and provide empirical evidence for the approach's general usefulness as a performance debugging tool.
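    A minimal sketch of the idea in this abstract, assuming a made-up feature name and a simple timing tally; the actual profilers use carefully chosen low-overhead instrumentation rather than naive wall-clock timers.

```python
import time
from collections import defaultdict
from contextlib import contextmanager

feature_cost = defaultdict(float)       # accumulated cost per feature

@contextmanager
def feature(name):
    # Attribute the wall-clock time of the dynamic extent to `name`.
    start = time.perf_counter()
    try:
        yield
    finally:
        feature_cost[name] += time.perf_counter() - start

# A library author instruments a feature's implementation once; user
# programs are then profiled in terms of that feature. The feature name
# "keyed-sort" is made up for illustration.
def generic_sort(xs, key=None):
    with feature("keyed-sort"):
        return sorted(xs, key=key)

generic_sort(list(range(100_000, 0, -1)), key=lambda x: -x)
for name, cost in sorted(feature_cost.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {cost * 1000:.2f} ms")
```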
  • The Optimal Choice of When to Garbage Collect
    Abstract: We consider the ultimate limits of program-specific garbage collector (GC) performance for real programs. We first characterize the GC schedule optimization problem. Based on this characterization, we develop a linear-time dynamic programming solution that, given a program run and heap size, computes an optimal schedule of collections for a non-generational collector. Using an analysis of a heap object graph of the program, we compute a property of heap objects that we call their pre-birth time. This information enables us to extend the non-generational GC schedule problem to the generational GC case in a way that also admits a dynamic programming solution with cost quadratic in the length of the trace (number of objects allocated). This improves on our previously reported approximately optimal result. We further extend the two-generation dynamic program to any number of generations, allowing other generalizations as well. Our experimental results for two generations on traces from Java programs of the DaCapo benchmark suite show that there is considerable promise to reduce garbage collection costs for some programs by developing program-specific collection policies. For a given space budget, optimal schedules often obtain modest but useful time savings, and for a given time budget, optimal schedules can obtain considerable space savings.
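    A simplified sketch of the non-generational scheduling problem described above, as an O(n^2) dynamic program; the paper's solution is linear-time, and its cost model is more refined than the fixed-plus-live-bytes model assumed here.

```python
# Given per-step allocation and live-heap sizes from a trace, choose
# collection points so occupancy never exceeds `heap_size`, minimizing
# total collection cost (assumed: fixed cost plus cost per live byte).

INF = float("inf")

def optimal_schedule_cost(alloc, live, heap_size, fixed=1.0, per_byte=0.01):
    n = len(alloc)
    prefix = [0] * (n + 1)              # prefix[i] = bytes allocated in steps < i
    for i in range(n):
        prefix[i + 1] = prefix[i] + alloc[i]

    # best[i]: cheapest feasible schedule whose last collection is right
    # after step i (occupancy after collecting at i restarts from live[i]).
    best = [INF] * n
    for i in range(n):
        gc_cost = fixed + per_byte * live[i]
        if prefix[i + 1] <= heap_size:  # i can be the first collection
            best[i] = gc_cost
        for j in range(i):              # previous collection at j
            if live[j] + (prefix[i + 1] - prefix[j + 1]) <= heap_size:
                best[i] = min(best[i], best[j] + gc_cost)
    # the tail after the last collection must also fit in the heap
    return min((best[i] for i in range(n)
                if live[i] + (prefix[n] - prefix[i + 1]) <= heap_size),
               default=INF)

print(optimal_schedule_cost(alloc=[4, 4, 4, 4], live=[2, 3, 2, 1], heap_size=10))
```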
  • Rethinking Incremental and Parallel Pointer Analysis
    Abstract: Pointer analysis is at the heart of most interprocedural program analyses. However, scaling pointer analysis to large programs is extremely challenging. In this article, we study incremental pointer analysis and present a new algorithm for computing the points-to information incrementally (i.e., upon code insertion, deletion, and modification). Underpinned by new observations of incremental pointer analysis, our algorithm significantly advances the state of the art in that it avoids redundant computations and the expensive graph reachability analysis, and preserves the same precision as the corresponding whole-program exhaustive analysis. Moreover, it is parallel within each iteration of fixed-point computation. We have implemented our algorithm, IPA, for Java based on the WALA framework and evaluated its performance extensively on real-world large, complex applications. Experimental results show that IPA achieves speedups of more than 200X over existing incremental algorithms, is two to five orders of magnitude faster than whole-program pointer analysis, and also improves the performance of an incremental data race detector by orders of magnitude. Our IPA implementation is open source and has been adopted by WALA.
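    A toy, insertion-only sketch of the incremental idea: propagate only the delta of points-to facts along existing copy edges instead of recomputing the whole fixed point. IPA's deletion handling, precision-preservation argument, and parallelism are beyond this sketch.

```python
from collections import defaultdict

pts = defaultdict(set)          # variable -> set of abstract objects
copy_edges = defaultdict(set)   # y -> {x | statement "x = y" exists}

def propagate(src):
    # Worklist propagation of only the new facts reachable from `src`.
    work = [src]
    while work:
        y = work.pop()
        for x in copy_edges[y]:
            new = pts[y] - pts[x]
            if new:
                pts[x] |= new
                work.append(x)

def insert_alloc(x, obj):       # code insertion: x = new Obj()
    if obj not in pts[x]:
        pts[x].add(obj)
        propagate(x)

def insert_copy(x, y):          # code insertion: x = y
    copy_edges[y].add(x)
    propagate(y)

insert_alloc("a", "o1")
insert_copy("b", "a")
insert_alloc("a", "o2")         # only the delta {o2} flows to b
print(pts["b"])                 # {'o1', 'o2'}
```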
  • Dynamic Deadlock Verification for General Barrier Synchronisation
    Abstract: We present Armus, a verification tool for dynamically detecting or avoiding barrier deadlocks. The core design of Armus is based on phasers, a generalisation of barriers that supports split-phase synchronisation, dynamic membership, and optional waits. This allows Armus to handle the key barrier synchronisation patterns found in modern languages and libraries. We implement Armus for X10 and Java, giving the first sound and complete barrier deadlock verification tools in these settings. Armus introduces a novel event-based graph model of barrier concurrency constraints that distinguishes task-event and event-task dependencies. Decoupling these two kinds of dependencies facilitates the verification of distributed barriers with dynamic membership, a challenging feature of X10. Further, our base graph representation can be dynamically switched between a task-to-task model, the Wait-For Graph (WFG), and an event-to-event model, the State Graph (SG), to improve the scalability of the analysis. Formally, we show that the verification is sound and complete with respect to the occurrence of deadlock in our core phaser language, and that switching graph representations preserves the soundness and completeness properties. These results are machine checked with the Coq proof assistant. Practically, we evaluate the runtime overhead of our implementations using three benchmark suites in local and distributed scenarios. Regarding deadlock detection, distributed scenarios show negligible overheads and local scenarios show overheads below 1.15x. Deadlock avoidance is more demanding, and highlights the potential gains from dynamic graph selection. In one benchmark scenario, the runtime overheads are 1.8x for dynamic selection, 2.6x for SG-static selection, and 5.9x for WFG-static selection.
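    A minimal sketch of deadlock detection in the task-to-task Wait-For Graph (WFG) model mentioned above: a cycle among blocked tasks signals a barrier deadlock. Armus's event-based model and dynamic WFG/SG switching are not shown.

```python
# Toy deadlock check on a Wait-For Graph: an edge t -> u means task t is
# blocked on a barrier phase that cannot advance until u arrives; a cycle
# among blocked tasks is a barrier deadlock.

def has_deadlock(wfg):
    WHITE, GREY, BLACK = 0, 1, 2
    color = {t: WHITE for t in wfg}

    def dfs(t):
        color[t] = GREY                 # t is on the current DFS path
        for u in wfg[t]:
            if color[u] == GREY:        # back edge: a wait-for cycle
                return True
            if color[u] == WHITE and dfs(u):
                return True
        color[t] = BLACK
        return False

    return any(color[t] == WHITE and dfs(t) for t in wfg)

print(has_deadlock({"t1": {"t2"}, "t2": {"t1"}}))   # True: mutual wait
print(has_deadlock({"t1": {"t2"}, "t2": set()}))    # False
```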
  • Rigorous Estimation of Floating-Point Round-Off Errors with Symbolic Taylor Expansions
    Abstract: Rigorous estimation of maximum floating-point round-off errors is an important capability central to many formal verification tools. Unfortunately, available techniques for this task often provide very pessimistic overestimates, causing unnecessary verification failure. We have developed a new approach called Symbolic Taylor Expansions that avoids these problems, and implemented a new tool called FPTaylor embodying this approach. Key to our approach is the use of rigorous global optimization, instead of the more familiar interval arithmetic, affine arithmetic, and/or SMT solvers. FPTaylor emits per-instance analysis certificates in the form of HOL Light proofs that can be machine checked. In this article, we present the basic ideas behind Symbolic Taylor Expansions in detail. We also survey and thoroughly evaluate six tool families, namely Gappa (two tool options studied), Fluctuat, PRECiSA, Real2Float, Rosa, and FPTaylor (two tool options studied), on 24 examples, running on the same machine and taking care to find the best options for running each of these tools. This study demonstrates that FPTaylor estimates round-off errors within much tighter bounds compared to other tools on a significant number of case studies. We also release FPTaylor along with our benchmarks, thus contributing to future studies and tool development in this area.
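    A hand-worked instance of the first-order round-off reasoning underlying the approach, for r = x*y + z under the standard model fl(a op b) = (a op b)(1 + e) with |e| <= u. The naive interval maximisation below is an assumption for illustration; FPTaylor's contribution is replacing it with rigorous global optimization of the symbolic Taylor form.

```python
# First-order round-off bound for r = x*y + z: to first order,
#   |err| <= (|x*y| + |x*y + z|) * u.
# Each term is maximised over the input intervals with naive interval
# arithmetic, giving a rigorous but loose bound.

U = 2.0 ** -53                          # unit roundoff for binary64

def imul(a, b):
    ps = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
    return (min(ps), max(ps))

def iadd(a, b):
    return (a[0] + b[0], a[1] + b[1])

def mag(a):                             # max |v| over the interval
    return max(abs(a[0]), abs(a[1]))

def roundoff_bound(x, y, z):
    xy = imul(x, y)
    r = iadd(xy, z)
    return (mag(xy) + mag(r)) * U

# x, y, z all in [1, 2]: |x*y| <= 4, |x*y + z| <= 6, so the bound is 10u.
print(roundoff_bound((1.0, 2.0), (1.0, 2.0), (1.0, 2.0)))
```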
  • Practical Subtyping for Curry-Style Languages
    Abstract: We present a new, syntax-directed framework for Curry-style type systems with subtyping. It supports a rich set of features, and allows for a reasonably simple theory and implementation. The system we consider has sum and product types, universal and existential quantifiers, and inductive and coinductive types. The latter two may carry size invariants that can be used to establish the termination of recursive programs. For example, the termination of quicksort can be derived by showing that partitioning a list does not increase its size. The system deals with complex programs involving mixed induction and coinduction, or even mixed polymorphism and (co-)induction. One of the key ideas is to separate the notion of size from recursion. We do not check the termination of programs directly, but rather show that their (circular) typing proofs are well-founded. Termination is then obtained using a standard (semantic) normalisation proof. To demonstrate the practicality of the system, we provide an implementation accepting all the examples discussed in the article.
  • Analysis and Optimization of Task Granularity on the Java Virtual Machine
    Abstract: Task granularity, i.e., the amount of work performed by parallel tasks, is a key performance attribute of parallel applications. On the one hand, fine-grained tasks (i.e., small tasks carrying out few computations) may introduce considerable parallelization overheads. On the other hand, coarse-grained tasks (i.e., large tasks performing substantial computations) may not fully utilize the available CPU cores, leading to missed parallelization opportunities. In this article, we provide a better understanding of task granularity for task-parallel applications running on a single Java Virtual Machine in a shared-memory multicore. We present a new methodology to accurately and efficiently collect the granularity of each executed task, implemented in a novel profiler (available open-source) that collects carefully selected metrics from the whole system stack with low overhead, and helps developers locate performance and scalability problems. We analyze task granularity in the DaCapo, ScalaBench, and Spark Perf benchmark suites, revealing inefficiencies related to fine-grained and coarse-grained tasks in several applications. We demonstrate that the collected task-granularity profiles are actionable by optimizing task granularity in several applications, achieving speedups of up to 5.90x. Our results highlight the importance of analyzing and optimizing task granularity on the Java Virtual Machine.
  • Higher-Order Demand-Driven Program Analysis
    Abstract: Developing accurate and efficient program analyses for languages with higher-order functions is known to be difficult. Here we define a new higher-order program analysis, Demand-Driven Program Analysis (DDPA), which extends well-known demand-driven lookup techniques found in first-order program analyses to higher-order programs. This task presents several unique challenges to obtain good accuracy, including the need for a new method for demand-driven lookup of non-local variable values. DDPA is flow- and context-sensitive and provably polynomial-time. To efficiently implement DDPA, we develop a novel pushdown automaton metaprogramming framework, the Pushdown Reachability automaton. The analysis is formalized and proved sound, and an implementation is described.
  • Combinatorial Register Allocation and Instruction Scheduling
    Abstract: This article introduces a combinatorial optimization approach to register allocation and instruction scheduling, two central compiler problems. Combinatorial optimization has the potential to solve these problems optimally and to exploit processor-specific features readily. Our approach is the first to leverage this potential in practice: it captures the complete set of program transformations used in state-of-the-art compilers, scales to medium-sized functions of up to 1,000 instructions, and generates executable code. This level of practicality is reached by using constraint programming, a particularly suitable combinatorial optimization technique. Unison, the implementation of our approach, is open source, used in industry, and integrated with the LLVM toolchain. An extensive evaluation confirms that Unison generates better code than LLVM while scaling to medium-sized functions. The evaluation uses systematically selected benchmarks from MediaBench and SPEC CPU2006 and different processor architectures (Hexagon, ARM, MIPS). Mean estimated speedup ranges from 1.1% to 10% and mean code size reduction ranges from 1.3% to 3.8% for the different architectures. A significant part of this improvement is due to the integrated nature of the approach. Executing the generated code on Hexagon confirms that the estimated speedup results in actual speedup. Given a fixed time limit, Unison optimally solves functions of up to 946 instructions, nearly an order of magnitude larger than previous approaches. The results show that our combinatorial approach can be applied in practice to trade compilation time for code quality beyond the usual compiler optimization levels, identify improvement opportunities in heuristic algorithms, and fully exploit processor-specific features.
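    A toy sketch of the "allocation as constraint satisfaction" framing: exhaustive search for a register assignment respecting interference constraints. Unison's actual model is far richer (joint scheduling, spilling, coalescing, processor-specific constraints) and is solved with a real constraint-programming solver.

```python
from itertools import product

# Graph coloring stated as constraint satisfaction: assign each temporary
# a register such that no two interfering (simultaneously live)
# temporaries share one. Brute force stands in for a constraint solver.

def allocate(temps, interference, registers):
    for assign in product(registers, repeat=len(temps)):
        env = dict(zip(temps, assign))
        if all(env[a] != env[b] for a, b in interference):
            return env                  # first satisfying assignment
    return None                         # unsatisfiable: would need a spill

temps = ["t1", "t2", "t3"]
interference = [("t1", "t2"), ("t2", "t3")]  # overlapping live ranges
print(allocate(temps, interference, ["r0", "r1"]))
# {'t1': 'r0', 't2': 'r1', 't3': 'r0'} -- t1 and t3 can share r0
```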
  • Failure Recovery in Resilient X10
    Abstract: Cloud computing has made the resources needed to execute large-scale in-memory distributed computations widely available. Specialized programming models, e.g., MapReduce, have emerged to offer transparent fault tolerance and fault recovery for specific computational patterns, but they sacrifice generality. In contrast, the Resilient X10 programming language adds failure containment and failure awareness to a general-purpose, distributed programming language. A Resilient X10 application spans a number of places. Its formal semantics precisely specify how it continues executing after a place failure. Thanks to failure awareness, the X10 programmer can in principle build redundancy into an application to recover from failures. In practice, however, correctness is elusive, as redundancy and recovery are often complex programming tasks. This article further develops Resilient X10 to shift the focus from failure awareness to failure recovery, from both a theoretical and a practical standpoint. We rigorously define the distinction between recoverable and catastrophic failures. We revisit the happens-before invariance principle and its implementation. We shift most of the burden of redundancy and recovery from the programmer to the runtime system and standard library. We make it easy to protect critical data from failure using resilient stores, and harness elasticity (dynamic place creation) to persist not just the data but also its spatial distribution. We demonstrate the flexibility and practical usefulness of Resilient X10 by building several representative high-performance in-memory parallel application kernels and frameworks. These codes are 10x to 25x larger than previous Resilient X10 benchmarks. For each application kernel, the average runtime overhead of resiliency is less than 7%. By comparing application kernels written in the Resilient X10 and Spark programming models, we demonstrate that Resilient X10's more general programming model can enable significantly better application performance for resilient in-memory distributed computations.
  • PYE: A Framework for Precise-Yet-Efficient Just-In-Time Analyses for Java Programs
    Abstract: Languages like Java and C# follow a two-step process of compilation: static compilation and just-in-time (JIT) compilation. As the time spent in JIT compilation adds to the execution time of the application, JIT compilers typically sacrifice the precision of program analyses for efficiency. The alternative of performing the analysis for the whole program statically ignores the analysis of libraries (available only at runtime), and thereby generates imprecise results. To address these issues, in this article, we propose a two-step (static+JIT) analysis framework called precise-yet-efficient (PYE) that helps generate precise analysis results at runtime at a very low cost. PYE achieves the twin objectives of precision and performance during JIT compilation by using a two-pronged approach: (i) It performs expensive analyses during static compilation, while accounting for the unavailability of the runtime libraries by generating partial results, in terms of conditional values, for the input application. (ii) During JIT compilation, PYE resolves the conditions associated with these values, using the pre-computed conditional values for the libraries, to generate the final results. We have implemented the static and the runtime components of PYE in the Soot optimization framework and the OpenJDK HotSpot Server Compiler (C2), respectively. We demonstrate the usability of PYE by instantiating it to perform two context-, flow-, and field-sensitive heap-based analyses: (i) points-to analysis for null-dereference-check elimination; and (ii) escape analysis for synchronization elimination. We evaluate these instantiations against their corresponding state-of-the-art implementations in C2 over a wide range of benchmarks. The extensive evaluation results show that our strategy works quite well and fulfills both the promises it makes: enhanced precision while maintaining efficiency during JIT compilation.
  • Static Identification of Injection Attacks in Java
    Abstract: The most dangerous security-related software errors, according to the OWASP Top Ten 2017 list, affect web applications. They are potential injection attacks that exploit user-provided data to execute undesired operations: database access and updates (SQL injection); generation of malicious web pages (cross-site scripting injection); redirection to user-specified web pages (redirect injection); execution of OS commands and arbitrary scripts (command injection); loading of user-specified, possibly heavy or dangerous classes at run time (reflection injection); access to arbitrary files on the file system (path-traversal); and storing user-provided data into heap regions normally assumed to be shielded from the outside world (trust boundary violation). All these attacks exploit the same weakness: unconstrained propagation of data from sources that the user of a web application controls into sinks whose activation might trigger dangerous operations. Although web applications are written in a variety of languages, Java remains a frequent choice, in particular for banking applications, where security has tangible relevance. This article defines a unified, sound protection mechanism against such attacks, based on the identification of all possible explicit flows of tainted data in Java code. Such flows can be arbitrarily complex, passing through dynamically allocated data structures in the heap. The analysis is based on abstract interpretation and is interprocedural, flow-sensitive, and context-sensitive. Its notion of taint applies to reference (non-primitive) types dynamically allocated in the heap and is object-sensitive and field-sensitive. The analysis works by translating the program into Boolean formulas that model all possible data flows. Its implementation, within the Julia analyzer for Java and Android, found injection security vulnerabilities in the Internet banking service and in the customer relationship management of large Italian banks, as well as in a set of open-source third-party applications. It found the command injection at the origin of the 2017 Equifax data breach, one of the worst data breaches ever. For objective, repeatable results, this article also evaluates the implementation on two open-source security benchmarks: the Juliet Suite and the OWASP Benchmark for the automatic comparison of static analyzers for cybersecurity. We compared this technique against more than 10 other static analyzers, both free and commercial. The result of these experiments is that ours is the only analysis for injection that is sound (up to well-stated limitations such as multithreading and native code) and works on industrial code, and it is also much more precise than other tools.
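    A toy flow-sensitive taint propagation over straight-line code, illustrating the source-to-sink flows the analysis tracks; the source/sink/sanitizer names below are hypothetical stand-ins, and the paper's analysis is interprocedural, heap-based, object- and field-sensitive via Boolean formulas.

```python
# Taint tracking over straight-line three-address code: data from a
# source stays tainted through assignments until sanitized; reaching a
# sink while tainted is flagged as a potential injection.

SOURCES = {"getParameter"}          # user-controlled inputs (illustrative)
SINKS = {"executeQuery"}            # e.g. SQL execution (illustrative)
SANITIZERS = {"escapeSql"}          # (illustrative)

def analyze(stmts):
    tainted, alarms = set(), []
    for i, (dst, op, args) in enumerate(stmts):
        if op in SOURCES:
            tainted.add(dst)
        elif op in SANITIZERS:
            tainted.discard(dst)
        elif op in SINKS:
            if any(a in tainted for a in args):
                alarms.append(f"line {i}: tainted data reaches {op}")
        else:                       # plain assignment / concatenation
            if any(a in tainted for a in args):
                tainted.add(dst)
            else:
                tainted.discard(dst)
    return alarms

prog = [
    ("user", "getParameter", ["request"]),
    ("query", "concat", ["prefix", "user"]),
    ("_", "executeQuery", ["query"]),       # injection: flagged
]
print(analyze(prog))
```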
  • ML, Visibly Pushdown Class Memory Automata, and Extended Branching Vector Addition Systems with States
    Abstract: We prove that the observational equivalence problem for a finitary fragment of the programming language ML is recursively equivalent to the reachability problem for extended branching vector addition systems with states (EBVASS). This result has two natural and independent parts. We first prove that the observational equivalence problem is equivalent to the emptiness problem for a new class of class memory automata equipped with a visibly pushdown stack, called Visibly Pushdown Class Memory Automata (VPCMA). Our proof uses the fully abstract game semantics of the language. We then prove that the VPCMA emptiness problem is equivalent to the reachability problem for EBVASS. The results of this article complete our programme to give an automata classification of the ML types with respect to the observational equivalence problem for closed terms.
  • CSS Minification via Constraint Solving
    Abstract: Minification is a widely accepted technique that aims at reducing the size of the code transmitted over the web. This article concerns the problem of semantic-preserving minification of Cascading Style Sheets (CSS), the de facto language for styling web documents, based on merging similar rules. The cascading nature of CSS makes the semantics of a CSS file sensitive to the ordering of rules in the file. To automatically identify rule-merging opportunities that best minimise file size, we reduce the rule-merging problem to a problem concerning "CSS-graphs," i.e., node-weighted bipartite graphs with a dependency ordering on the edges, where weights capture the number of characters. Constraint solving plays a key role in our approach. Transforming a CSS file into a CSS-graph problem requires us to extract the dependency ordering on the edges (an NP-hard problem), which in turn requires us to solve the selector intersection problem. To this end, we provide the first full formalisation of CSS3 selectors (the most stable version of CSS) and reduce their selector intersection problem to satisfiability of quantifier-free integer linear arithmetic, for which highly optimised SMT solvers are available. To solve the above NP-hard graph optimisation problem, we show how Max-SAT solvers can be effectively employed. We have implemented our rule-merging algorithm and tested it against approximately 70 real-world examples (including examples from each of the top 20 most popular websites). We also used our benchmarks to compare our tool against six well-known minifiers (which implement other optimisations). Our experiments suggest that our tool produces larger savings, and a substantially better minification rate when used together with these minifiers.
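    A toy instance of the rule-merging opportunity the article optimises: two rules with identical declarations merge into one grouped selector when no intervening rule creates an ordering dependency. The `may_depend` check below is a deliberately crude stub; deciding it for real CSS3 selectors is the NP-hard part the paper reduces to SMT.

```python
# Merge rule i into rule j's position when their declarations are equal
# and no rule in between both overrides a shared property (the ordering
# dependency the paper extracts via selector intersection).

def may_depend(rule_between, props):
    # Stub: conservatively assume a dependency if any property is shared.
    return any(p in dict(rule_between["decls"]) for p, _ in props)

def merge_rules(rules, i, j):
    a, b = rules[i], rules[j]
    if a["decls"] != b["decls"]:
        return None
    if any(may_depend(r, a["decls"]) for r in rules[i + 1:j]):
        return None                     # merging would reorder unsafely
    merged = {"sel": a["sel"] + ", " + b["sel"], "decls": a["decls"]}
    return rules[:i] + rules[i + 1:j] + [merged] + rules[j + 1:]

rules = [
    {"sel": ".btn",  "decls": [("color", "red")]},
    {"sel": "p",     "decls": [("margin", "0")]},
    {"sel": ".link", "decls": [("color", "red")]},
]
print(merge_rules(rules, 0, 2))  # merges .btn and .link into ".btn, .link"
```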
  • A Classical Sequent Calculus with Dependent Types
    Abstract: Dependent types are a key feature of the proof assistants based on the Curry-Howard isomorphism. It is well known that this correspondence can be extended to classical logic by enriching the language of proofs with control operators. However, they are known to misbehave in the presence of dependent types, unless dependencies are restricted to values. Moreover, while sequent calculi naturally support continuation-passing-style interpretations, there is no such presentation of a language with dependent types. The main achievement of this article is to give a sequent calculus presentation of a call-by-value language with a control operator and dependent types, and to justify its soundness through a continuation-passing-style translation. We start from the call-by-value version of the λμμ̃-calculus. We design a minimal language with a value restriction and a type system that includes a list of explicit dependencies to maintain type safety. We then show how to relax the value restriction and introduce delimited continuations to directly prove the consistency by means of a continuation-passing-style translation. Finally, we relate our calculus to a similar system by Lepigre and present a methodology to transfer properties from this system to our own.