
Journal Information

  • Journal title: Journal of network and computer applications
  • Chinese title: 网络与计算机应用杂志
  • Frequency: 1.111
  • ISSN: 1084-8045
  • Publisher: -
2,358 results
  • An empirical study of the functions and configurations of open-source Capture the Flag (CTF) environments
    Abstract: Capture the Flag (CTF) is a computer security competition that is generally used to give participants experience in securing (virtual) machines and responding to cyber attacks. CTF contests have been getting larger and receive many participants every year (e.g., DEFCON, NYU-CSAW). CTF competitions are typically hosted in virtual environments, specifically set up to fulfill the goals and scenarios of the CTF. This article investigates the underlying infrastructures and CTF environments, specifically open-source CTF environments. A systematic review is conducted to assess functionality and game configuration in CTF environments whose source code is available on the web (i.e., open-source software). In particular, out of 28 CTF platforms, we found 12 open-source CTF environments. As four platforms were not installable for several reasons, we finally examined 8 open-source CTF environments (PicoCTF, FacebookCTF, HackTheArch, WrathCTF, Pedagogic-CTF, RootTheBox, CTFd and Mellivora) regarding their features and functions for hosting CTFs (e.g., scoring, statistics or supported challenge types) and providing game configurations (e.g., multiple flags, points, hint penalties). Surprisingly, while many platforms provide similar base functionality, game configurations varied strongly between the platforms. For example, hint penalties, time frames for solving challenges, a limited number of attempts, or dependencies between challenges are game options that might be relevant for potential CTF organizers and for choosing a technology. This article contributes to the general understanding of CTF software configurations and technology design and implementation. Potential CTF organizers and participants may use this as a reference for challenge configurations and technology utilization. Based on our analysis, we would like to further review commercial and other platforms in order to establish a gold standard for CTF environments and further contribute to a better understanding of CTF design and development.
  • A hybrid machine learning approach for malicious behaviour detection and recognition in cloud computing
    Abstract: The rapid growth of new emerging computing technologies has encouraged many organizations to outsource their data and computational requirements. Such services are expected to always provide security principles such as confidentiality, availability and integrity; therefore, a highly secure platform is one of the most important aspects of Cloud-based computing environments. A considerable improvement over traditional security strategies is achieved by understanding how malware behaves over the entire behavioural space. In this paper, we propose a new approach to improve the capability of Cloud service providers to model users' behaviours. We applied a particle swarm optimization-based probabilistic neural network (PSO-PNN) for the detection and recognition process. In the first module of the recognition process, we meaningfully converted the users' behaviours to an understandable format and then classified and recognized the malicious behaviours by using a multi-layer neural network. We took advantage of the UNSW-NB15 dataset to validate the proposed solution by characterizing different types of malicious behaviours exhibited by users. Evaluation of the experimental results shows that the proposed method is promising for use in security monitoring and recognition of malicious behaviours.
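    The abstract's recognition module combines a probabilistic neural network (PNN) with particle swarm optimization. The following is a minimal sketch, not the authors' implementation, of a plain Gaussian-kernel PNN classifier; in the paper's setting, PSO would tune the smoothing parameter sigma, and the feature vectors would come from UNSW-NB15 behaviours rather than the toy values used here.

```python
import numpy as np

def pnn_predict(X_train, y_train, X_test, sigma=1.0):
    """Classify test samples with a probabilistic neural network (Parzen window).
    For each class, average Gaussian kernels centred on that class's training
    samples and pick the class with the highest estimated density."""
    classes = np.unique(y_train)
    preds = []
    for x in X_test:
        # Squared Euclidean distance from x to every training sample.
        d2 = np.sum((X_train - x) ** 2, axis=1)
        kernel = np.exp(-d2 / (2.0 * sigma ** 2))
        # Class score = mean kernel response over that class's samples.
        scores = [kernel[y_train == c].mean() for c in classes]
        preds.append(classes[int(np.argmax(scores))])
    return np.array(preds)

# Toy behavioural feature vectors (e.g., per-flow statistics) -- hypothetical data.
X_train = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])
y_train = np.array([0, 0, 1, 1])          # 0 = benign, 1 = malicious
X_test = np.array([[0.15, 0.18], [0.85, 0.92]])
print(pnn_predict(X_train, y_train, X_test, sigma=0.3))  # expected: [0 1]
```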
  • Efficient computation of loop-free alternates
    Abstract: Network failures may lead to serious packet loss and degrade network performance. Therefore, loop-free alternates (LFA) have been widely deployed by many Internet service providers to cope with single network component failures in large Internet backbones. However, the efficiency of LFA has not been sufficiently studied. Some existing methods have extremely large computational overhead, and their computational complexity is linear in the average node degree of the network. The current methods consume a large amount of central processing unit resources, thereby aggravating the burden on the router. To improve routing resilience without introducing significant extra overhead, this study proposes an incremental alternate computation with negative augmentation algorithm (IAC), which is based on the incremental shortest path first algorithm. First, IAC turns the problem of quickly implementing LFA into efficiently calculating the minimum cost from all of its neighbors to all other network nodes on the shortest path tree rooted at the computing node. Then, several theorems for calculating the cost are presented and their correctness is validated. Finally, we evaluate IAC through simulations with real-world and generated topologies. Compared with TBFH, DMPA, and DMPA-e, which are algorithms optimized for limited scenarios, IAC finds approximately 50% more available alternates, is more than three times faster, and provides node protection capabilities that TBFH, DMPA, and DMPA-e cannot provide. These advantages make IAC a good candidate for traditional telecommunication networks and emerging complex networks that require failure repair and load balancing in a highly dynamic environment.
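    The cost computation described above serves the standard loop-free condition of LFA (RFC 5286): a neighbour N of source S is a usable alternate towards destination D if dist(N, D) < dist(N, S) + dist(S, D). A minimal sketch of that check is shown below; the IAC algorithm itself (the incremental computation of the costs) is not reproduced, and the distance table is hypothetical.

```python
def is_loop_free_alternate(dist, s, n, d):
    """RFC 5286 basic loop-free condition:
    dist(N, D) < dist(N, S) + dist(S, D)
    guarantees that traffic redirected to neighbour N will not loop back to S."""
    return dist[n][d] < dist[n][s] + dist[s][d]

def is_node_protecting(dist, n, d, primary_next_hop):
    """Stricter node-protection condition:
    dist(N, D) < dist(N, E) + dist(E, D), where E is the primary next hop."""
    e = primary_next_hop
    return dist[n][d] < dist[n][e] + dist[e][d]

# Hypothetical all-pairs shortest-path distances.
dist = {
    "S": {"S": 0, "N": 1, "E": 1, "D": 2},
    "N": {"S": 1, "N": 0, "E": 2, "D": 2},
    "E": {"S": 1, "E": 0, "N": 2, "D": 1},
    "D": {"S": 2, "N": 2, "E": 1, "D": 0},
}
print(is_loop_free_alternate(dist, "S", "N", "D"))  # True: 2 < 1 + 2
```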
  • CTRA: a complex-terrain region-avoidance charging algorithm for the Smart World
    Abstract: The emergence of the "Smart World" is set to enhance daily necessities with abilities of sensing, communication, computation and intelligence so that many tasks and processes can be simplified and made more efficient and enjoyable. The realization of the Smart World requires information sensing as the foundation of future applications, so wireless sensor networks (WSNs) are in high demand for deployment. However, the limited energy of nodes has always restricted the application of WSNs. Wireless rechargeable sensor networks (WRSNs) introduce mobile chargers into WSNs to provide energy to low-powered sensor nodes through wireless charging technologies, thus solving energy problems effectively. Many studies have designed charging algorithms for normal environments without considering terrain complexity. Complex terrain causes difficulties in the movement and path planning of the charger. In this paper, we propose a complex terrain region-avoidance charging algorithm (CTRA) for WRSNs. The algorithm aims to optimize node energy replenishment, minimize the energy consumption of nodes in complex terrain and reduce the frequency with which the charger visits complex terrain. The CTRA comprises three parts: the network is divided into small charging regions based on terrain complexity; nodes are classified into three classes based on the terrain where they are located, and different data routing protocols are designed for different classes of nodes in order to reduce the energy consumption of nodes in complex terrain areas; and three charging schemes are designed for the mobile charger in different terrain areas. Simulation experiments indicate that the charging efficiency of the CTRA is greatly improved, and the number of dead nodes is also reduced effectively.
  • On-demand resource provisioning based on load estimation and service expenditure in edge cloud environments
    Abstract: The trend of the Internet of Everything is deepening, and the amount of data that needs to be processed in the network is growing. Edge cloud technology can process data at the edge of the network, lowering the burden on the data center. When the load of the edge cloud is large, it is necessary to request more resources from the cloud service provider, and the resource billing granularity affects the cost. When the load is small, releasing idle node resources back to the cloud service provider can lower the service expenditure. To this end, an on-demand resource provisioning model based on service expenditure is proposed. The demand for resources needs to be estimated in advance, so a load estimation model based on the ARIMA model and a BP neural network is proposed. The model can estimate the load from historical data and reduce the estimation error. Before releasing node resources, the user data on the node need to be migrated to other working nodes to ensure that no user data are lost. When selecting the migration target, three metrics are considered: cluster load balancing, migration time and migration cost. A data migration model based on load balancing is proposed. Experimental comparisons show that the proposed methods can effectively reduce service expenditure and keep the cluster load-balanced.
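    The abstract does not detail how the ARIMA model and the BP neural network are combined; a common hybrid arrangement, shown here purely as an illustrative sketch, lets ARIMA capture the linear component of the load series while a neural network (an MLP standing in for the BP network) learns the residuals. The window size, model order and sample series are assumptions.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

def hybrid_forecast(load_history, order=(2, 1, 1), window=5):
    """Forecast the next load value: ARIMA captures the linear trend,
    an MLP (stand-in for a BP network) corrects the residuals."""
    arima = ARIMA(load_history, order=order).fit()
    linear_pred = arima.predict(start=1, end=len(load_history) - 1)
    residuals = load_history[1:] - linear_pred

    # Sliding windows of past residuals -> next residual.
    X = np.array([residuals[i:i + window] for i in range(len(residuals) - window)])
    y = residuals[window:]
    mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)

    next_linear = arima.forecast(steps=1)[0]
    next_residual = mlp.predict(residuals[-window:].reshape(1, -1))[0]
    return next_linear + next_residual

# Hypothetical per-interval CPU load samples from an edge node.
history = np.array([30, 32, 35, 33, 36, 40, 42, 41, 45, 48, 50, 52, 51, 55, 58], dtype=float)
print(round(hybrid_forecast(history), 2))
```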
  • A robust SDN-based multimedia traffic management system based on BRNN QoE estimation patterns and models
    Abstract: Nowadays, network infrastructures such as Software Defined Networks (SDN) provide substantial computational power, which allows significant processing to be added at the network nodes. In this paper, a multimedia traffic management system is presented. This system is based on estimation models of Quality of Experience (QoE) and on the classification of traffic patterns. To achieve this, a QoE estimation method has been modeled. This method allows the multimedia traffic to be classified from multimedia transmission patterns. To do this, the SDN controller gathers statistics from the network. The patterns used have been defined from a linear combination of objective QoE measurements. The model has been defined by Bayesian regularized neural networks (BRNN). From this model, the system is able to classify several kinds of traffic according to the quality perceived by the users. Then, a model has been developed to determine which video characteristics need to be changed to provide the user with the best possible quality in the critical moments of the transmission. The choice of these characteristics is based on quality of service (QoS) parameters, such as delay, jitter, loss rate and bandwidth. Moreover, it is also based on subpatterns defined by clusters from the dataset, which represent network and video characteristics. When a critical network situation arises, the model selects, using the network parameters as inputs, the subpattern with the most similar network condition; this selection is performed by computing the minimum Euclidean distance between these inputs and the network parameters of the subpatterns. Both models work together to build a reliable multimedia traffic management system, perfectly integrated into current network infrastructures, which is able to classify the traffic and resolve critical situations by changing the video characteristics using the SDN architecture.
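    The subpattern selection step at the end of the abstract is a nearest-centroid lookup in the QoS parameter space (delay, jitter, loss rate, bandwidth). A minimal sketch with hypothetical centroids follows; in practice the dimensions would be normalised before computing distances.

```python
import numpy as np

def select_subpattern(subpatterns, current_qos):
    """Pick the subpattern whose stored network condition (delay, jitter,
    loss rate, bandwidth) is closest in Euclidean distance to the current one."""
    names = list(subpatterns)
    centroids = np.array([subpatterns[n] for n in names])
    distances = np.linalg.norm(centroids - np.asarray(current_qos), axis=1)
    return names[int(np.argmin(distances))]

# Hypothetical subpattern centroids: (delay ms, jitter ms, loss %, bandwidth Mbps).
subpatterns = {
    "good":      [20,  2, 0.1, 50],
    "congested": [120, 25, 3.0, 5],
    "lossy":     [40,  5, 8.0, 30],
}
print(select_subpattern(subpatterns, [110, 20, 2.5, 6]))  # -> "congested"
```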
  • An energy- and performance-efficient resource consolidation scheme for heterogeneous cloud datacenters
    Abstract: Datacenters are the principal electricity consumers for cloud computing and provide an IT backbone for today's business and economy. Numerous studies suggest that most of the servers in US datacenters are idle or under-utilised, making it possible to save energy by using resource consolidation techniques. However, consolidation involves migrations of virtual machines, containers and/or applications, depending on the underlying virtualisation method, which can be expensive in terms of energy consumption and performance loss. In this paper, we: (a) propose a consolidation algorithm which favours the most effective migration among VMs, containers and applications; and (b) investigate how migration decisions should be made to save energy without any negative impact on the service performance. We demonstrate through a number of experiments, using real workload traces for 800 hosts, approximately 1,516 VMs, and more than a million containers, how different approaches to migration impact a datacenter's energy consumption and performance. We suggest, using reasonable assumptions for the datacenter set-up, that there is a trade-off involved between migrating containers and virtual machines. It is more performance efficient to migrate virtual machines; however, migrating containers could be more energy efficient than migrating virtual machines. Moreover, migrating containerised applications that run inside virtual machines could lead to an energy- and performance-efficient consolidation technique in large-scale datacenters. Our evaluation suggests that migrating applications could be approximately 5.5% more energy efficient and approximately 11.9% more performance efficient than VM migration. Further, energy- and performance-efficient consolidation is approximately 14.6% more energy efficient and approximately 7.9% more performance efficient than application migration. Finally, we generalise our results using several repeatable experiments over various workloads, resources and datacenter set-ups.
  • A versatile key management protocol for secure group and device-to-device communication in the Internet of Things
    Abstract: The Internet of Things (IoT) is a network made up of a large number of devices that collaborate to provide various services for the benefit of society. Two communication modes are required to enable smooth collaboration. A device can send the same message to several other devices participating in the same service, or it may address a specific device in a Peer-to-Peer manner. The first mode of communication is called Group Communication, while we refer to the second as Device-to-Device Communication. One of the main challenges facing the IoT is how to secure these two modes of communication. Among all the security issues, Key Management is one of the most challenging, mainly because most IoT devices have limited resources in terms of storage, calculation, communication and energy. Although different approaches have been proposed to deal with this problem, each of them presents its own limitations and weaknesses. Moreover, they usually consider either Group or Device-to-Device Communication. In this paper, we propose a novel versatile Key Management protocol for the Internet of Things. To the best of our knowledge, this is the first protocol that secures both modes of communication at the same time. We then analyze the security and performance of our solution and compare it to existing schemes. For Group Communication, we show that our solution ensures forward and backward secrecy and, unlike most existing Group Key Management protocols, guarantees the secure coexistence of several services in the network. With regard to Device-to-Device Communication, we prove that our solution is flexible and provides a good level of resilience and network connectivity compared to existing Peer-to-Peer Key Management schemes. We finally demonstrate that, by balancing the loads between the heterogeneous devices according to their capabilities, our solution is both efficient and scalable.
  • A reinforcement learning-based botnet detection approach
    Abstract: Bot malware and botnets are widely used as tools to facilitate other malicious cyber activities (e.g. distributed denial of service attacks, dissemination of malware and spam, and click fraud). However, detection of botnets, particularly peer-to-peer (P2P) botnets, is challenging. Hence, in this paper we propose a sophisticated traffic reduction mechanism, integrated with a reinforcement learning technique. We then evaluate the proposed approach using real-world network traffic, and achieve a detection rate of 98.3%. The approach also achieves a relatively low false positive rate (i.e. 0.012%).
  • Efficient round-trip link delay measurement in software-defined networks
    Abstract: Round-trip link delay is an important indicator for network performance optimization and troubleshooting. The Software-Defined Networking (SDN) paradigm, which provides flexible and centralized control capability, paves the way for efficient round-trip link delay measurement. In this paper, we study the round-trip link delay measurement problem in SDN networks. We propose an efficient measurement scheme, which infers round-trip link delays from the end-to-end delays of measurement paths implemented with a few flow rules in each SDN switch. Furthermore, to reduce measurement cost and meet measurement constraints, we address the Monitor Placement and Link Assignment (MPLA) problem involved in the measurement scheme. Specifically, we formulate the MPLA problem as a Mixed Integer Linear Programming (MILP) problem, prove that it is NP-hard, and propose an efficient algorithm called the MPLA Algorithm based on Bidding Strategy (MPLAA-BS) to solve it. Extensive simulation results on real network topologies reveal that the proposed scheme can efficiently and accurately measure round-trip link delays in SDN networks, and that MPLAA-BS can find feasible and resource-efficient solutions for the MPLA problem.
  • LAM-CIoT: a lightweight authentication mechanism for cloud-based IoT environments
    Abstract: The Internet of Things (IoT) represents a new era of the Internet, consisting of many connected physical smart objects (i.e., sensing devices) communicating through the Internet. IoT has different types of applications, such as smart homes, wearable devices, smart connected vehicles, industry, and smart cities. Therefore, IoT-based applications have become essential parts of our day-to-day life. In a cloud-based IoT environment, a cloud platform is used to store the data collected from the IoT sensors. Such an environment is highly scalable and supports real-time event processing, which is very important in several scenarios (e.g., IoT sensor-based surveillance and monitoring). Since some applications in cloud-based IoT are very critical, the information collected and sent by IoT sensors must not be leaked during communication. To address this, we design a new lightweight authentication mechanism for cloud-based IoT environments, called LAM-CIoT. By using LAM-CIoT, an authenticated user can access the data of IoT sensors remotely. LAM-CIoT applies efficient "one-way cryptographic hash functions" along with "bitwise XOR operations". In addition, a fuzzy extractor mechanism is employed at the user's end for local biometric verification. The security of LAM-CIoT is methodically analyzed through formal security analysis under the broadly-accepted "Real-Or-Random (ROR)" model, formal security verification using the widely-used "Automated Validation of Internet Security Protocols and Applications (AVISPA)" tool, as well as informal security analysis. The performance analysis shows that LAM-CIoT offers better security and lower communication and computation overheads compared to closely related authentication schemes. Finally, LAM-CIoT is evaluated using the NS2 network simulator to measure network performance parameters, which shows the impact of LAM-CIoT and other schemes on network performance.
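    LAM-CIoT builds on one-way hashes and bitwise XOR. The fragment below only illustrates the general style of such hash-and-XOR constructions (masking a secret with a hash of a shared key and a nonce); it is not the actual LAM-CIoT message exchange, and all identifiers and values are hypothetical.

```python
import hashlib
import os

def h(*parts: bytes) -> bytes:
    """One-way hash over the concatenation of the inputs (SHA-256 here)."""
    return hashlib.sha256(b"".join(parts)).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Hypothetical shared long-term key, a fresh nonce, and a session secret.
shared_key = os.urandom(32)
nonce = os.urandom(16)
session_secret = os.urandom(32)

# Sender masks the session secret; only a holder of shared_key can unmask it.
mask = h(shared_key, nonce)
ciphertext = xor(session_secret, mask)
tag = h(session_secret, nonce)          # lets the receiver verify the unmasked value

# Receiver side: unmask and verify.
recovered = xor(ciphertext, h(shared_key, nonce))
assert recovered == session_secret and h(recovered, nonce) == tag
```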
  • Channel-reserved medium access control for edge-computing-based IoT
    Abstract: Edge computing brings powerful computing services to the proximity of IoT nodes to support sophisticated applications in the future Internet of Things (IoT). Considering that the channel is generally shared or multiplexed by multiple nodes in wireless networks, short response packets in edge computing service processes may easily congest or conflict with other simultaneous transmissions. The service latency then increases and may exceed the latency constraint of computing tasks, which naturally decreases the service performance of applications. Therefore, for edge computing based IoT (EdgeIoT), scheduling the transmission of responses is highly necessary yet has not been studied. This paper studies the transmission of responses from the perspective of the MAC layer; a channel-reserved MAC (ChRMAC) protocol is proposed to reduce the collision and latency of responses in edge computing service procedures. A latency-constraint-aware scheme is devised in ChRMAC to improve the effectiveness of reservations. Besides, a backoff recovery mechanism is designed to avoid increasing the latency and collisions of computing tasks after reservations. Moreover, a cross-layer framework for the implementation of ChRMAC is proposed. Simulations are conducted in ns-3 to evaluate the proposed ChRMAC. The simulation results indicate that ChRMAC can reduce the average latency of responses and increase the service performance of EdgeIoT.
  • Optimal caching policy for wireless content delivery in D2D networks
    Abstract: Demand for multimedia services has grown exponentially in mobile networks and is expected to congest cellular traffic in the near future. Since network resources are limited, content caching may be considered a superior solution to offload data traffic during peak times. Content caching in mobile devices together with Device-to-Device (D2D) communications can improve the performance of cellular wireless networks. Predicting user demand and mobility patterns allows the network to perform proactive caching in order to relieve network congestion and hence decrease the network load as well as its service cost. Moreover, an optimal caching policy is one of the important issues for maximizing the offloading probability and, as a result, enhancing the overall network performance. In this paper, we introduce an incentive caching policy in which the network jointly considers user preference and group mobility for the caching problem. Firstly, the cost-optimal caching problem for the network is formulated. Then, the overall network cost is minimized under the effects of user demand and group mobility using the Frequency Searching Adaptive Bat Algorithm (FSABA) by optimizing the cached portions of requested files. System performance in terms of the overall network gain, average transmission delay and offloading probability is derived and evaluated according to the achieved optimal cached portions. Extensive simulations are carried out to validate the benefits of the presented optimal caching policy. Additionally, to verify the effectiveness of FSABA, the results are compared with those obtained using the Particle Swarm Optimization (PSO) algorithm. The results show that the proposed caching scheme outperforms both the baseline scenario and the random mobility-based schemes. It is worth mentioning that FSABA achieves superior convergence capability compared to PSO.
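    FSABA is an adaptive variant of the bat algorithm (Yang, 2010). For orientation only, here is a minimal sketch of the standard bat algorithm minimizing a placeholder caching-cost function over the cached portions of files; the cost function, bounds and parameters are hypothetical and do not reproduce the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def cache_cost(portions):
    """Placeholder network-cost function of the cached portion of each file (0..1)."""
    demand = np.array([0.5, 0.3, 0.2])           # hypothetical request probabilities
    backhaul = (1.0 - portions) * demand          # traffic not served locally
    storage = 0.1 * portions.sum()                # storage penalty
    return backhaul.sum() + storage

def bat_algorithm(cost, dim=3, n_bats=20, iters=200, fmin=0.0, fmax=2.0,
                  alpha=0.9, loudness=1.0, pulse_rate=0.5):
    x = rng.random((n_bats, dim))                 # positions: cached portions in [0, 1]
    v = np.zeros((n_bats, dim))
    fitness = np.array([cost(b) for b in x])
    best = x[fitness.argmin()].copy()
    for _ in range(iters):
        for i in range(n_bats):
            f = fmin + (fmax - fmin) * rng.random()       # frequency draw
            v[i] += (x[i] - best) * f
            candidate = np.clip(x[i] + v[i], 0.0, 1.0)
            if rng.random() > pulse_rate:                 # local random walk near best
                candidate = np.clip(best + 0.01 * rng.standard_normal(dim), 0.0, 1.0)
            c = cost(candidate)
            if c < fitness[i] and rng.random() < loudness:
                x[i], fitness[i] = candidate, c
            if c < cost(best):
                best = candidate.copy()
        loudness *= alpha                                 # loudness decays over time
    return best, cost(best)

portions, value = bat_algorithm(cache_cost)
print(portions.round(2), round(value, 3))
```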
  • Real-time throughput prediction for cognitive Wi-Fi networks
    Abstract: Wi-Fi as a wireless networking technology has become commonplace. Over time, the application landscape of Wi-Fi networks has grown tremendously. The proliferation of new services is driving the industry to adopt novel and agile approaches to ensure the quality of experience delivered to the end user. Transmission throughput is an important metric that has a strong impact on the end-user quality of experience. Accurate real-time prediction of throughput can open up several new possibilities to enhance user experience in future self-organizing cognitive networks. However, real-time prediction of transmission throughput is challenging due to its dependency on several parameters. Previous studies on throughput prediction have primarily focused on non-real-time prediction in less dynamic networks. These studies also do not provide the high accuracy required in cognitive networks for efficient decision making. The purpose of this study is to use data-driven machine learning (ML) techniques and evaluate their accuracy and efficiency for predicting transmission throughput in Wi-Fi networks. Four algorithms are used, namely multilayer perceptrons (MLP), support vector regressors (SVR), decision trees (DT) and random forests (RF). It is widely understood that the accuracy and efficiency of ML algorithms depend heavily on the datasets used to train the model. Hence, this study proposes two distinct data models for creating ML-ready datasets using feature engineering. The accuracy of each ML algorithm over these datasets is evaluated. The evaluation results show a maximum prediction accuracy of 96.2% using the MLP algorithm, followed by DT (94.5%), RF (93.3%) and SVR (91.0%). Furthermore, a complexity analysis is also presented to support the adoption of such schemes in real-time applications.
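    As a concrete illustration of the evaluation described above (not the paper's dataset or feature engineering), the four regressor families can be compared for throughput prediction with scikit-learn roughly as follows; the synthetic features stand in for the engineered Wi-Fi features.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

# Synthetic stand-in features: e.g., RSSI, channel utilisation, retry rate, client count.
rng = np.random.default_rng(42)
X = rng.normal(size=(2000, 4))
throughput = 40 + 8 * X[:, 0] - 6 * X[:, 1] - 3 * X[:, 2] + rng.normal(0, 2, 2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, throughput, test_size=0.25, random_state=0)

models = {
    "MLP": make_pipeline(StandardScaler(), MLPRegressor(hidden_layer_sizes=(32, 16),
                                                        max_iter=2000, random_state=0)),
    "SVR": make_pipeline(StandardScaler(), SVR(C=10.0)),
    "DT": DecisionTreeRegressor(max_depth=8, random_state=0),
    "RF": RandomForestRegressor(n_estimators=200, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: R^2 = {r2_score(y_te, model.predict(X_te)):.3f}")
```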
  • MuSC: a multi-stage service chain embedding approach
    • Authors:
    • Journal: Journal of network and computer applications
    • 2020, Jun. issue
    Abstract: Network function virtualization is an emerging concept that is attracting increasing attention in industry because it offers high levels of agility and flexibility to the network, allowing easier software updates, resource adjustment and scalable changes in the configuration of services. Network functions traditionally delivered on hardware and purpose-built platforms in legacy networks can now be provided through shared virtual resources called VNFs (Virtual Network Functions) hosted over shared physical network infrastructures. Using network functions as purely software components would certainly improve the deployment process and the management of the VNF life cycle. Since VNFs are hardware-independent, there is no need to check whether or not a network function is compatible with the rest of the network's physical parts; this also helps reduce maintenance costs, which can increase drastically in the case of major upgrades to the network infrastructure. Along with all these benefits, shifting to virtual-based networks comes with a significant number of challenges, especially in terms of placing such virtual components, ensuring their interoperability and maintaining the quality of service at a level that is at least as good as what is offered by hardware-based architectures. In this paper, we propose a cluster-based placement and chaining solution. The overall proposed approach consists of: 1) formulating an Integer Linear Programming (ILP) model aimed at finding an optimal tradeoff between multiple objective functions that might sometimes be conflicting (e.g., hardware resource and energy consumption minimization, transmission delays, bandwidth usage, etc.); 2) classifying the substrate network into a set of on-demand clusters that are efficient for a predefined set of metrics; and 3) using meta-heuristic-based algorithms to find near-optimal solutions for the formulated ILP.
  • Security in SDN: a comprehensive survey
    • Authors:
    • Journal: Journal of network and computer applications
    • 2020, Jun. issue
    Abstract: Software Defined Networking (SDN) is a revolutionary paradigm that is maturing along with other network technologies in the next-gen trend. The separation of control and data planes in SDN enables the emergence of novel network features like centralized flow management and network programmability that encourage the introduction of new and enhanced network functions in order to improve prominent network deployment aspects such as flexibility, scalability, network-wide visibility and cost-effectiveness. Although SDN exhibits a rapid evolution that is shaping this technology as a key enabler for future implementations in heterogeneous network scenarios (datacenters, ISPs, corporate, academic and home networks), the technology is far from being considered secure and dependable to this day, which inhibits its agile adoption. In recent years, the scientific community has been attracted to explore the field of SDN security to close the gap to SDN adoption. A twofold research context has been identified: on the one hand, leveraging SDN features to enhance security; on the other hand, the pursuit of a secure SDN system architecture. This article includes a description of security threats that menace SDN and a list of attacks that take advantage of vulnerabilities and misconfigurations in SDN constitutive elements. Accordingly, a discussion emphasizing the duality SDN-for-security and SDN-security is also presented. A comprehensive review of the state of the art is accompanied by a categorization of the current research literature into a taxonomy that highlights the main characteristics and contributions of each proposal. Finally, the identified urgent needs and less explored topics are used to outline the opportunities and future challenges in the field of SDN security.
  • On the classification of fog computing applications: a machine learning perspective
    • Authors:
    • Journal: Journal of network and computer applications
    • 2020, Jun. issue
    Abstract: Currently, Internet applications running on mobile devices generate a massive amount of data that can be transmitted to a Cloud for processing. However, one fundamental limitation of a Cloud is its connectivity with end devices. Fog computing overcomes this limitation and supports the requirements of time-sensitive applications by distributing computation, communication, and storage services along the Cloud to Things (C2T) continuum, empowering potential new applications, such as smart cities, augmented reality (AR), and virtual reality (VR). However, the adoption of Fog-based computational resources and their integration with the Cloud introduces new challenges in resource management, which requires the implementation of new strategies to guarantee compliance with the quality of service (QoS) requirements of applications. In this context, one major question is how to map the QoS requirements of applications onto Fog and Cloud resources. One possible approach is to discriminate the applications arriving at the Fog into Classes of Service (CoS). This paper thus introduces a set of CoS for Fog applications, including the QoS requirements that best characterize these Fog applications. Moreover, this paper proposes the implementation of a typical machine learning classification methodology to discriminate Fog computing applications as a function of their QoS requirements. Furthermore, the application of this methodology is illustrated by assessing classifiers in terms of efficiency, accuracy, and robustness to noise. The adoption of a methodology for machine learning-based classification constitutes a first step towards the definition of QoS provisioning mechanisms in Fog computing. Moreover, classifying Fog computing applications can facilitate the decision-making process for the Fog scheduler.
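    A minimal sketch of the kind of CoS classification methodology the abstract describes is given below; the classes, QoS-requirement features and data are hypothetical stand-ins, since the paper defines its own CoS set and feature space.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical QoS-requirement features per application:
# (max latency ms, min bandwidth Mbps, data volume MB, mobility flag).
rng = np.random.default_rng(1)

def make_apps(n, lat, bw, vol, mob, label):
    """Generate n synthetic applications around a class-typical QoS profile."""
    X = np.column_stack([rng.normal(lat, lat * 0.2, n),
                         rng.normal(bw, bw * 0.2, n),
                         rng.normal(vol, vol * 0.2, n),
                         rng.integers(0, 2, n) * mob])
    return X, np.full(n, label)

X_ar, y_ar = make_apps(300, lat=15, bw=50, vol=5, mob=1, label="AR/VR")
X_sc, y_sc = make_apps(300, lat=200, bw=2, vol=1, mob=0, label="smart-city sensing")
X_va, y_va = make_apps(300, lat=80, bw=10, vol=50, mob=0, label="video analytics")
X = np.vstack([X_ar, X_sc, X_va])
y = np.concatenate([y_ar, y_sc, y_va])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(3))
```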
  • Collaborative resource allocation for Internet of Things systems
    • Authors:
    • Journal: Journal of network and computer applications
    • 2020, Jun. issue
    Abstract: The conceptual approach known as Fog/Edge Computing has recently emerged, aiming to move part of the computing and storage resources from the cloud to the edge of the network. The combination of IoT devices, edge nodes, and the Cloud gives rise to a three-tier Cloud of Things (CoT) architecture. In complex and dynamic CoT ecosystems, a key issue is how to efficiently and effectively allocate resources to meet the demands of applications. As in traditional clouds, the goal of resource allocation in the CoT is to maximize the number of applications served by the infrastructure while ensuring a target operational cost. We propose a resource allocation algorithm for CoT systems that (i) supports heterogeneity of devices and applications, (ii) leverages the distributed nature of edge nodes to promote collaboration during the allocation process, and (iii) provides efficient usage of system resources while meeting latency requirements and considering the different priorities of IoT applications. Our algorithm follows a heuristic-based approach inspired by an economic model for solving the resource allocation problem in CoT. A set of simulations was performed, with promising results showing that our collaborative resource allocation algorithm is more scalable and reduces both the response time of applications and the energy consumption of end devices in comparison to a two-tier, Cloud-based approach. Moreover, the network traffic between edge nodes, and between the Edge and Cloud tiers, is considerably smaller when using our collaborative solution than with the other evaluated approaches.
  • An evaluation method for channel state information fingerprinting for single-gateway indoor localization
    • Authors:
    • Journal: Journal of network and computer applications
    • 2020, Jun. issue
    Abstract: The proliferation of location-based services highlights the need to develop an accurate indoor localization solution. The global navigation satellite system does not deliver good accuracy indoors because of weak signals. One solution is to piggyback on Wi-Fi technology, which is widespread in offices and domestic environments. This wireless communication has a promising future, with the possibility of estimating locations with a single gateway by combining channel state information with fingerprinting. However, existing solutions are often limited to a specific setup and are hard to replicate in other situations. Furthermore, channel state information consists of complex data, which hampers the learning phase of machine learning techniques. This paper assesses the performance of unsupervised data complexity reduction methods by considering different data collection scenarios with multiple antenna elements at the anchor gateway. The study puts forward an evaluation method based on five heuristic scores to guide the design of future fingerprinting solutions based on channel state information. This has been extended to several spatial distributions of training locations, and we show that kernel entropy component analysis is more suitable for implementation than principal component analysis, factor analysis, independent component analysis and kernel principal component analysis.
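    To make the dimensionality-reduction comparison concrete, the sketch below reduces hypothetical CSI amplitude features with PCA and kernel PCA (both available in scikit-learn) before nearest-neighbour fingerprint matching. Kernel entropy component analysis, the method the study favours, has no scikit-learn implementation and is therefore not shown.

```python
import numpy as np
from sklearn.decomposition import PCA, KernelPCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

# Hypothetical CSI amplitudes: 4 antennas x 30 subcarriers = 120 features per sample,
# collected at 5 reference locations (the fingerprint labels).
rng = np.random.default_rng(7)
n_locations, per_loc, n_features = 5, 100, 120
X = np.vstack([rng.normal(loc, 1.0, (per_loc, n_features)) for loc in range(n_locations)])
y = np.repeat(np.arange(n_locations), per_loc)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

for name, reducer in [("PCA", PCA(n_components=10)),
                      ("KernelPCA (RBF)", KernelPCA(n_components=10, kernel="rbf"))]:
    Z_tr = reducer.fit_transform(X_tr)     # learn the projection on training fingerprints
    Z_te = reducer.transform(X_te)         # project unseen measurements
    acc = KNeighborsClassifier(n_neighbors=5).fit(Z_tr, y_tr).score(Z_te, y_te)
    print(f"{name}: location accuracy = {acc:.3f}")
```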
  • JPAS: job-progress-aware flow scheduling for deep learning clusters
    • Authors:
    • Journal: Journal of network and computer applications
    • 2020, May issue
    Abstract: Deep learning (DL) is an increasingly important tool for large-scale data analytics, and DL workloads are common in today's production clusters due to the increasing number of deep-learning-driven services (e.g., online search and speech recognition). To handle ever-growing training datasets, it is common to conduct distributed DL (DDL) training to leverage multiple machines in parallel. Training DL models in parallel can incur significant bandwidth contention on shared clusters. As a result, the network is a well-known bottleneck for distributed training. Efficient network scheduling is essential for maximizing the performance of DL training. DL training is feedback-driven exploration (e.g., hyper-parameter tuning, model structure optimization), which requires multiple retrainings of deep learning models that differ in their configuration. Information from the early stage of each retraining can facilitate the direct search for high-quality models. Thus, reducing the early-stage time can accelerate the exploration of DL training. In this paper, we propose JPAS, a flow scheduling system for DDL training jobs that aims at reducing the early-stage time. JPAS uses a simple greedy mechanism to periodically order all DDL jobs. Each host machine sets priorities for its flows using the corresponding job order and offloads the flow scheduling and rate allocation to the underlying priority-enabled network. We evaluate JPAS over a real testbed composed of 13 servers and a commodity switch. The evaluation results demonstrate that JPAS can reduce the time to reach 90% or 95% of the converged accuracy by up to 38%. Hence, JPAS can remarkably reduce the early-stage time and thus accelerate the search for high-quality models.