Multi-Node Acceleration for Large-Scale GCNs

Limited by memory capacity and compute power, single-node graph convolutional neural network (GCN) accelerators cannot complete the execution of GCNs on today's explosively large graphs within a reasonable amount of time. Large-scale GCNs therefore call for a multi-node acceleration system (MultiAccSys), analogous to a TPU-Pod for large-scale neural networks. In this work, we aim to scale up single-node GCN accelerators to accelerate GCNs on large-scale graphs. We first identify the communication patterns and challenges of multi-node acceleration for GCNs on large-scale graphs. We observe that (1) coarse-grained communication patterns exist in the execution of GCNs in a MultiAccSys, which introduces a massive amount of redundant network transmissions and off-chip memory accesses; and (2) overall, the acceleration of GCNs in a MultiAccSys is bandwidth-bound and latency-tolerant. Guided by these two observations, we then propose MultiGCN, the first MultiAccSys for large-scale GCNs, which trades network latency for network bandwidth. Specifically, by leveraging network latency tolerance, we first propose a topology-aware multicast mechanism with a one-put-per-multicast message-passing model to reduce transmissions and alleviate network bandwidth requirements. Second, we introduce a scatter-based round execution mechanism that cooperates with the multicast mechanism and reduces redundant off-chip memory accesses. Compared with the baseline MultiAccSys, MultiGCN achieves a $4\sim12\times$ speedup using only 28%$\sim$68% of the energy, while reducing network transmissions by 32% and off-chip memory accesses by 73% on average. It not only achieves a $2.5\sim8\times$ speedup over the state-of-the-art multi-GPU solution, but also scales to large-scale graphs, unlike single-node GCN accelerators.
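
To make the multicast opportunity concrete, the toy sketch below (not MultiGCN's implementation; the graph, partition, and function names are illustrative) counts the feature transmissions of one aggregation step under per-edge unicast versus a one-put-per-remote-node multicast, which is the kind of redundancy the topology-aware multicast mechanism targets.

```python
# Minimal sketch (not the paper's implementation): count feature transmissions during
# one GCN aggregation step under (a) per-edge unicast and (b) a node-level multicast
# where each vertex feature is put at most once on every remote accelerator node.
from collections import defaultdict

def count_transmissions(edges, partition):
    """edges: list of (src, dst); partition: dict vertex -> accelerator node id."""
    unicast = 0
    multicast_targets = defaultdict(set)   # src vertex -> remote nodes that need it
    for src, dst in edges:
        if partition[src] != partition[dst]:
            unicast += 1                                # one message per remote edge
            multicast_targets[src].add(partition[dst])  # one put per remote node
    multicast = sum(len(nodes) for nodes in multicast_targets.values())
    return unicast, multicast

# Toy graph split across two accelerator nodes
edges = [(0, 1), (0, 2), (0, 3), (1, 3), (2, 3)]
partition = {0: 0, 1: 1, 2: 1, 3: 1}
print(count_transmissions(edges, partition))   # (3, 1): vertex 0's feature is sent once
```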

Future Scaling of Memory Hierarchy for Tensor Cores and Eliminating Redundant Shared Memory Traffic Using Inter-Warp Multicasting

The CUDA core of NVIDIA GPUs has long been one of the most efficient computation units for parallel computing. However, the recent rapid development of deep neural networks demands an even higher level of computational performance, and to meet this requirement NVIDIA has introduced the Tensor core in recent GPU generations. The Tensor core's impressive gain in computational performance, in turn, puts new pressure on the memory hierarchy. In this paper, we first identify the memory bandwidth required throughout the memory hierarchy as computational performance increases on actual GPU hardware. Through a comparison of the CUDA core and the Tensor core in the V100, we find that the tremendous performance increase of the Tensor core requires much higher memory bandwidth than the CUDA core does. Moreover, we thoroughly investigate memory bandwidth requirements across the Tensor core generations of the V100, RTX TITAN, and A100. Lastly, through GPU simulation we analyze a hypothetical next-generation Tensor core introduced by NVIDIA, and we propose an inter-warp multicasting microarchitecture that reduces redundant shared memory (SMEM) traffic during GEMM execution. Our evaluation shows that inter-warp multicasting reduces SMEM bandwidth pressure by 33% and improves performance by 19% on average across all layers of ResNet-152 and BERT-Large.
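
As a rough illustration of the redundancy that inter-warp multicasting removes, the sketch below (an assumed warp-level tiling with made-up tile counts, not NVIDIA's actual microarchitecture) counts shared-memory tile fetches in a tiled GEMM when every warp loads its own copy of an operand tile versus when one fetch is served to all warps that share it.

```python
# Minimal sketch (assumed warp-level tiling, not NVIDIA's microarchitecture): count
# shared-memory tile fetches in a tiled GEMM when every warp fetches its own copy of an
# operand tile versus when one fetch is multicast to all warps that need the same tile.
def smem_tile_fetches(k_tiles, warps_m, warps_n):
    per_warp, multicast = 0, 0
    for _ in range(k_tiles):               # main-loop iterations over the K dimension
        # A tiles are shared by all warps in the same row of the warp grid,
        # B tiles by all warps in the same column.
        per_warp += 2 * warps_m * warps_n  # every warp fetches its own A and B tiles
        multicast += warps_m + warps_n     # one A fetch per row, one B fetch per column
    return per_warp, multicast

print(smem_tile_fetches(k_tiles=8, warps_m=2, warps_n=4))   # (128, 48)
# Warps in the same row/column re-read identical tiles; this redundant SMEM traffic is
# what inter-warp multicasting eliminates.
```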

Algorithm and Hardware Co-Design of Energy-Efficient LSTM Networks for Video Recognition With Hierarchical Tucker Tensor Decomposition

Long short-term memory (LSTM) is a type of powerful deep neural network that has been widely used in many sequence analysis and modeling applications. However, the large model size of LSTM networks makes their practical deployment very challenging, especially for video recognition tasks that require high-dimensional input data. To overcome this limitation and fully unlock the potential of LSTM models, in this paper we propose algorithm and hardware co-design towards high-performance, energy-efficient LSTM networks. At the algorithm level, we develop a fully decomposed hierarchical Tucker (FDHT) structure-based LSTM, namely FDHT-LSTM, which enjoys ultra-low model complexity while still achieving high accuracy. To fully reap this attractive algorithmic benefit, we further develop a customized hardware architecture to support efficient execution of the proposed FDHT-LSTM model. With a carefully designed memory access scheme, the complicated matrix transformations are supported efficiently on the fly by the underlying hardware without any access conflicts. Our evaluation results show that both the proposed ultra-compact FDHT-LSTM models and the corresponding hardware accelerator achieve very high performance. Compared with state-of-the-art compressed LSTM models, FDHT-LSTM enjoys both orders-of-magnitude reduction in model size (more than $1000\times$) and significant accuracy improvement (0.6% to 12.7%) across different video recognition datasets. Meanwhile, compared with TIE, the state-of-the-art hardware for tensor-decomposed models, our proposed FDHT-LSTM architecture achieves $2.5\times$, $1.46\times$, and $2.41\times$ increases in throughput, area efficiency, and energy efficiency, respectively, on the LSTM-Youtube workload. On the LSTM-UCF workload, our design also outperforms TIE with $1.9\times$ higher throughput, $1.83\times$ higher energy efficiency, and comparable area efficiency.
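
For intuition on the order-of-magnitude compression claim, the sketch below compares the parameter count of a dense LSTM weight block with a generic hierarchical-Tucker-style factorization; the mode sizes, rank, and balanced-binary-tree storage formula are illustrative assumptions, not the paper's exact FDHT construction.

```python
# Illustrative only (not the exact FDHT-LSTM formulation): compare a dense LSTM weight
# block against a hierarchical-Tucker-style factorization with a balanced binary
# dimension tree and a uniform rank; all sizes below are assumptions.
def dense_params(in_dim, out_dim):
    return in_dim * out_dim

def ht_params(mode_sizes, rank):
    d = len(mode_sizes)
    leaves = sum(n * rank for n in mode_sizes)     # one factor matrix per mode
    internal = (d - 2) * rank**3 + rank**2         # transfer tensors plus the root node
    return leaves + internal

# Fold a 57600 -> 256 mapping (e.g. a flattened video frame into the hidden state)
# into one 6-mode tensor: 16*16*16*16*15*15 == 57600 * 256.
modes, rank = [16, 16, 16, 16, 15, 15], 4
print(dense_params(57600, 256))    # 14,745,600 weights
print(ht_params(modes, rank))      # 648 parameters, i.e. >20,000x smaller in this toy case
```

The exact savings depend on the chosen dimension tree and ranks; the point is only that factorized storage grows roughly linearly in the mode sizes rather than with their product.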

ReAAP: A Reconfigurable and Algorithm-Oriented Array Processor With Compiler-Architecture Co-Design

Parallelism and data reuse are the most critical issues in the design of hardware acceleration for a deep learning processor. In addition, abundant on-chip memory and precise data management are intrinsic design requirements, because most deep learning algorithms are data-driven and memory-bound. In this paper, we propose a compiler-architecture co-design scheme targeting a reconfigurable and algorithm-oriented array processor, named ReAAP. Given a specific deep neural network, the proposed co-design scheme performs parallelism and data-reuse optimization on compute-intensive layers to guide reconfigurable computing in hardware. In particular, this systematic optimization is performed in our proposed domain-specific compiler to resolve the intrinsic tension between parallelism and data locality, so that diverse layer-level workloads are automatically mapped onto our proposed reconfigurable array architecture. In this architecture, the abundant on-chip memories are software-controlled, and their massive data accesses are precisely handled by compiler-generated instructions. In our experiments, ReAAP is implemented on an embedded FPGA platform. Experimental results demonstrate that the proposed co-design scheme effectively integrates software flexibility with hardware parallelism to accelerate diverse deep learning workloads. As a whole system, ReAAP achieves consistently high utilization of hardware resources when accelerating all the diverse compute-intensive layers in ResNet, MobileNet, and BERT.
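
The sketch below gives a flavor of the parallelism-versus-locality trade-off such a compiler navigates; it is a toy matmul cost model with made-up sizes, not ReAAP's actual mapping algorithm.

```python
# Toy cost model (illustrative, not ReAAP's compiler): larger output tiles reuse each
# fetched operand across more MACs (less off-chip traffic) but demand more PEs and
# more software-controlled on-chip buffer -- the tension resolved per layer.
def traffic_and_parallelism(M, N, K, tile_m, tile_n):
    tiles = (M // tile_m) * (N // tile_n)
    # Each output tile streams a (tile_m x K) slice of A and a (K x tile_n) slice of B.
    offchip_words = tiles * (tile_m * K + K * tile_n)
    parallel_macs = tile_m * tile_n            # PEs kept busy per tile
    return offchip_words, parallel_macs

for tm, tn in [(4, 4), (16, 16), (64, 64)]:
    print((tm, tn), traffic_and_parallelism(256, 256, 256, tm, tn))
# (4, 4)  -> ~8.4M words fetched,  16 parallel MACs
# (64, 64)-> ~0.5M words fetched, 4096 parallel MACs
```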

Extending On-Chain Trust to Off-Chain – Trustworthy Blockchain Data Collection Using Trusted Execution Environment (TEE)

Blockchain creates a secure environment on top of strict cryptographic assumptions and rigorous security proofs. It permits on-chain interactions to achieve trustworthy properties such as traceability, transparency, and accountability. However, current blockchain trustworthiness is confined to on-chain activity, creating a “trust gap” with the physical, off-chain environment. This is due to the lack of a scheme that can truthfully reflect the physical world in a real-time and consistent manner. This absence hinders further blockchain applications in the physical world, especially security-sensitive ones. In this paper, we propose a framework to extend blockchain trust from on-chain to off-chain, and take trustworthy vaccine tracing as an example scheme. Our scheme consists of (1) a Trusted Execution Environment (TEE)-enabled trusted environment monitoring system, built with the Arm Cortex-M33 microcontroller, that continuously senses the inside of a vaccine box through trusted sensors and generates anti-forgery data; and (2) a consistency protocol that uploads the environment status data from the TEE system to the blockchain in a truthful, real-time consistent, continuous, and fault-tolerant fashion. Our security analysis indicates that no adversary can tamper with the vaccine in any way without being detected. We carry out an experiment recording the internal status of a vaccine shipping box during transportation, and the results indicate that the proposed system incurs an average latency of 84 ms for local sensing and processing, followed by an average latency of 130 ms for the sensed data to be transmitted to and become available on the blockchain.
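
A minimal sketch of the off-chain side follows, assuming a hypothetical record format: the key handling, field names, and HMAC-based signing are illustrative stand-ins, not the paper's protocol. Each sensed reading is authenticated inside the TEE and chained to the previous record's digest, so a dropped or altered reading is detectable once the chain is checked against what reaches the blockchain.

```python
# Minimal sketch (hypothetical record format, not the paper's protocol): a TEE-side
# monitor authenticates each sensed reading and chains it to the previous record's hash
# so that dropped or altered readings break the chain.
import hashlib, hmac, json, time

TEE_KEY = b"device-provisioned-secret"    # stands in for a TEE-protected signing key

def make_record(reading, prev_digest):
    body = json.dumps({"t": time.time(), "reading": reading, "prev": prev_digest}).encode()
    tag = hmac.new(TEE_KEY, body, hashlib.sha256).hexdigest()   # anti-forgery tag
    return {"body": body.decode(), "tag": tag, "digest": hashlib.sha256(body).hexdigest()}

prev, chain = "0" * 64, []
for temp_c in (4.1, 4.3, 4.2):            # e.g. vaccine-box temperature samples
    rec = make_record(temp_c, prev)
    chain.append(rec)
    prev = rec["digest"]
print(len(chain), chain[-1]["tag"][:16])
```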

LNS-Madam: Low-Precision Training in Logarithmic Number System Using Multiplicative Weight Update

Representing deep neural networks (DNNs) in low precision is a promising approach to enabling efficient acceleration and memory reduction. Previous methods that train DNNs in low precision typically keep a high-precision copy of the weights during weight updates, because directly training with low-precision weights leads to accuracy degradation due to complex interactions between the low-precision number system and the learning algorithm. To address this issue, we develop a co-designed low-precision training framework, termed LNS-Madam, in which we jointly design a logarithmic number system (LNS) and a multiplicative weight update algorithm (Madam). We prove that LNS-Madam results in low quantization error during weight updates, leading to stable performance even when precision is limited. We further propose a hardware design for LNS-Madam that resolves practical challenges in implementing an efficient datapath for LNS computations. Our implementation effectively reduces the energy overhead incurred by LNS-to-integer conversion and partial-sum accumulation. Experimental results show that LNS-Madam achieves accuracy comparable to its full-precision counterparts with only 8 bits on popular computer vision and natural language tasks. Compared with FP32 and FP8, LNS-Madam reduces energy consumption by over 90% and 55%, respectively.
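
To illustrate why a logarithmic number system pairs naturally with multiplicative updates, here is a simplified sketch; it is not the exact Madam rule or the LNS-Madam datapath, and FRAC_BITS and the update-direction logic are assumptions. A weight stored as a sign and a fixed-point log2 magnitude is updated multiplicatively by adding or subtracting an integer step on its stored exponent.

```python
# Simplified sketch (not the exact Madam rule or LNS-Madam datapath): keep a weight as
# (sign, quantized log2 magnitude) so a multiplicative update w <- w * 2^(+-lr) reduces
# to an integer add/subtract on the stored exponent -- no high-precision copy needed.
import math

FRAC_BITS = 3                                 # assumed log-domain fractional precision

def to_lns(w):
    return (1 if w >= 0 else -1, round(math.log2(abs(w)) * 2**FRAC_BITS))

def from_lns(sign, qlog):
    return sign * 2 ** (qlog / 2**FRAC_BITS)

def multiplicative_step(sign, qlog, grad, lr_qlog):
    # Shrink |w| when the gradient pushes against the weight's sign, grow it otherwise.
    direction = -1 if (grad >= 0) == (sign >= 0) else 1
    return sign, qlog + direction * lr_qlog

s, q = to_lns(0.75)
s, q = multiplicative_step(s, q, grad=+0.2, lr_qlog=1)   # step of one LSB in log domain
print(from_lns(s, q))                                     # magnitude scaled by 2^(-1/8)
```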

A Systematic View of Model Leakage Risks in Deep Neural Network Systems

As deep neural networks (DNNs) continue to find applications in ever more domains, the exact nature of the neural network architecture becomes an increasingly sensitive subject, due to intellectual property protection and the risk of adversarial attacks. While prior work has explored aspects of the risk associated with model leakage, exactly which parts of the model are most sensitive, and how one infers the full architecture of a DNN when nothing is known about its structure a priori, are problems that have been left unexplored. In this paper we address this gap, first by presenting a schema for reasoning about model leakage holistically, and then by proposing and quantitatively evaluating DeepSniffer, a novel learning-based model extraction framework that requires no prior knowledge of the victim model. DeepSniffer is robust to architectural and system noise introduced by the complex memory hierarchy and diverse run-time system optimizations. Taking GPU platforms as a showcase, DeepSniffer performs model extraction by learning both the architecture-level execution features of kernels and the inter-layer temporal association information introduced by common DNN design practice. We demonstrate experimentally that DeepSniffer works on an off-the-shelf NVIDIA GPU platform running a variety of DNN models, and that the extracted models significantly improve attempts at crafting adversarial inputs. The DeepSniffer project has been released at https://github.com/xinghu7788/DeepSniffer.

MemUnison: A Racetrack-ReRAM-Combined Pipeline Architecture for Energy-Efficient in-Memory CNNs

Although ReRAM has been highly successful in reducing the energy consumption of various neural networks, its writes remain energy-intensive, which prevents ReRAM from providing efficient storage for the ubiquitous streaming data in CNNs, such as feature maps. Racetrack memory, an emerging magnetic memory technology, is a suitable candidate for holding streaming data, since it offers fast sequential access with ultra-low read and write energy. In this work, we propose a hybrid processing-in-memory architecture, called MemUnison, that coordinates ReRAM and racetrack memory to overcome the costly storage of streaming data in ReRAM. By placing feature maps in racetrack memory and leaving weights in ReRAM, a datapath is constructed between the two sides to form a fetch-process-writeback pipeline. Because the invalid shifts of racetrack memory incur a large number of pipeline bubbles, we propose a row-based access scheme that can read and write a feature map without any invalid shifts. For this row-based operation, a cohesive control method is proposed to coordinate the racetrack memory and ReRAM. At runtime, convolution kernels are scheduled across ReRAM banks for cross-channel calculations of one row, reducing the computing complexity of a convolutional layer by four orders of magnitude, exceeding the two orders of magnitude achieved by traditional ReRAM.

End-to-End Synthesis of Dynamically Controlled Machine Learning Accelerators

Edge systems are required to autonomously make real-time decisions based on large quantities of input data under strict power, performance, area, and other constraints. Meeting these constraints is only possible by specializing systems through hardware accelerators purposefully built for machine learning and data analysis algorithms. However, data science evolves at a quick pace, and the manual design of custom accelerators incurs high non-recurring engineering costs: general solutions are needed to automatically and rapidly transition from the formulation of a new algorithm to the deployment of a dedicated hardware implementation. Our solution is the SOftware Defined Architectures (SODA) Synthesizer, an end-to-end, multi-level, modular, extensible compiler toolchain providing a direct path from machine learning tools to hardware. The SODA Synthesizer frontend is based on the multi-level intermediate representation (MLIR) framework; it ingests pre-trained machine learning models, identifies kernels suited for acceleration, performs high-level optimizations, and prepares them for hardware synthesis. In the backend, SODA leverages state-of-the-art high-level synthesis techniques to generate highly efficient accelerators, targeting both field-programmable gate arrays (FPGAs) and application-specific integrated circuits (ASICs). In this paper, we describe how the SODA Synthesizer can also assemble the generated accelerators (based on the finite-state-machine-with-datapath model) into a custom system driven by a distributed controller, building a coarse-grained dataflow architecture that does not require a host processor to orchestrate the parallel execution of multiple accelerators. We show the effectiveness of our approach by automatically generating ASIC accelerators for layers of popular deep neural networks (DNNs). Our high-level optimizations result in up to 74x speedup on isolated accelerators for individual DNN layers, and our dynamically scheduled architecture yields an additional 3x performance improvement when combining accelerators to handle streaming inputs.

CAMDNN: Content-Aware Mapping of a Network of Deep Neural Networks on Edge MPSoCs

Machine learning (ML) workloads are increasingly deployed at the edge. Enabling efficient inference execution while accounting for model and system heterogeneity remains challenging, especially for ML tasks built from a network of deep neural networks (DNNs). The challenge is to simultaneously maximize the utilization of all available resources on the multiprocessor system-on-chip (MPSoC). This becomes even more complicated because the optimal mapping for the network of DNNs can vary with input batch size and scene complexity. In this paper, a holistic hierarchical scheduling framework is presented that optimizes the execution time of a network of DNN models on an edge MPSoC at runtime, taking varying input characteristics into account. The framework consists of a local and a global scheduler. The local scheduler maps individual DNNs in the inference pipeline to the best-performing hardware unit, while the global scheduler customizes an Integer Linear Programming (ILP) solution to instantiate DNN remapping. To minimize scheduler runtime overhead, an imitation learning (IL) based scheduler is used to approximate the ILP solutions. The proposed scheduling framework (CAMDNN) was implemented on a Qualcomm Robotics RB5 platform. CAMDNN reduces execution time by up to 32% relative to heterogeneous earliest finish time, and by factors of 6.67X, 5.6X, and 2.17X relative to the CPU-only, GPU-only, and Central Queue schedulers, respectively.
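
As a toy illustration of the mapping decision the global scheduler makes, the sketch below exhaustively searches a made-up three-stage pipeline with assumed per-unit latencies; it is not CAMDNN's ILP or its imitation-learned approximation, only the flavor of the objective.

```python
# Toy illustration (not CAMDNN's ILP): exhaustively map a 3-stage DNN pipeline onto
# heterogeneous units to minimize the bottleneck unit load, the kind of quantity the
# global scheduler optimizes when it remaps DNNs at runtime.
from itertools import product

# Assumed per-stage latency (ms) on each unit; real numbers would come from profiling.
latency = {
    "detector":   {"cpu": 42.0, "gpu": 9.0, "dsp": 14.0},
    "tracker":    {"cpu": 11.0, "gpu": 6.0, "dsp": 8.0},
    "classifier": {"cpu": 25.0, "gpu": 7.0, "dsp": 10.0},
}

def best_mapping(latency):
    stages, best = list(latency), None
    for assignment in product(["cpu", "gpu", "dsp"], repeat=len(stages)):
        load = {}                              # pipeline is limited by the busiest unit
        for stage, unit in zip(stages, assignment):
            load[unit] = load.get(unit, 0.0) + latency[stage][unit]
        bottleneck = max(load.values())
        if best is None or bottleneck < best[0]:
            best = (bottleneck, dict(zip(stages, assignment)))
    return best

print(best_mapping(latency))
# Spreading stages across units gives an 11 ms bottleneck here, versus 22 ms GPU-only.
```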