Rohan McLure, Josh Milthorpe
JuliaCon 2020
Publication year: 2020

Performance outcomes for numerical codes involving large data manipulation depend on efficient memory access. We introduce the ArrayChannels.jl library for manipulating distributed array data with consideration for cache utilisation patterns. In contrast to the communication constructs provided by Julia's remotecall, communication in the library occurs entirely in-place, improving temporal locality. We evaluate the performance of ArrayChannels.jl constructs relative to comparable MPI and Distributed.jl implementations of the Intel Parallel Research Kernels (PRK), yielding improvements of up to 150%.
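To make the contrast concrete, the following minimal sketch shows the standard Distributed.jl pattern that the abstract compares against: passing an array through a RemoteChannel serialises it and allocates a fresh buffer on the receiving side for every message. This example uses only documented Distributed.jl calls and is not ArrayChannels.jl's own API; the channel name, sizes, and helper function are illustrative assumptions.

    # Baseline Distributed.jl pattern: each message allocates a new array
    # on the receiver, rather than reusing a preallocated buffer in-place.
    using Distributed
    addprocs(1)                     # spawn one worker process (pid 2)

    @everywhere function fill_and_send(ch, n)
        put!(ch, rand(n))           # every put! ships a freshly allocated array
    end

    const N = 1_000_000
    ch = RemoteChannel(() -> Channel{Vector{Float64}}(1))
    remotecall_fetch(fill_and_send, 2, ch, N)
    a = take!(ch)                   # receiver obtains a new copy of the data

ArrayChannels.jl, as described in the abstract, instead performs the transfer in-place into existing array storage, so repeated communication reuses the same buffer and preserves temporal locality.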