This list is organized by research topic. A number of papers appear
under multiple topics, as appropriate. For a list without duplicates,
see the list of references at the bottom of this page.
Compiler optimization to control the layout of parallel-friendly data
structures (Bergstrom et al. 2013).
Engineering the SML/NJ compiler to handle advanced features of
foreign-function calls (Blume, Rainey, and Reppy 2008).
Our PLDI’21 paper extends prior work on Heartbeat Scheduling: it
addresses the challenge of delivering a suitable heartbeat mechanism via
hardware interrupts, and it removes barriers to sequential performance by
proposing an assembly language with support for task parallelism. Here,
we provide a long version of the paper, complete with an appendix and
supplemental materials, including the artifact evaluation and an
executable dynamic semantics in PLT Redex.
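To give a rough sense of the core idea, here is a minimal C++ sketch of
heartbeat-style task promotion. It is not the mechanism from the paper:
it polls a software counter instead of taking hardware interrupts, it
runs promoted tasks inline instead of handing them to a work-stealing
scheduler, and the names (spawn_latent, poll_heartbeat,
heartbeat_interval) and the promotion policy are illustrative only.

```cpp
#include <cstdint>
#include <deque>
#include <functional>

constexpr uint64_t heartbeat_interval = 1000; // work units between promotions (illustrative)

uint64_t work_since_last_beat = 0;
std::deque<std::function<void()>> latent_tasks; // parallelism kept latent (sequential) by default

// Record a parallel opportunity without paying the cost of creating a task.
void spawn_latent(std::function<void()> f) { latent_tasks.push_back(std::move(f)); }

// Stand-in for handing a task to a parallel scheduler; a real runtime
// would push it onto a work-stealing deque rather than run it inline.
void promote_to_parallel_task(const std::function<void()>& f) { f(); }

// Called at loop back edges and call sites. On each "heartbeat", the
// oldest latent opportunity is promoted to a real task, so the cost of
// task creation is amortized against the work done since the last beat.
void poll_heartbeat(uint64_t work_done) {
  work_since_last_beat += work_done;
  if (work_since_last_beat >= heartbeat_interval && !latent_tasks.empty()) {
    auto oldest = std::move(latent_tasks.front());
    latent_tasks.pop_front();
    promote_to_parallel_task(oldest);
    work_since_last_beat = 0;
  }
}
```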
Our PLDI’18 paper presents a proof and an experimental evaluation of our
Heartbeat Scheduling technique (Acar et al. 2018). This project page
hosts the Coq source code of our proofs, along with the C++ prototype and
benchmarking scripts we used for our experimental evaluation. This is the
place to go if you are interested in checking the proofs or repeating our
experiments.
This project features a C++ template library that implements ordered,
in-memory containers based on a new B-tree-like data structure.
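The sketch below illustrates the basic chunking idea behind such
containers: elements are grouped into fixed-capacity chunks, so pushes
and pops touch small arrays and bulk operations can move whole chunks at
a time. This toy version keeps its chunks in a std::deque rather than in
the B-tree-like structure the library actually uses, and the class name
and chunk capacity are made up for the example.

```cpp
#include <array>
#include <cstddef>
#include <deque>

template <typename T, std::size_t Capacity = 512>
class toy_chunked_sequence {
  struct chunk {
    std::array<T, Capacity> items;
    std::size_t size = 0;
  };
  std::deque<chunk> chunks; // a stand-in for the real B-tree-like backbone
  std::size_t total = 0;

public:
  void push_back(const T& x) {
    if (chunks.empty() || chunks.back().size == Capacity)
      chunks.emplace_back(); // start a fresh chunk once the last one fills up
    chunk& c = chunks.back();
    c.items[c.size++] = x;
    ++total;
  }

  std::size_t size() const { return total; }
};
```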
PASL is a C++ library that provides scheduling algorithms for mapping
computations generated by implicitly threaded programs onto multicore
machines.
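As a quick illustration of the implicit-threading style that such a
library supports (and not of PASL’s actual API), the sketch below
expresses parallelism with a hypothetical fork2 combinator; a real
scheduler would map the resulting tasks onto cores with work stealing,
whereas this version simply stands in plain C++ threads.

```cpp
#include <thread>

// Hypothetical fork-join combinator: run f and g as if in parallel.
// A real library would hand f to a work-stealing scheduler instead of
// spawning a fresh thread per call.
template <typename F, typename G>
void fork2(F f, G g) {
  std::thread t(std::move(f));
  g();
  t.join();
}

long fib(long n) {
  if (n < 20) // sequential cutoff, a simple form of granularity control
    return n < 2 ? n : fib(n - 1) + fib(n - 2);
  long a = 0, b = 0;
  fork2([&] { a = fib(n - 1); },
        [&] { b = fib(n - 2); });
  return a + b;
}
```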
Manticore is a parallel programming language aimed at general-purpose
applications that run on multicore processors.
I worked on the back end of the compiler. My main projects covered code
generation for the x86-64 architecture and support for foreign-function
calls.
Acar, Umut A., Vitaly Aksenov, Arthur Charguéraud, and Mike Rainey. 2019.
“Provably and Practically Efficient Granularity Control.” In Proceedings
of the 24th Symposium on Principles and Practice of Parallel Programming,
214–28. PPoPP ’19. New York, NY, USA: ACM.
http://mike-rainey.site/papers/oracle-ppop19-long.pdf.
Acar, Umut A., Naama Ben-David, and Mike Rainey. 2017. “Contention in
Structured Concurrency: Provably Efficient Dynamic NonZero Indicators for
Nested Parallel Computation.” In 22nd ACM SIGPLAN Symposium on Principles
and Practice of Parallel Programming. PPoPP ’17. ACM.
http://mike-rainey.site/papers/dynsnzi.pdf.
Acar, Umut A., Arthur Charguéraud, and Mike Rainey. 2017. Parallel Work
Inflation, Memory Effects, and Their Empirical Analysis. CoRR. Vol.
abs/1709.03767. http://arxiv.org/abs/1709.03767.
Acar, Umut A., Arthur Charguéraud, Adrien Guatto, Mike Rainey, and Filip
Sieczkowski. 2018. “Heartbeat Scheduling: Provable Efficiency for Nested
Parallelism.” In 39th ACM SIGPLAN Conference on Programming Language
Design and Implementation. PLDI ’18. ACM.
http://mike-rainey.site/papers/heartbeat.pdf.
Acar, Umut A., Arthur Charguéraud, and Mike Rainey. 2011. “Oracle
Scheduling: Controlling Granularity in Implicitly Parallel Languages.” In
Proceedings of the 2011 ACM International Conference on Object Oriented
Programming Systems Languages and Applications, 46:499–518. OOPSLA ’11 10.
ACM. http://mike-rainey.site/papers/oracle_scheduling.pdf.
———. 2012. “Efficient Primitives for Creating and Scheduling Parallel
Computations.” DAMP ’12.
http://mike-rainey.site/papers/damp2012_primitives.pdf.
———. 2013. “Scheduling Parallel Programs by Work Stealing with Private
Deques.” In 18th ACM SIGPLAN Symposium on Principles and Practice of
Parallel Programming, 48:219–28. PPoPP ’13 8. ACM.
http://mike-rainey.site/papers/full.pdf.
———. 2014. “Theory and Practice of Chunked Sequences.” In The 22nd Annual
European Symposium on Algorithms, 25–36. ESA ’14. Springer.
http://mike-rainey.site/papers/chunked_seq.pdf.
———. 2015. “A Work-Efficient Algorithm for Parallel Unordered Depth-First
Search.” In Proceedings of the International Conference for High
Performance Computing, Networking, Storage and Analysis, 67:1–12. SC ’15.
ACM. http://mike-rainey.site/papers/pdfs_sc15.pdf.
———. 2016. “Oracle-Guided Scheduling for Controlling Granularity in
Implicitly Parallel Languages.” In Journal of Functional Programming.
Cambridge University Press.
http://mike-rainey.site/papers/jfp-oracle-guided.pdf.
Acar, Umut A., Arthur Charguéraud, Mike Rainey, and Filip Sieczkowski.
2016. “Dag-Calculus: A Calculus for Parallel Computation.” In The 21st
ACM SIGPLAN International Conference on Functional Programming. ICFP ’16.
ACM. http://mike-rainey.site/papers/dag-calculus.pdf.
Bergstrom, Lars, Matthew Fluet, Mike Rainey, John Reppy, Stephen Rosen,
and Adam Shaw. 2013. “Data-Only Flattening for Nested Data Parallelism.”
In 18th ACM SIGPLAN Symposium on Principles and Practice of Parallel
Programming, 48:81–92. PPoPP ’13 8. ACM.
http://mike-rainey.site/papers/ppopp13-flat.pdf.
Bergstrom, Lars, Matthew Fluet, Mike Rainey, John Reppy, and Adam Shaw.
2012. “Lazy Tree Splitting.” In Journal of Functional Programming,
22:382–438. 4-5. Cambridge University Press.
http://mike-rainey.site/papers/jfp-lts-submitted.pdf.
Bergstrom, Lars, Mike Rainey, John Reppy, Adam Shaw, and Matthew Fluet.
2010. “Lazy Tree Splitting.” In The 15th ACM SIGPLAN International
Conference on Functional Programming, 45:93–104. ICFP ’10 9. ACM.
http://mike-rainey.site/papers/icfp10-lts.pdf.
Blume, Matthias, Michael Rainey, and John Reppy. 2008. “Calling Variadic
Functions from a Strongly-Typed Language.” ML ’08. ACM.
http://mike-rainey.site/papers/ml-varargs.pdf.
Charguéraud, Arthur, and Mike Rainey. 2017. “Efficient Representations
for Large Dynamic Sequences in ML.” ML Family Workshop.
https://hal.inria.fr/hal-01669407/file/chunkseq_ml.pdf.
Fluet, Matthew, Mike Rainey, and John Reppy. 2008. “A Scheduling
Framework for General-Purpose Parallel Languages.” In The 13th ACM
SIGPLAN International Conference on Functional Programming, 43:241–52.
ICFP ’08 9. ACM. http://mike-rainey.site/papers/icfp08-sched.pdf.
Fluet, Matthew, Mike Rainey, John Reppy, and Adam Shaw. 2008.
“Implicitly-Threaded Parallelism in Manticore.” In The 13th ACM SIGPLAN
International Conference on Functional Programming, 43:119–30. ICFP ’08
9. ACM. http://mike-rainey.site/papers/icfp08-implicit.pdf.
———. 2010. “Implicitly Threaded Parallelism in Manticore.” In Journal of
Functional Programming, 20:537–76. 5-6. Cambridge University Press.
Fluet, Matthew, Mike Rainey, John Reppy, Adam Shaw, and Yingqi Xiao.
2007. “Manticore: A Heterogeneous Parallel Language.” DAMP ’07. ACM.
Koparkar, Chaitanya, Mike Rainey, Michael Vollmer, Milind Kulkarni, and
Ryan R. Newton. 2021. “Efficient Tree-Traversals: Reconciling Parallelism
and Dense Data Representations.” In Proc. ACM Program. Lang. Vol. 5.
ICFP. New York, NY, USA: Association for Computing Machinery.
doi:10.1145/3473596.
Koparkar, Chaitanya, Vidush Singhal, Aditya Gupta, Mike Rainey, Michael
Vollmer, Artem Pelenitsyn, Sam Tobin-Hochstadt, Milind Kulkarni, and Ryan
R. Newton. 2024. “Garbage Collection for Mostly Serialized Heaps.” In
International Symposium on Memory Management. ISMM. New York, NY, USA:
Association for Computing Machinery.
https://a.pelenitsyn.top/Papers/2024-gibbon-gc.pdf.
Rainey, Mike. 2010. “Effective Scheduling Techniques for High-Level
Parallel Programming Languages.” PhD thesis, University of Chicago.
http://mike-rainey.site/papers/rainey-phd.pdf.
———. 2023. The Best Multicore Refactoring You’ve Never Heard of.
http://mike-rainey.site/papers/pardefunc.pdf.
Rainey, Mike, Kyle Hale, Ryan R. Newton, Nikos Hardavellas, Simone
Campanoni, Peter Dinda, and Umut A. Acar. 2021. “Task Parallel Assembly
Language for Uncompromising Parallelism.” In Proceedings of the 42nd ACM
SIGPLAN Conference on Programming Language Design and Implementation.
PLDI ’21. New York, NY, USA: ACM.
http://mike-rainey.site/papers/tpal-long.pdf.
Singhal, Vidush, Chaitanya Koparkar, Joseph Zullo, Artem Pelenitsyn,
Michael Vollmer, Mike Rainey, Milind Kulkarni, and Ryan R. Newton. 2024.
“Optimizing Layout of Recursive Datatypes with Marmoset.” In European
Conference on Object-Oriented Programming. ECOOP. New York, NY, USA:
Association for Computing Machinery. https://arxiv.org/pdf/2405.17590.
Su, Yian, Mike Rainey, Nick Wanninger, Nadharm Dhiantravan, Jasper Liang,
Umut Acar, Peter Dinda, and Simone Campanoni. 2024. “Compiling Loop-Based
Nested Parallelism for Irregular Workloads.” In The 29th ACM
International Conference on Architectural Support for Programming
Languages and Operating Systems. ASPLOS ’24. New York, NY, USA: ACM.
http://mike-rainey.site/papers/HELIX_ASPLOS_2024.pdf.
Vollmer, Michael, Chaitanya Koparkar, Mike Rainey, Laith Sakka, Milind
Kulkarni, and Ryan R. Newton. 2019. “LoCal: A Language for Programs
Operating on Serialized Data.” In 40th ACM SIGPLAN Conference on
Programming Language Design and Implementation. PLDI ’19. ACM.
http://mike-rainey.site/papers/lo-cal19.pdf.
Westrick, Sam, Matthew Fluet, Mike Rainey, and Umut A. Acar. 2024.
“Automatic Parallelism Management.” In Proc. ACM Program. Lang. Vol. 8.
POPL. New York, NY, USA: Association for Computing Machinery.
doi:10.1145/3632880.
Westrick, Sam, Mike Rainey, Daniel Anderson, and Guy E. Blelloch. 2022.
“Parallel Block-Delayed Sequences.” In Proceedings of the 27th ACM
SIGPLAN Symposium on Principles and Practice of Parallel Programming,
61–75. PPoPP ’22. New York, NY, USA: Association for Computing Machinery.
doi:10.1145/3503221.3508434.
Wise, David S., Craig Citro, Joshua Hursey, Fang Liu, and Michael Rainey.
2005. “A Paradigm for Parallel Matrix Algorithms: Scalable Cholesky.” In
Euro-Par 2005 – Parallel Processing.
http://dx.doi.org/10.1007/11549468_76.