In this setting, "code is data". Further, in a pure environment, functions can be seen as merely incomplete data. Set up correctly, every intermediate step, every function application, and every output can be moved around, cached, and shared. Why recompute a function that has already been run with the same inputs? Is there some novel function to run over a large data set? Send it to a compute cluster and fetch the result lazily. Multiple devices can share workloads.
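
To make the caching idea concrete, here is a minimal Haskell sketch (names like `cached` are illustrative, not from any particular library). Because the wrapped function is pure, a cached result is always safe to reuse; a real system would key the cache on a hash of the function's code as well as its inputs, so results could be stored on disk or shared across machines.

```haskell
import Data.IORef (IORef, newIORef, readIORef, modifyIORef')
import qualified Data.Map.Strict as Map

-- A cache from inputs to outputs. Purity is what makes a hit
-- always valid: same input, same output, forever.
type Cache k v = IORef (Map.Map k v)

-- Wrap a pure function so repeated calls with the same input
-- look the result up instead of recomputing it. (Illustrative
-- sketch: keys on the input only, not on the function's identity.)
cached :: Ord k => Cache k v -> (k -> v) -> k -> IO v
cached ref f x = do
  m <- readIORef ref
  case Map.lookup x m of
    Just v  -> pure v                    -- seen before: reuse it
    Nothing -> do
      let v = f x                        -- compute once...
      modifyIORef' ref (Map.insert x v)  -- ...and remember it
      pure v

main :: IO ()
main = do
  ref <- newIORef Map.empty
  a <- cached ref (\n -> sum [1 .. n :: Integer]) 1000000
  b <- cached ref (\n -> sum [1 .. n :: Integer]) 1000000  -- cache hit
  print (a, b)
```

The same shape generalizes: swap the in-memory `Map` for a content-addressed store reachable over the network, and "fetch the result lazily" becomes a cache lookup that another machine may already have filled in.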