Using this simple statistics library [https://chrisbissell.wordpress.com/2011/05/23/a-simple-but-very-flexible-statistics-library-in-scala/], a timing function [https://stackoverflow.com/questions/9160001/how-to-profile-methods-in-scala#9160068], and Hazelcast, we can track the variability of code performance in Scala.
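The linked statistics library supplies the mean and stddev used later. As a rough stand-in (simplified, hypothetical signatures — the real library is more general), they behave like:

```scala
object Stats {
  // Arithmetic mean of a sample.
  def mean(xs: Seq[Double]): Double = xs.sum / xs.size

  // Sample (Bessel-corrected) standard deviation.
  def stddev(xs: Seq[Double]): Double = {
    val m = mean(xs)
    math.sqrt(xs.map(x => (x - m) * (x - m)).sum / (xs.size - 1))
  }

  def main(args: Array[String]): Unit = {
    val sample = Seq(2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0)
    println(mean(sample))                      // 5.0
    println(2 * stddev(sample))                // the "± " half-width used below
  }
}
```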
Hazelcast setup (imports shown for the Hazelcast 3.x package layout):

import com.hazelcast.config.Config
import com.hazelcast.core.{Hazelcast, ICollection}

val cfg = new Config("concepts")
val hazelcastInstance = Hazelcast.newHazelcastInstance(cfg)
val timings = hazelcastInstance.getList("timings").asInstanceOf[ICollection[Map[String, Double]]]
Then, for timing functions:
var timer = 1
val runtime = Runtime.getRuntime  // needed for the memory readings below

def time[R](name: String, block: => R): R = {
  val maxMemory1 = runtime.maxMemory()
  val allocatedMemory1 = runtime.totalMemory()
  val freeMemory1 = runtime.freeMemory()
  val totalFree1 = freeMemory1 + (maxMemory1 - allocatedMemory1)
  val t0 = System.nanoTime()
  val result = block
  val t1 = System.nanoTime()
  val maxMemory2 = runtime.maxMemory()
  val allocatedMemory2 = runtime.totalMemory()
  val freeMemory2 = runtime.freeMemory()
  val totalFree2 = freeMemory2 + (maxMemory2 - allocatedMemory2)
  timings.add(
    Map[String, Double](
      timer + ".1 " + name + ".time" -> ((t1 - t0) / 1000000000.0),
      timer + ".2 " + name + ".totalFree" -> ((totalFree1 - totalFree2) / 1024.0 / 1024),
      timer + ".3 " + name + ".allocated" -> ((allocatedMemory2 - allocatedMemory1) / 1024.0 / 1024)
    )
  )
  timer = timer + 1
  result
}

Then, right before you finish, compute the metrics, print them, and shut down Hazelcast (mean and stddev come from the statistics library linked above; format is any java.text.DecimalFormat of your choosing):

import scala.collection.JavaConverters._

val format = new java.text.DecimalFormat("0.000")
println(
  "*********\n" +
  "Averages:\n" +
  timings.asScala.flatMap(
    (map) => map.toList
  ).groupBy(
    _._1
  ).map(
    (kv) => (kv._1, kv._2.map(_._2))
  ).map(
    (kv) => (
      kv._1,
      mean(kv._2),
      2 * stddev(kv._2)
    )
  ).map(
    (kv) => kv._1 + ": " + format.format(kv._2) + " ± " + format.format(kv._3)
  ).toList.sorted
    .mkString("\n") +
  "\n********\n"
)
hazelcastInstance.shutdown()

This is an easy way to track how much your code's performance varies from run to run.
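Hazelcast aside, the whole pattern — wrap a block, record a sample, report mean ± 2·stddev per label — can be sketched self-contained with a local buffer. Everything below is an illustrative stand-in (local ListBuffer instead of the distributed list, inlined stats helpers), not the distributed version above:

```scala
import scala.collection.mutable.ListBuffer

object TimingSketch {
  // Local stand-in for the shared Hazelcast list of samples.
  val timings = ListBuffer.empty[Map[String, Double]]

  def mean(xs: Seq[Double]): Double = xs.sum / xs.size
  def stddev(xs: Seq[Double]): Double = {
    val m = mean(xs)
    math.sqrt(xs.map(x => (x - m) * (x - m)).sum / (xs.size - 1))
  }

  // Run the block, record its wall time in seconds, return its result.
  def time[R](name: String)(block: => R): R = {
    val t0 = System.nanoTime()
    val result = block
    val t1 = System.nanoTime()
    timings += Map(name + ".time" -> ((t1 - t0) / 1e9))
    result
  }

  // Group samples by label and summarize each as "label: mean ± 2*stddev".
  def report(): String =
    timings.flatMap(_.toList)
      .groupBy(_._1)
      .map { case (label, samples) =>
        val values = samples.map(_._2).toSeq
        label + ": " + mean(values) + " ± " + (2 * stddev(values))
      }
      .toList.sorted.mkString("\n")

  def main(args: Array[String]): Unit = {
    for (_ <- 1 to 5) time("sort") {
      scala.util.Random.shuffle((1 to 10000).toList).sorted
    }
    println(report())
  }
}
```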