Blog

POPL 2015 Artifact Evaluation Revisited

Just over a decade ago, POPL ran its first artifact evaluation and I was on it! I'm listed in the proceedings as "Emma F. Tosch" as an inside joke with Arjun, who ran the committee. I have what only I think is a Very Interesting Tale of reviewing, but that's not what this post is about. Instead, this post is about tracking down those old artifacts with just a pinch of oral history.

What even is a "parameter"?

"Parameter" is one of those commonly used words in mathematics and computing and in my experience is rarely explicitly defined. While most uses have similar meanings, there can be small differences in their interpretation. Parameters and other statistical entities are important to the semantics and correctness of the Helical system, so it's worth considering what we mean by these terms.

DSL Usability Research

In my previous post, I asserted:

...learning a new formal language can itself contribute to the difficulty of encoding an experiment.

This statement was based on assumptions, intuitions, and folk wisdom. I started digging into the DSL usability research to see if I could find explicit support for this statement. This blog post is about what I found.

Jupyter DSLs

One of the broader goals of the Helical project is to make writing, maintaining, and debugging experiments easier and safer for the end-user through a novel domain-specific language. However, learning a new formal language can itself contribute to the difficulty of encoding an experiment. Therefore, we are interested in mitigating the effects of language learning/novelty. To this end, last year a Northeastern co-op student (Kevin G. Yang) investigated the suitability of using Jupyter notebooks as an execution environment for experiments.