Diagnosing Bad Hypothesis Tests
Last fall I saw a flyer for a class on computational ethics. Upon inspection, it appeared the class was poorly named: unlike computational geometry, computational finance, computational social science, or almost any other occurrence of "computational $subject," this class isn't about using computational tools to study ethics; instead, it's about the ethical issues surrounding computation: privacy, censorship, cybercrime, AI, and the rest. A better name would be "ethics for programmers," although that sounds rather tedious. Perhaps "ethics of Facebook" would be enroll-baity enough? While complaining about a misleading course name wouldn't necessarily be out of character for me, that's not my point here. Instead, I want to lament a missed opportunity: what if computational ethics actually were about computation?
One of the more fun features of R is that you can redefine pretty much anything. While this isn't particularly useful in itself, it can give you plenty of opportunities to mess with your friends' R sessions. Well, at least they were your friends before this post.
Apparently I've gotten better
I was organizing my hard drive and came across one of the first Julia programs I ever wrote. It turns out it really was quite a while ago: the timestamp is July 5, 2013. Rereading the code and then rewriting it with the benefit of three more years of experience was an enlightening exercise.
The Fastest Poisson(1) in the West
I was optimizing some Poisson bootstrap code this week. Unsurprisingly, it turns out that most of the time was spent drawing Poisson random variables. When you've reached the point that your code is spending most of its time in the RNG, you can generally call it quits. But what if you didn't? What if you could do better?
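For context (this is a sketch, not the post's actual code, and the function names are my own): in a Poisson bootstrap each observation gets an independent Poisson(1) weight, and the textbook baseline for drawing those weights is Knuth's multiplicative method, which multiplies uniforms until the running product drops below e⁻¹.

```python
import math
import random

def poisson1(rng=random):
    """Draw a Poisson(1) variate via Knuth's multiplicative method.

    Multiply uniform(0,1) draws together until the product falls below
    e^{-1}; the number of extra multiplications needed is Poisson(1).
    """
    threshold = math.exp(-1.0)  # e^{-lambda} with lambda = 1
    k = 0
    p = rng.random()
    while p >= threshold:
        p *= rng.random()
        k += 1
    return k

def poisson_bootstrap_mean(xs, rng=random):
    """One Poisson-bootstrap replicate of the mean of xs."""
    weights = [poisson1(rng) for _ in xs]
    total = sum(w * x for w, x in zip(weights, xs))
    n = sum(weights)
    return total / n if n else float("nan")
```

Knuth's method costs on average λ + 1 uniform draws per variate, which is exactly the kind of RNG-bound inner loop the post is talking about.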
subscribe via RSS