An interesting post over at The Economist, noting that published results can be wrong -- and that the more selective a venue is, and the "hotter" the area, the more likely it is that something is amiss. This makes a lot of sense; in complex research, things go wrong, and there's no way we can eliminate all errors. If the errors skew things in a "positive" direction, reviewers are impressed, and off it goes.
To my knowledge, I've only had bad numbers in one paper. We were measuring the balance of horizontal and vertical wiring produced by different placement methods, and while the general trend was right, the relative percentages were off (there are lots of details on how this went wrong; we caught it after the conference camera-ready went in, so we announced the error at the actual talk and included a note about it in the journal version). It's safe to assume I've probably screwed up elsewhere too.
Fortunately, I got tenure a few years ago, and can now take my sweet time with papers. I've got a few in the pipe, and I'm going to do my best to get everything right.