19 Comments

Not quite the same, but at one of the most valuable companies in the world: https://money.com/amazon-meetings-no-powerpoint/

author

Interesting.


Interesting concept; I know I'd like it better, and I especially like that the meeting begins with everyone reading the memo, to themselves. It forces the reading to be fresh, not something that was read two days ago and forgotten; it eliminates the excuse of not having had time to read it because of some emergency or crisis; and it keeps the memo as fresh as possible, so the author doesn't have to explain last-minute revisions during the meeting.


They also sometimes add questions as they read (in something like Google Docs), and that's what the writer engages with. The article suggests that the team goes over the document after reading it, but that's to answer those questions in page order. There's no presentation. The only thing that I find off here is the length of the document. Six pages?


This reminds me of the sort of Bible study popular in the Navigators, a Christian group for college students (and also the military and some other places). In most other Bible studies I've been in, we read the passage together for the first time at the study. But in Navigators, we made a point of having each one of us read and analyze the passage individually before coming to the study.

I think it helped us get into a lot more detail, and if I remember correctly, several of my friends agreed. It's probably true across a lot of other fields of study too.


IIRC, the Institute for Advanced Study at Princeton had a weekly physics seminar which you were only allowed to attend if you were prepared to give the talk. At the start of the meeting, a name would be drawn from a hat to give the talk (on a pre-announced topic). I believe that Feynman said this in one of his books.


I attended many two-hour seminars in the philosophy and government departments for some years at LSE. This invariably involved the speaker reading aloud for at least an hour (zzzzzzz). Only during the questions did it usually become useful and interesting. I would much prefer a 15-minute summary with the option to read the paper in advance.


I think code reviews and this kind of seminar may have similar incentive problems, leading to a proliferation of implementations that are less effective for their stated purpose.

Code reviews are supposed to be about improving the code, on the principle that many pairs of eyes are better than one. They are particularly supposed to be about finding defects, rather than rewriting the code to the reviewer's taste or getting into arguments about style.

Unfortunately, the participants are human. They get embarrassed when people point out their mistakes. Sometimes they get angry and defensive. Sometimes they pull rank, consciously or unconsciously - a junior can't possibly have found a meaningful bug in a senior's code.

Then there are the managerial incentives. Maybe developers are judged on following the process (their code must be reviewed), but otherwise only on whether they get their own features in on time. Maybe the schedule has been drawn up with insufficient time for the reviewers to do a decent job. Maybe the least experienced developers predictably turn out to be the only ones with time available for reviewing.

Then there are the career incentives. Maybe everyone knows that the code will be "thrown over the wall" to a maintenance team, bugs and all, with no negative consequences to the developers for its bugginess. Maybe getting a reputation for being "difficult" has a really bad impact in this organization, and it requires immense amounts of tact to draw attention to even an extremely blatant bug without getting this reputation. Maybe there are other inter-personal land mines. (The boss' pet will get you if you point out their bugs.)

Implementing code reviews usefully requires that the developers trust that all their peers are committed to improving the code. It requires a culture where the respected response to being shown a bug in your code is "thank you" or "nice catch". This culture needs to be modeled from the top of the technical hierarchy down. Implementing good code reviews also requires that management get out of the way, including simply being absent from review meetings. And it requires that developers have enough relevant skill and experience to find bugs in the particular code being reviewed.

Many organizations choose, consciously or otherwise, to merely check the "code reviews were done" box, thus demonstrating management's commitment to (performative) quality, without making them useful.

I recall my "first code review" experience in two different organizations.

In the first one, new hires were being asked to do reviews of already submitted code, as an "extra quality measure". I found a latent bug, which would only become an actual bug if the compiler changed. The author said "the compiler won't ever change". End of discussion. (Later, the organization switched compiler technology.)

In the second one, the team lead asked me to review some of his code. I found a problem, and reported it. The team lead's response was "the process worked!". He fixed his bug, and we eventually wound up as friends as well as colleagues.

I suspect there are similar things going on with paper presentations in academic organizations. Not the same things. But confused incentives, and perhaps selection of methodologies to avoid interpersonal drama rather than maximize either quality or learning.

author

I agree that some of the incentive problems are similar.

As I wrote, one of my correspondents said that “The authors themselves can have a candid workshop among themselves while feeling that they are better controlling their reputations.” The implication of that, as I read it, is that authors don't want the fact that the early version of their piece had mistakes to be known, even though keeping it from being known makes it less likely that mistakes will be found and corrected.


It is done at the economics department of the University of Zurich.


A draft of my dissertation was OK for the four full professors on my committee. But the young assistant prof (put on the committee when another professor went on leave) went through the math and found an error that rendered my estimation process invalid. So the four others had just skimmed the document. (Fortunately, I had a back-up estimation methodology, less elegant but functional when the math error was fixed.)

Aug 29, 2023·edited Aug 29, 2023

I have at least three comments to make here. They aren't especially closely related. Given how Substack works, we might be better served if I make four comments: this one and one for each theme.

1) To this retired software engineer, your Chicago style seminar sounds a lot like the better version of the early code review, in particular likely to have succumbed to some of the same problems.

2) When economics fails to check whether its predictions are satisfied in the real world, it might as well be philosophy or theology. I don't know how non-trivial predictions can be checked without statistics.

3) To this non-economist, more than half of the math I encounter in the writing of economists looks like part of some kind of strange ritual. It isn't real economics unless it has math; relevance and usefulness are optional. I say this as someone originally reasonably talented at some kinds of math (800 on the quantitative section of the GRE) who discovered an intense allergy to theoretical math while still an undergraduate, and so never went beyond standard STEM math and standard social science statistics.

I may elaborate on some of these in additional comments, particularly the bit about code review, where I may have some relevant expertise.

author
Aug 29, 2023·edited Aug 29, 2023

2. I'm not arguing that statistics isn't useful — my first econ journal article was rejected when I initially submitted it on the grounds that for the JPE to publish an article it had to contain some real world test of its implications. Satisfying that requirement and getting it accepted required statistics.

But a paper where the only interesting thing about it is the statistics is less interesting to an economist than one where the central point is economics.

3. My friend and colleague Gordon Tullock liked to refer to "ornamental mathematics." It's possible to do important economics with nothing beyond arithmetic, as Ricardo demonstrated more than two hundred years ago, but calculus makes it easier. The problem comes when the only excuse for publishing is the novel mathematics applied to a problem that doesn't need it.

Aug 29, 2023·edited Aug 29, 2023

I love that term "ornamental mathematics". I'll have to use it ;-)

author
Aug 31, 2023·edited Aug 31, 2023

You would probably have liked Gordon Tullock. As I read it, the reason he didn't share in Jim Buchanan's Nobel prize (for inventing public choice theory) was that he had offended too many people, largely by arguing that they were wrong.


The Chicago Style Seminar is alive and thriving at the University of Chicago Law School. (So, for that matter, is the study of economics!)

author
Aug 29, 2023·edited Aug 29, 2023

Good. That may be where I first encountered it. I should have written some of my ex-colleagues from there, probably will.


Indeed, my understanding is that the law school first had a law and economics workshop, done in the Chicago Style Seminar, and then it was such a success that maybe thirty years ago the law school instituted a school-wide faculty workshop that proceeds along the same principles. Both are still going strong, and other law schools (such as NYU, I'm told) have since copied the style.

author

I remember the Law and Econ workshop, am not sure I ever attended the other law school one. My last year there was 1995, which might have been just before they started it.

I gather the UC law school is still an intellectually lively place. I remember at some point some law students somewhere calculated and published the ratio of published articles to faculty members for different law schools. Chicago was first by a sizable margin.
