“it may be in your interest to become a person who does not always act in your own interest.”
It is hard to say what is and isn’t in my interest, as this shows. The immediate consequence of an act may be something I would prefer, while the long-term effect on my subsequent opportunities makes it less appealing. This argues against thinking of my interest as a unified, consistent thing rather than as a bag of competing interests. But the fact that I must decide how to act narrows the scope: I can’t both do it and not do it. This doesn’t quite erase the distinction between altruism and selfishness, but it definitely complicates it.
The distinction isn't between long-term and short-term, although it is easy to misinterpret it that way. It's between the consequences of being a person who will act in a particular way and the consequences of acting in that way.
Suppose you have a chance to steal with no possibility of being caught. Doing it is in your interest, long-term and short-term. Being someone who will do it is not. The reason is that if you are someone who would do it, other people may detect that about you, not from catching you stealing but from earlier observations that signal things about your utility function.
Newcomb's paradox does a good job of making the point. When you are offered the choice you are better off taking both boxes, whether or not one of them is empty. But it is better to be someone who won't take both boxes, because the alien can tell if you are someone like that and, if you are, will put a million dollars in the box.
I hope that is clear. It's a somewhat subtle point.
Since acting that way requires me to be the sort of person who acts that way, how can the consequences be different? Or can I somehow do it without being the sort of person who does it? Perhaps it is a mistake, a miscalculation, rather than an accurate calculation using premises that disregard others?
I suppose I could be that sort of person without ever actually acting like it… but does that make sense? Being a person who experiences but resists temptation is not the same as being a person who gives in. But that is a different distinction.
“Suppose you have a chance to steal with no possibility of being caught.”
This thought experiment would have relevance in a universe where that was possible. In ours it is not. Even in such a universe, perhaps one where the ring of Gyges exists, it is not clear to me that living like a psychopath would be in a person's interest, even if they were justly confident they would never be caught. But perhaps I am arguing for your point that it isn’t a question of time?
In my view, your explanation of why virtue pays is the best available (though I am familiar with it from David Gauthier's /Morals by Agreement/, ch VI). To rearrange the quote, if "you are engaged in voluntary interactions with people who correctly perceive what you will or will not do" then, yes, "it is in [your] selfish interest to be thought to be honest; the easiest way of achieving that result is to be honest". Sometimes, the interest-maximizing behavioral disposition would have one do things that are not interest-maximizing. But it is pretty far from what people want a justification of virtue to look like...
For why engage in voluntary interactions at all? After all, a disposition to reckless aggression is likely to be very beneficial, especially if one is stronger than others, and even if it would sometimes lead one into pointless fights (though, if it really is effective, that won't happen all that often). And this sort of thing is no fiction, and in some people it is quite natural.
Further, even if we just stick with voluntary interactions, it does not follow that "it is in your self-interest to be committed to act in ways that maximize the summed benefit to the group of people with whom you are interacting". Two problems here.
First, a more plausible alternative, it seems to me, is that one's interactions with others should be calibrated to the sort of person one is dealing with, viz a disposition to exploit those weaker than oneself (to develop the first point, where "exploit" is defined in terms of the objective interests of each party), a disposition to deal "fairly" with equals in power (this is the case you seem to have most in mind), and a disposition to acquiesce to those who are stronger (the flip-side of the first point, for, as Hobbes would put it, why kick against the pricks?).
And second, even if we are dealing with equals, I am not sure why a utilitarian outcome ("the summed benefit to the group of people") should be expected. A more plausible alternative, in my view, is for the "fair" outcome in this case to be determined by a would-be /bargain/ between the relevant parties, and this is unlikely to match the utilitarian outcome. It is also more likely to result in "rights" being respected, since there is no tyranny of the majority with a bargain, if any party can walk away (again, assuming voluntary interactions).
So we get something like negative rights amongst people of roughly equal power and nous, but otherwise the strong and cunning exploiting the weak and naive. Better than nothing, I guess.
You may want to read the chapter I am quoting — I linked to it. It is about the economics of virtue and vice, with vice being a commitment strategy of beating up people who don't do what you want. It concludes that there is an equilibrium distribution of the two strategies, analogous to a hawk-dove equilibrium.
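For readers who want the flavor of that result, here is a minimal sketch of the textbook hawk-dove equilibrium calculation; the payoff numbers V and C are illustrative assumptions of mine, not values from the chapter:

```python
# Textbook hawk-dove game (illustrative values only, not from the chapter).
V = 4.0   # value of the contested resource
C = 10.0  # cost of losing an escalated fight (assume C > V)

# With hawks at frequency p, expected payoffs are:
#   hawk: p * (V - C) / 2 + (1 - p) * V
#   dove: (1 - p) * V / 2
# Setting them equal gives the equilibrium hawk frequency p* = V / C.
p_star = V / C

def payoff_hawk(p):
    return p * (V - C) / 2 + (1 - p) * V

def payoff_dove(p):
    return (1 - p) * V / 2

print(f"equilibrium share of hawks: {p_star:.2f}")
print(f"hawk payoff at p*: {payoff_hawk(p_star):.2f}")
print(f"dove payoff at p*: {payoff_dove(p_star):.2f}")  # equal at equilibrium
```

At the equilibrium share p* = V/C, hawks and doves earn the same expected payoff, so neither strategy can spread at the other's expense; that is the analogue of an equilibrium mix of the virtue and vice strategies.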
I said "summed benefit" not "utilitarian maximum." More precisely I am referring to economic efficiency, which is what maximum utility would be if we assumed that everyone had the same marginal utility of income. I am assuming a competitive market, where the fact that A is worth $x more to an employer than B means that he will get paid $x more. Generalize that and I believe you get my result. Obviously it doesn't apply to bilateral monopoly bargaining except in a context of zero transaction costs.
The passage is a quote from an existing book and I thought using it as is was better than rewriting it for this post.
One point I went on to make in the chapter is that the logic of the equilibrium implies that a society where most interactions are voluntary will have nicer people in it than one where most interactions are not.
Thanks. I've had a look at the chapter you quoted, and realize that, strictly speaking, my concerns relate to the chapter before, on bargaining into anarchic order, which is the more fundamental case. If I understand you correctly, Schelling points influence one's predictions of others, viz that it is reasonable to believe that another will carry out a threat in defense of a salient status quo (eg, Bill remains on his side of the stream), and not to believe that they will do so to promote some non-salient outcome (eg, trusting someone who has already broken a promise).
Not sure. First, mere salience might not be enough. If salience can be jointly created (eg, on the stream between Arnold and Bill), then it can be unilaterally rescinded, if it becomes in the interest of one party to do so (eg, the balance of power shifts). Second, the size of the threat is also relevant. It may be (expected-utility) maximizing to make a sincere threat, but, by definition, it is also a commitment to perform a certain non-maximizing action should the other person "misbehave". But the degree of non-maximization involved in carrying it out may exceed the degree of non-maximization of not making it in the first place; nuclear deterrence is the prime example. If so, it may not be reasonable to believe any such threat, even if it is focused on a Schelling point. This will limit, for example, what costs you can accept in order to deter your neighbor (if he will be deterred only by threats costly to you), and give him some scope to counter-threaten you (if you will be deterred from your initial policy by threats not-so-costly to him), all presumably depending on the power balance between the two of you.
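To illustrate the credibility problem, a minimal sketch with numbers of my own (none of these figures come from the thread or the chapter) of when committing to a threat beats tolerating the behavior, given that the threat must be carried out if defied:

```python
# Illustrative, assumed values: a large execution cost models the nuclear case.
benefit_if_deterred = 10.0    # value of the neighbor backing down
cost_of_carrying_out = 200.0  # loss from actually executing the threat
cost_of_no_threat = 5.0       # loss from simply tolerating the behavior

def net_gain_from_threat(p_deter):
    """Expected value of committing to the threat, relative to not making it."""
    ev_threat = p_deter * benefit_if_deterred - (1 - p_deter) * cost_of_carrying_out
    ev_no_threat = -cost_of_no_threat
    return ev_threat - ev_no_threat

for p in (0.5, 0.9, 0.99):
    print(f"P(deterred)={p:.2f}: net gain from threatening = {net_gain_from_threat(p):+.2f}")
```

When the cost of execution is large relative to the stakes, the threat pays only if deterrence is near-certain, which is exactly why its credibility is in doubt.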
“disposition to reckless aggression is likely to be very beneficial, especially if one is stronger than others, and even if it would sometimes lead one into pointless fights (though, if it really is effective, that won't happen all that often). And this sort of thing is no fiction, and in some people it is quite natural.”
This might be true of politicians, but their aggression is not really reckless.
If we are talking about muggers, it seems to illustrate DF's point, since it seems like a risky strategy in the long run, however effective it might be in the short run. It must be done in secret, or else people who know the muggers' reputation will avoid them. I suppose muggers who retire after a long career without ever getting caught might exist, and we would not know about them. But I suspect they don’t exist to any appreciable extent. I’m not saying there are no muggers, or none that escape detection, but that something about this activity discourages them from embracing it fully.
TBH, "reckless" aggression is not quite right. Though less catchy, I should have said something like "discriminating rights-violating" aggression, such as being prepared to make and act upon threats against property and person.
Not really talking about politicians, since any political system properly-so-called presumes widespread respect for rights. Absent such respect, we do not have politics but rather war. So I was assuming a pre-political context, the so-called state-of-nature.
Interesting point about muggers, though I actually had war-lords in mind. We should distinguish between individual and collective forms of discriminating aggression. You may well be right about the (individual) mugger, that such a strategy is unlikely to be successful in the state-of-nature, but that does not exclude aggressive coalitions of individuals (led by our war-lord). There may be scope now for arguing that to be effective there will have to be an internal "morality" in these coalitions, but I think that this is unlikely to respect libertarian rights. My guess (no more) is that it is likely to be hierarchical, if not—given men's greater strength, and the historical record—patriarchal. Again, not a very saintly morality at all.
So… who are you talking about, these people who reap such gains from reckless aggression or "discriminating rights-violating" aggression? Maybe Putin? No, he's a politician, however corrupt. Gangsters?
By asking “why engage in voluntary interactions at all?”, you seem to imply that everyone should at least be tempted by the strategy to “make and act upon threats against property and person.” Doesn’t that overstate it a bit? Maybe I misinterpreted you.
>you seem to imply that everyone should at least be tempted by the strategy to “make and act upon threats against property and person.”
DF quoted himself: "There are people, probably many people, who will not steal even if they are certain nobody is watching. Why?" This seems to imply that it is an open question why we should not want to steal. This is a crime against property, and I was just generalizing the question to apply to crimes against the person as well. I am not saying we /should/ be tempted to steal, or assault others, but am seeking some justification for not wanting to do such things. (Mind you, if no plausible justification is to be found, then we might conclude that we have been duped into not wanting such things, and, at that point, we might become tempted by them. Not, of course, that we would tell anyone.)
I am thoroughly confused now.
“disposition to reckless aggression is likely to be very beneficial,”
Is this meant to describe everyone, most people, muggers, warlords, or who (but not politicians)? This seems wrong even without the “reckless” qualification, and wronger with it.
My bad. I did not sufficiently qualify my original comment. So, hopefully more clearly, my view is that (i) everyone can /ask/ what justifies their not wanting to steal and assault others (this is my most recent comment), that (ii) one way of justifying the sort of person one is comes from asking whether it is in one's own interest to be that sort of person, and that (iii) a disposition to discriminating rights-violation is likely to be in the self-interest of /some/ people (viz, the strong, eg some warlords), but, I like to think, less likely to be in the interest of /average/ people like you and me (this significantly qualifies my original comment).
Now I /agree/ that violating people's rights is wrong, but I took it that the point of DF's post was to show how people could come to respect people's rights, even when doing so is against their own interest, without presuming that people already /have/ moral commitments. That's why DF's explanation (and my preferred justification) is in other terms, namely, self-interest. I hope this makes things clearer.
I never understood why Newcomb's Paradox is supposed to be a difficult decision. Taking both boxes when the alien predicts you will take one gains you a 0.1% increase in your return, effectively zero compared to the error bars on whether the $1M will be there. Taking one when the alien predicted two has an opportunity cost of $1k, but gains valuable information about the fallibility of the alien.
It's assumed to be a one-time game, so information about the fallibility of the alien is worthless. At the point when you choose, assuming that transmitting information backwards in time is impossible and that you believe the alien can't somehow change the content of the box after you choose, taking two boxes is certain to get you a thousand dollars more than taking one, whether that is a thousand instead of zero or $1,001,000 instead of a million.
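For concreteness, a minimal sketch of the two expected-value calculations; the predictor accuracy p is an assumed parameter, and the dollar amounts are the ones standardly used in statements of the paradox:

```python
# Expected payoffs in Newcomb's problem as a function of predictor accuracy p.
# This is the standard evidential-style calculation; the causal argument in the
# comment above instead holds the box contents fixed at the moment of choice.
def one_box(p):
    # Predictor correct with probability p: opaque box holds $1,000,000.
    return p * 1_000_000 + (1 - p) * 0

def two_box(p):
    # Predictor correct with probability p: opaque box is empty, keep $1,000.
    return p * 1_000 + (1 - p) * 1_001_000

for p in (0.5, 0.9, 0.999):
    print(f"p={p}: one-box EV = ${one_box(p):,.0f}, two-box EV = ${two_box(p):,.0f}")
# One-boxing has the higher expected value once p exceeds about 0.5005, even
# though, holding the contents fixed, two-boxing is always exactly $1,000 better.
```

The disagreement in the two comments above is over which of these calculations is the right one to make at the moment of choice.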
Seems to me the con man and the honest man share the same point of view, in that neither is concerned with whether his self-directed action turns out to be maximally beneficial to him; i.e., both are willing to take risks. Virtue as a commitment strategy, on the other hand, is other-directed and seeks to minimize risk.
It doesn't seek to minimize risk. It seeks to maximize expected return. It isn't fundamentally directed to the welfare of others, it is choosing to make yourself act for the welfare of some others, those with whom you are in a voluntary relation whose terms will depend on how they expect you to act, because being someone who acts that way is in your interest even though it involves failing to take some actions that are in your interest.
That elaborates the ways these behavioral choices work out -- but so does what I was seeking to convey. Not all seek to maximize expected return, by the way. I've got a beagle who thinks only of filling his belly, for example. (His breed has been selected for this attribute, apparently because that behavior covaries with tracking ability.) I've also got a Yorkie mix who couldn't care less about food until he starts shaking from low blood sugar. Point is, just as with dogs, there are different human behavioral phenotypes -- it's not all about return optimization and voluntary cooperation with others. Voluntary cooperation with others can sometimes even be fatal.
"Kim" has always struck me as the ultimate "in theory this, in practice that" fiction. If you count the sheer number of creepy middle-aged men of various marital states through whose hands he passes, the chances of him making it to puberty without being raped and then murdered are in practice, nil.
I disagree. The Lama, who is the main adult he interacts with, is a (plausibly portrayed) saint, quite unlikely to mistreat anyone. Mahbub Ali might enjoy sex with boys but he regards Kim as not only a friend but someone who will be very useful to him. Neither of the priests is a plausible rapist and the school he ends up in is an unlikely setting for adults raping boys, although there might be gay sex among the students.
Beyond which, Kim is very street-wise, sees and evades actual risks with frequent, but not universal, success.
There is at least a suggestion of such risk in Kim's time under the tutelage of Lurgan Sahib. At least, the boy who is Kim's fellow student is insanely jealous of Kim, in a way that suggests an emotional and perhaps sexual attachment. On the other hand, it's clear that Lurgan has no such intentions toward Kim, and if he feels attracted (Kim seems to be an extraordinarily attractive boy), keeps it sufficiently unexpressed not to alarm Kim.
Though I think the real answer is that Kipling is not telling that kind of story. The events of fiction are not usually a maximum-probability sample of the real world; they're more like a demonstration of a specific phenomenon in a controlled experiment from which other forces have been excluded. I don't suppose that Kim is predicated on a denial of evil (the Lama, after all, lives by the teaching that existence is suffering, and that the Buddha came to show us how to be released from suffering), but I don't think showing Kim's rape and murder would have contributed to the novel's theme.
So you're saying that religious people are less likely to steal if they believe in an omnipresent omnipotent G-d?
And other people are less likely to steal simply because they don't want to be known as the type of person who steals?
Theoretically, you convinced me that the first type of person is more trustworthy.
Whether religious people are more or less likely to steal depends on their view of what their god wants them to do — he might be in favor of some stealing. He might easily be in favor of other violations of trust.
Belief in a God won't lead to maximizing summed benefit in my sense, since that is defined by what people value, not what God wants them to value.
It isn't that they don't want to steal because that will result in their being known as the type of person who steals. The argument still works if they can steal and never be caught. They don't want to be known as the type of person who would steal, even if they never actually do steal because no good opportunities to steal show up. The point is clearest if you imagine that whether you are someone who would steal given a good opportunity is written on your forehead for everyone to read.