Peer disagreement: Why it matters and what the proper response to it is.
By request from Stillwater and Rose, here is a post on the epistemology of peer disagreement. Being the lazy sod that I am in my deepest of hearts, I realised that since I had already done a good discussion of the various views previously, I would just reproduce that discussion more or less intact. The context of the discussion below is me arguing that, under widespread disagreement, any reasonable view about the proper response to peer disagreement will require us to severely moderate our views. I argue that this would be the case even if the proper response to individual peer disagreement is not necessarily to moderate judgement.
Before we proceed with the discussion, I would like to elaborate on what I mean by peer disagreement. Peer disagreement is, basically, what happens when two or more epistemic peers disagree. Widespread peer disagreement is what happens when many epistemic peers disagree, with many people on each side of the issue. Defining what counts as an epistemic peer is slightly tougher, as different philosophers have used different notions of peerhood. The most stringent notion of peerhood is one on which two people count as epistemic peers only if they share the same evidence and have the same attitude towards that evidence, i.e. they draw the same kinds of connections and relations. A looser notion of peerhood holds between two people who are equally competent when it comes to a particular question or set of questions. The latter notion is obviously wider than the former and includes it: two people could be equally competent in a field of enquiry even if they don’t share the same evidence or don’t have the same attitude towards the evidence that they do have, whereas two people who have the same evidence and the same attitude towards it must be equally competent. In the excerpt below, I examine three different views: the equal weight view, Kelly’s total evidence view and Enoch’s first-person inescapability view.
One of the key assumptions that motivates the problem of peer disagreement is the uniqueness thesis. The uniqueness thesis says that for any proposition P and body of evidence E, there is exactly one credence level C in P which is most epistemically appropriate. Obviously, if more than one credence were appropriate, then people could unproblematically and reasonably disagree, as long as their disagreement was limited to those appropriate credences. The uniqueness thesis still allows that people with different evidence (and priors) may permissibly have different levels of confidence in P. It also allows that evidence may be misleading, i.e. it may really be the case that P, but the body of evidence E is such that E indicates not-P. Credence levels range from 0, which indicates complete disbelief, through 0.5, which is perfect agnosticism, to 1, which is complete belief. So it might be that the state of the evidence is such that we cannot say one way or the other about P; in that case we should be agnostic about P. None of what I’ve said here indicates that we will know what the best credence level is, only that it exists. A matter may be very difficult to assess, and some connections may be difficult to see. However, even if difficult to see, those connections are still there. The basic idea at work here is that there is no fundamentally indeterminate piece of evidence. Any particular piece of evidence, in its appropriate context, evidences a particular conclusion by some determinate amount, whether or not we can know what that amount is. Let us move on to a discussion of the various views.
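The credence talk above can be made concrete with a toy Bayesian calculation. This is my own illustration with made-up numbers, not something from the literature discussed here: it shows how, on one natural reading of the uniqueness thesis, a prior plus a likelihood model fixes exactly one appropriate posterior credence in P, and how that uniquely appropriate credence can still be misleading.

```python
# Toy illustration (mine, not the post's): given a prior and likelihoods,
# Bayes' rule fixes exactly one posterior credence in P -- the "uniquely
# appropriate" response to evidence E under the uniqueness thesis.

def posterior(prior_p, lik_e_given_p, lik_e_given_not_p):
    """Return the credence in P after observing evidence E."""
    joint_p = prior_p * lik_e_given_p
    joint_not_p = (1 - prior_p) * lik_e_given_not_p
    return joint_p / (joint_p + joint_not_p)

# Suppose P is in fact true, but E is misleading: E is twice as likely
# under not-P as under P. The uniquely appropriate credence then falls
# below the agnostic midpoint of 0.5, even though P is true.
c = posterior(prior_p=0.5, lik_e_given_p=0.3, lik_e_given_not_p=0.6)
print(round(c, 3))  # 0.333
```

The point of the sketch is just that "appropriate credence" and "truth" can come apart: the calculation is determinate, but the evidence can still point the wrong way.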
In order to say that we should moderate judgement [under widespread peer disagreement], we don’t need to say that we should moderate judgement under individual peer disagreement. While that would certainly help, it is not necessary. To see why, I will try to reconstruct what the best response to the equal weight view is and try to show that it still recommends moderating our views under widespread disagreement.
Here, I will use Thomas Kelly’s total evidence view[1] as the most reasonable alternative to the equal weight view. The equal weight view is the view that when someone who has exactly the same evidence that I do, and is just as good as I am at assessing that evidence, comes to a different conclusion about a particular proposition than I do, we should both moderate our views until we agree. Equal weight theorists suppose that this result is stable even when we weaken our notion of peerhood to one on which someone is my peer if he is as likely as I am to get the answer correct. This looser notion of peerhood says nothing about whether we both possess the same evidence or have the same attitude with regard to the evidence that we do have. If I have evidence that you don’t have, and symmetrically you have evidence that I don’t, then the fact of our disagreement is a fact about the relevance of our “personal evidence” to our respective conclusions: the fact that you arrive at conclusion H reflects some part of the evidence that you in fact possess, and the fact that I arrived at not-H reflects some aspects of my evidence. The equal weight theorist could then argue that we should suspend judgement until we have acquired the other person’s evidence and made a re-evaluation. If, on the other hand, I have all the evidence that you have, as well as more, then I am in a better epistemic position than you and need not, and indeed should not, defer to you. But in that case, I may not count you as a peer at all. Similarly, we could be peers and still evaluate the evidence in different ways. In that case, the flaws in my reasoning need not be the flaws present in your reasoning. However, given that there are flaws in my reasoning (which I may at present be unable to identify), I should moderate my own judgement.
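To make the equal weight view’s “moderate until we agree” idea concrete, here is one common formalization of it as credence averaging. This is my own sketch of how the view is often modelled, not a commitment the equal weight theorists discussed here make in exactly this form:

```python
# A minimal sketch (my formalization, not the post's) of the equal weight
# view: on discovering that epistemic peers hold different credences in P,
# each party gives every opinion -- including their own -- equal weight.

def equal_weight(my_credence, peer_credences):
    """Split the difference equally among yourself and your peers."""
    all_credences = [my_credence] + list(peer_credences)
    return sum(all_credences) / len(all_credences)

# One peer: I am fairly confident in P (0.9), my peer leans against (0.3).
print(round(equal_weight(0.9, [0.3]), 2))  # 0.6

# Widespread disagreement: many peers on the other side pull me
# much closer to their position, as the post's main argument predicts.
print(round(equal_weight(0.9, [0.3, 0.2, 0.4, 0.3]), 2))  # 0.42
```

Note how the second case illustrates the post’s wider claim: even modest per-peer concessions add up to severe moderation once disagreement is widespread.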
Kelly notes, however, that there is a problem with the equal weight view. The equal weight view presumes that the fact of disagreement swamps other kinds of facts. Kelly notes correctly that the fact of disagreement cannot constitute first order evidence against a proposition. Rather, the fact of disagreement constitutes higher order evidence and instead bears on the strength and nature of our first order evidence. So, when I find that a peer disagrees with me, all the disagreement does is call into question the strength of the first order evidence with respect to the conclusion. However, this is not conclusive. It is possible that my peer is wrong, and the mere fact that he is wrong in this one instance does not demote him from peerhood. Nothing prevents someone who is usually very reliable from drawing unreasonable conclusions from the given evidence every once in a while. Kelly notes that disagreement even in such a case is still evidence, though it is misleading evidence. But this higher order evidence need not totally undermine the confidence I have in the strength of my evidence. The reason for this is very simple. Kelly says that the views we can reasonably draw from our evidence are constrained by what we can reasonably believe about our evidence, and if we have anything to say about our evidence at all, surely the actual state of the evidence must play a part in it. The degree to which disagreement ought to affect our judgement about a particular case depends on the strength of our own first order evidence. If my evidence for the proposition I hold is indeed strong, then there is unlikely to be reasonable evidence against the conclusion, and peer disagreement should move me less. The weaker my evidence is, the greater the possibility that someone can disagree and have reasonable evidence to back him up.
There are of course some cases where our primary evidence may be sufficiently weak that we are able to conceive of sufficient counter arguments that would give us reason to completely suspend judgement. But this need not happen in every case of peer disagreement. Of course, if you find that many of your peers disagree with you, then, the likelihood of your initial evidential assessment being correct decreases, or at least it does so in most cases. The question of whether there are any exceptions will be explored later. Nevertheless, in most cases, whenever you find that you are in a distinct minority among your peers, it is often the case that you should moderate your views significantly.
The result above may not work for other alternatives to the equal weight view. I will briefly take a small detour to look at David Enoch’s view and try to show why it is, if not unreasonable, far more problematic than Kelly’s view. Roughly speaking, if Enoch’s view is right, then even under widespread disagreement, people need not significantly moderate their views[2]. Enoch’s argument, roughly, is that the equal weight view relies on treating the dissenting parties as truth-measuring devices, or truthometers. However, Enoch argues that it is impossible to treat ourselves as truthometers, since we cannot escape the first-person perspective. Given that ought implies can, he concludes, we need not treat ourselves as truthometers. Enoch draws a distinction between two types of responses to impossible goals. One type of response is to discard the goal as anything inherently worthy of pursuit. The other is to pursue the goal and try to approach the ideal as far as we can. He supposes that treating ourselves as truthometers fits the former approach, but he does not argue for this. The lack of any argument for why we should not simply try to treat ourselves as truthometers as far as possible implies that, at the least, we should be agnostic about whether to treat ourselves as truthometers as far as we can. However, the initial reasons for treating ourselves as truthometers presumably still exist: the endorsement of a proposition P by an extremely reliable epistemic agent is indicative of the truth of P, and the similar rejection of Q is likewise indicative of Q’s falsity. Enoch’s reply to the equal weight view, therefore, fails.
Where Enoch goes wrong is in his embrace of epistemic permissiveness. Despite his initial setup, where he sets aside epistemic permissiveness and endorses the uniqueness thesis for the purposes of the paper, his argument amounts to a rejection of positive justification[3] as a goal on the grounds of impossibility. The distinction that Enoch may be failing to appreciate is that whereas it is understandable that people will use the reasons that they do possess, they are not necessarily justified in doing so. So, even if Enoch criticises Kelly’s view because it doesn’t give people who face peer disagreement advice on what to do, this is not a problem for Kelly’s view. That’s because Kelly’s view is not a decision procedure, but an account of epistemic rightness. The above remarks are admittedly insufficient to do full justice to Enoch’s argument, which is more complicated than I have detailed. Nevertheless, they are intended to suggest, very roughly, that Enoch’s thesis is at best incomplete and that Kelly’s view is the better way to go.
Leaving aside Enoch’s view, let us take for granted that Kelly’s view is mostly correct and return to the possible counter-example to Kelly’s total evidence view. Richard Fumerton raises a possible counter-example to moderating your beliefs when confronted with peer disagreement[4]. Fumerton brings up the case of the Monty Hall problem.[5] Fumerton says that, prior to explanation, even though most of his colleagues whom he considers to be peers would disagree with him as to what the optimal strategy is, he should stick to his guns. The reason is that he knows he is privy to an argument which he knows he has assessed correctly, and which, if he were to tell his colleagues, would also convince them. There is an intuitive plausibility about Fumerton’s case, and it could provide a possible counter-example to Kelly’s view if Kelly’s view could not accommodate the intuition. Fortunately, this is not the case. Note that in Fumerton’s case, the first order evidence is so strong that it is just inconceivable that anyone, once presented with the evidence, could reasonably disagree.
The possible counter-example is relevant for a very important reason. Kelly’s argument presupposes that evidence is shared. However, the type of peerhood we presupposed is not as strong as Kelly’s; it does not require evidence to be shared. But the strength of Kelly’s view does not rely on the actual existence of peer disagreement; rather, it relies on the possibility that someone could reasonably provide a counter-argument or counter-evidence against your position. According to Kelly, the real threat comes from the possibility of reasonable disagreement. The possibility of reasonable disagreement indicates that the evidence may not be as strong as initially thought. The presence of actual peer disagreement simply confirms that such disagreement is possible.
Thus far, we have shown that Kelly’s view is better than Enoch’s. However, we have yet to show that it is better than the equal weight view. To recap: the equal weight view says that the fact of peer disagreement necessarily swamps other kinds of evidence; Enoch’s view says that our first-person perspective necessarily swamps the fact of peer disagreement; Kelly’s view is that which kind of evidence dominates depends on the relative strength of each kind of evidence. If the initial primary evidence is very weak, then the fact of peer disagreement is evidence that it is weak and should appropriately weaken our confidence in the assertion. Kelly’s point is not that epistemic peers can reasonably disagree; rather, it is that the situation is asymmetric: the person who has worse first order evidence should moderate his judgement more than the person with better first order evidence. This does not necessarily yield a practical guide, as it may be difficult in practice to assess the strength of one’s first order evidence. However, that is not a problem for the view. It is not a practical guide, just an account of what is the epistemically right thing to do.
The problem with the equal weight view is that it cannot really account for why the fact of disagreement swamps the other evidence. The equal weight theorist argues that to still consider our initial evidence is to double count it. This is not true. To make the charge of double counting stick, it must be the case that the conclusion that your peer draws from the evidence is just as reasonable as the conclusion that you draw. Of course, this is not necessarily true. Just because someone is your peer and is, generally speaking, as reasonable and as competent as you are, it does not follow that in any particular case his conclusions from the evidence are as reasonable as yours. It might be that his are more reasonable, or that yours are, or that neither yours nor your peer’s are. Of course, as more and more peers disagree with you, the possibility that all of them are being less reasonable than you in this one case decreases.
Is the question of peer disagreement anything more than an abstract problem? Certainly! First of all, settling the question of the proper response to peer disagreement helps us answer the question of how to respond to disagreement with people who are experts and with people who are relative ignoramuses in a particular field. Another question has to do with how to deal with disagreement in a number of investigative communities like juries, electorates, scientific communities and philosophical communities. For example, there seems to be a conundrum raised in the context of voting in a democracy.
[1] Thomas Kelly, “Peer Disagreement and Higher Order Evidence”, in Disagreement, ed. Richard Feldman and Ted A. Warfield, Oxford University Press, 2010, ch. 6
[2] David Enoch, “Not Just a Truthometer: Taking Oneself Seriously (but Not Too Seriously) in Cases of Peer Disagreement”, Mind 119(476), October 2010
[3] I am drawing on the distinction Sinnott-Armstrong makes between permissive and positive justification in his book Moral Skepticisms, Oxford University Press, 2006, pp. 65–66
[4] Richard Fumerton, “You Can’t Trust a Philosopher”, in Disagreement, ed. Richard Feldman and Ted A. Warfield, Oxford University Press, 2010, ch. 5
[5] The Monty Hall problem is set up as follows. There are three doors; behind one there is a large monetary reward, and behind the other two, nothing. The contestant does not know which door hides the money. The game-show host, who knows what is behind each door, asks the contestant to pick a door. Once the contestant has picked, the host opens one of the doors not picked by the contestant which is also empty. The host then asks the contestant whether he wants to switch to the other unopened door. While the intuitive answer is that it doesn’t matter, the correct strategy is to always switch. The reasoning proceeds as follows. If you originally picked an empty door, switching wins you the money; if you originally picked the door with the money, switching loses it. Since the probability of initially picking an empty door (2/3) is higher than that of picking the door with the money (1/3), switching gives you a higher chance of getting the money. In fact, you double your chances of winning by switching.
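For readers who want to check the footnote’s arithmetic rather than take it on testimony, here is a small simulation. It is my own illustration, not part of the original post:

```python
# A quick Monte Carlo check of the footnote's claim that switching
# roughly doubles your chance of winning (2/3 vs 1/3).

import random

def play(switch, n_doors=3):
    prize = random.randrange(n_doors)
    pick = random.randrange(n_doors)
    # The host opens a door that is neither the pick nor the prize.
    host = random.choice([d for d in range(n_doors) if d != pick and d != prize])
    if switch:
        # Switch to the one remaining unopened door.
        pick = next(d for d in range(n_doors) if d != pick and d != host)
    return pick == prize

trials = 100_000
stay = sum(play(switch=False) for _ in range(trials)) / trials
swap = sum(play(switch=True) for _ in range(trials)) / trials
print(f"stay={stay:.3f} switch={swap:.3f}")  # close to 1/3 and 2/3
```

The simulation is a nice miniature of Fumerton’s point: once you possess the decisive argument (or here, the decisive computation), widespread pre-explanation disagreement stops being much evidence against you.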
I just want to come out from the woodwork to say I greatly enjoyed this post. The whole time I was reading I kept thinking of the one wise judge vs. twelve idiots question, but then towards the end, I found myself thinking about how this framework does and doesn’t apply to science. In a certain sense, the scientific method is a standard that aims to get around the problems raised here – there’s also the fact that science is an empirical pursuit, so we can see afterwards who’s right and who’s not (so far we’re pretty sure quantum theory is on the money, as cracked out as it is, because every experiment ever designed to disprove it has failed). We can’t do this for ordinary experience, however, since we aren’t talking about ideal cases.
You’re going to get a bit of this in science as well. What does the result of a particular experiment really mean vis-à-vis a particular hypothesis/theory? Also, every once in a while, even the greats among us slip up. Einstein famously thought that quantum mechanics couldn’t possibly be true. What are we to do about such situations?
You can certainly claim that the scientific method is flawed, or that it allows flawed thinking or experiment design to get past whatever standard you set up, but it is a standard that exists outside subjectivity, that consciously eliminates the problem of subjectivity, so that whatever systematic problems do arise are the problems of the system and not the problems of its individual agents (at least theoretically speaking – this whole thing breaks down when we’re talking about “sciency” things like neo-social Darwinism and evopsych).
You’re mistaking me. My claim is this:
The line of reasoning from evidence (e.g. the result of an experiment) to conclusion is not straightforward. Not everyone can draw those conclusions. Similarly, even experiments require some interpretation. This is precisely why the scientific enterprise is, and ought to be, a social enterprise. The scientific enterprise is conducted by a specially trained community whose members check each other and thereby reduce error. No single scientist can do it all on his own. Peers review each other’s work to make sure that such work adheres to accepted standards. Peers look at the evidence that individual scientists draw upon to reach their conclusions and ask themselves whether those conclusions are warranted. It is this social factor in scientific investigation which is often overlooked. But the social factor is key to the success of the scientific enterprise. I’m willing to bet that absent any kind of peer review, scientists would be drawing all sorts of unwarranted conclusions from their experiments and our state of scientific knowledge (to use rather imprecise terminology) would be much more backward and messy.
Sorry, I didn’t mean to misrepresent your views. I wasn’t disagreeing with you above, just qualifying my earlier statement. As for the social aspect of science being a crucial element, I definitely agree.
Misunderstanding was probably my fault as well, sorry.
I really like this post as well.
I would take issue with the statement that Einstein believed quantum mechanics to be untrue.
My understanding is that he took issue with the concept of randomness, and its liberal (non-Rawlsian) application.
And I see his point. True randomness would be difficult to prove. Some manner of pattern may yet be ascertained.
Epistemic disagreement ought to lead the parties to propose better experiments to resolve those disagreements. See Bell’s inequality.
This was an excellent piece, though my poor, Internet-addled brain required re-reading a few key paragraphs before I got what you were saying. Thanks for writing it.
But what about the legitimacy of saying “I agree to disagree”, which is what started this conversation? Kelly’s view is not, as you say, a practical guide. As a practical matter, when peers disagree, one of us is right and one of us should be weighting the evidence differently. But I can’t always be sure it’s me (assuming an epistemic peer). Also, what if I believe my evidence is indeed weak? Then it seems legitimate to say to a disagreeing interlocutor, whom I believe to be using sound reasoning but perhaps slightly weaker evidence, that I am not willing to try to persuade her that I am right and she is wrong.
Looking at abortion, which is what brought this up: I think all the personhood arguments for any particular stage of embryonic development are too weak to endorse (conception, heartbeat, viability, sentience, etc.). My own position is that there are no necessary and sufficient conditions for personhood, but that’s based on a relatively tentative and weak philosophical position of mine that we shouldn’t always be trying to muck around with necessary and sufficient conditions. So I know personhood when I see it, and I think I see it some time after conception, but pretty early on. How strong is my evidence for that? Not very! I think it’s warranted, but barely, and I want to respect the fact that there are people who think that there are indeed necessary and sufficient conditions for personhood.
My own position is that there are no necessary and sufficient conditions for personhood, but that’s based on a relatively tentative and weak philosophical position of mine that we shouldn’t always be trying to muck around with necessary and sufficient conditions. So I know personhood when I see it, and I think I see it some time after conception, but pretty early on. How strong is my evidence for that? Not very! I think it’s warranted, but barely, and I want to respect the fact that there are people who think that there are indeed necessary and sufficient conditions for personhood.
+1
I think it’s warranted, but barely, and I want to respect the fact that there are people who think that there are indeed necessary and sufficient conditions for personhood.
I’m not sure this is a case of agreeing to disagree, since you’re not agreeing that their view is true, just that it might be true. It seems to me that you’d be less inclined to ‘agree to disagree’ with someone who holds a robust view that since personhood begins at conception, abortion is murder, than you would someone who is only slightly further to the pro-life side than you are. Additionally, you might be inclined to agree to disagree with them, while they wouldn’t be inclined to offer you the same concession.
Most of the disagreement, it seems to me, is based on the priors. If you discount priors and assume a restricted set of evidence E, then both of you should agree in your confidence of P. It’s the priors which cause the disagreement. And my guess is you disagree about those.
Which gets to some of my worries about the framework Murali has outlined here: how do you keep priors from creeping into the evidence set? And if we can’t, then do we have to include priors as a necessary part of the evidence set? And if so, then doesn’t the thesis amount to the claim that any two people with the same evidence (and priors) and of the same intellectual abilities will necessarily ascribe the same confidence level to proposition P?
But how is that interesting?
And if so, then doesn’t the thesis amount to the claim that any two people with the same evidence (and priors) and of the same intellectual abilities will necessarily ascribe the same confidence level to proposition P?
Let’s look at the abortion issue. Are the priors that I have the same as the ones you have? No? If not, can those priors be stated in such a way that I am aware of them, i.e. is the prior just some indefinable something, or does it actually count as a premise in your argument? Presumably, if our different conclusions boil down to different priors, then resolving the disagreement would require us to resolve the difference in priors. Are our priors the kinds of things that can be critically evaluated? If yes, can we critically evaluate whose priors are better? If priors are the kinds of things we cannot critically evaluate, how is it that we continue to believe in them? If they cannot be evaluated, shouldn’t we both be reserving judgement?
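The role priors play here can be illustrated with a small Bayesian sketch. This is my own illustration with made-up numbers, not the commenters’: two people who weigh the evidence identically – same likelihoods – but start from different priors end up with different confidence in P.

```python
# Illustration (mine, not the thread's): identical evidential weighing plus
# different priors yields different posterior confidence in P.

def update(prior, lik_p, lik_not_p):
    """One Bayesian update on shared evidence E."""
    return prior * lik_p / (prior * lik_p + (1 - prior) * lik_not_p)

# Both parties agree that E favours P two-to-one...
lik_p, lik_not_p = 0.8, 0.4

alice = update(0.5, lik_p, lik_not_p)   # ...but Alice starts agnostic,
bob   = update(0.1, lik_p, lik_not_p)   # while Bob starts sceptical of P.
print(round(alice, 2), round(bob, 2))   # 0.67 0.18
```

On this sketch, the disagreement is entirely traceable to the priors, which is why the question of whether priors can themselves be critically evaluated matters so much.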
Murali, ahh good. I think I misunderstood what you wrote in the OP. Priors will certainly play a fundamental role in how we judge any other evidence set E, or if we even accept it. So even tho we can restrict E to a limited number of beliefs in a particular context, for practical purposes E will be the total set of (relevant) beliefs which contribute to our confidence in P. I also like your statement about priors: either they can be evaluated and subject to argument or they can’t be, in which case it’s rational to reject them.
FWIW, I think the position that all abortion is murder is a defensible one. But I do agree that most on both sides of the issue do not think there is any way to hold the other position defensibly!
I’m not so sure about that. On what grounds would you say that you agree with them; in what sense is their belief ‘defensible’ given your evaluation of the evidence? One way to make sense of it is to say that you agree with the justification for their belief, that is, that you agree with the argument for the claim that abortion is murder. If so, then it seems that you hold two contradictory beliefs: both that abortion is always murder and that abortion sometimes isn’t murder. Or at a minimum, that you believe the two claims – abortion is always murder and abortion sometimes isn’t murder – are both equally justified. But by hypothesis, you don’t think that: you hold the claim that abortion sometimes isn’t murder with a higher level of confidence than its alternative.
It’s at this point that Murali’s main thesis comes into play. If both you and another person share the same intellectual abilities and the same evidence set E, and if the two of you disagree in the confidence of P (abortion is always murder), then the difference must be accounted for in some way. And I think the difference is accounted for by considering the antecedently held beliefs by which both of you evaluate and ascribe confidence levels to specific claims in E. If so, then you’re not necessarily agreeing to disagree. You’re saying that given the other person’s priors, his confidence level in P is justified.
But do you agree with the priors? I think this is where it gets sticky. Murali’s comment above suggests that priors are subject to evaluation and argument just like any other beliefs. And I agree with that. So in my view, it’s not correct to say that two people agree to disagree about whether abortion is/is not murder: what we’re in fact saying is that while I disagree with you, I agree to concede (usually for pragmatic purposes) the foundational premises which justify your view.
Or in other words, if two people who ostensibly agree to disagree about P were to identify and evaluate their priors, there would be movement towards agreement in the confidence level of P. This rarely happens, of course, since most people either aren’t consciously aware of their priors, or hold those priors so closely that they won’t surrender them in any event. In both cases, tho, I think that means the ascribed confidence level in P isn’t warranted (or defensible), and the belief in P is to some degree irrational.
Also, Murali, I want to thank you for this post. I know shamefully little about epistemology – just enough to teach intro, or whenever it’s related to my area, or related stuff I’ve picked up by osmosis from my husband who has an AOC in phil sci. I’ve been meaning to beef up, like, forever.
Are you a philosopher (professionally, that is)?
And speaking of epistemic peers and disagreement, this cracked me up: http://fauxphilnews.wordpress.com/2012/02/22/psychologists-search-the-philosophical-mind-for-bullshit-detector-find-friendship-deterrence-system-instead/
Are you a philosopher (professionally, that is)?
That’s the eventual goal, but for now, I’m just a master’s student hoping to get into a good PhD program.
I just did a bunch of grad courses on social epistemology, moral epistemology and reflective equilibrium. (And the core of the first part of my master’s thesis is about reflective equilibrium and how Rawls puts it to use.)
Good luck with the PhD program! I remember what an anxious time that was.
A friend of mine just did a dissertation defending reflective equilibrium in moral cases. And (speaking of peer disagreement), I agree with my friend – I think some reflective equilibrium is absolutely crucial in coming up with an ethical system. Just doesn’t make sense to me otherwise (in that, without reflective equilibrium, you can come up with a perfectly coherent moral system aimed at maximizing the amount of grape jelly in the world). Have you posted on the topic?
I actually disagree with you on that. I don’t think reflective equilibrium gets us anywhere when it comes to moral theory (because we don’t really have any reason to think the starting points are any good in the first place).
I do however think that I can start with some self evident premises and then develop a moral theory from there.
One premise is that ought implies can.
Another premise is that at least the most fundamental principles are valid for all possible persons.
Even though this is not fully worked out yet, the rough idea is that we cannot permissibly pursue a particular end E using a particular means M if pursuing E via M would be counterproductive were everyone who cared about E to pursue it via M. This might need to be tweaked a bit here and there, and I’ll need to be extra careful, but this seems viable. (It is also quite reminiscent of Kant’s Formula of Universal Law.)
But how do you determine that as the most plausible version of a basic moral principle instead of (say) some version of the mere means principle without some kind of reflective equilibrium?
Also, how do you cash out “counterproductive” without something at least similar to reflective equilibrium?
But how do you determine that as the most plausible version of a basic moral principle instead of (say) some version of the mere means principle without some kind of reflective equilibrium?
Presumably, the principle can be deduced from the initial 2 premises. On finding that the conclusions are counterintuitive, I will not go back and tweak my premises so that they produce more “plausible” principles.
Also, how do you cash out “counterproductive” without something at least similar to reflective equilibrium?
I’m not sure what you mean. When I say an action is counterproductive, all I mean is that the action fails to bring the actor closer to the end for which it was performed (or may even take the actor further away from it). I am not sure how reflective equilibrium would figure into it. Do you mean something very different from what I mean when I talk about reflective equilibrium?
By reflective equilibrium, I refer to a process by which considered judgements and high-level principles are tweaked to fit with lower-level, more fundamental principles, and vice versa. The end result is something which would tend to be fairly commonsensical. The presumed starting point is some set of pre-theoretical considered judgements about cases. Principles are constructed to fit these considered judgements, i.e. application of these principles will reproduce those judgements fairly closely. Judgements which do not fit are discarded. Some more mutual adjustment takes place until the set of principles and judgements is in equilibrium.
By contrast, I move in only one direction. My argument is roughly that if my premises are necessarily true, then any conclusions I properly deduce from the premises are necessarily true as well.
I was thinking you’d have to see if things were counterproductive really, but maybe that’s not the case.
But I still think that maximization of grape jelly could follow from your two premises. Or at least, act-utilitarianism or something like the mere means principle.