‘Morality and Rationality Revisited’ Programme
9:30 - 10:00: Registration, coffee & tea + opening remarks
10:00 - 11:00: Luke Elson (University of Reading) - ‘Ought’ Implies ‘Can’ Supports Moral Rationalism
11:10 - 12:10: Lizzy Ventham (Trinity College Dublin) - Different Kinds of “Oughts” and a Response to Bootstrapping Objections
12:10 - 13:15: Lunch Break
13:15 - 14:15: Jens Gillessen (Philipps University Marburg) - If Rationality Is a Myth – What Becomes of Moral Rationalism?
14:25 - 15:25: Franz Altner (University of Vienna) - Willing a Broomian Solution to Moral Dilemmas
15:25 - 16:00: Afternoon Tea Break
16:00 - 17:30: Keynote talk - Maria Alvarez (KCL)
18:30: Conference Dinner (Hansa's Restaurant)
Abstracts
‘Ought’ Implies ‘Can’ Supports Moral Rationalism
Luke Elson (University of Reading)
It is a philosophical commonplace that ‘ought’ implies ‘can’. I argue that this principle supports Moral Rationalism: ‘the claim that moral obligations are or entail sound practical reasons for action.’[1]
My argument is explanatory, and trades on the apparent fact that ought-can principles apply both to moral obligations and to practical reasons. I argue that the best explanation for this ‘coincidence’ is that morality provides or entails practical reasons.
There are three sections, and a brief conclusion.
- Ability-Restriction Principles
For the first, and in part to avoid terminological confusion, I appeal to a fairly weak version of the claim that (moral) ‘ought’ implies ‘can’:
Moral Obligation-Can. If an agent A is obligated to ɸ, then A can ɸ.
This formulation is weak because arguably there are situations in which one morally ought to ɸ yet is not obligated to do so. If there are supererogatory acts, they engender such situations. Moral Obligation-Can does not apply to the supererogatory.
The second and parallel principle concerns (normative) practical reasons. There are various formulations, but here is the strongest:
Reason-Can. If A has (any) reason to ɸ, then A can ɸ.
In my opinion, the arguments of Streumer (2007) for Reason-Can are decisive (the principle explains, for example, why we cannot have reason to change the past). The argument to follow does go through with a weaker version—that if A has most reason to ɸ, then A can ɸ —but things become rather more complicated. So I’ll assume the stronger version.
Officially, I do not defend either Moral Obligation-Can or Reason-Can; the argument of the paper is conditional on them.
- Moral Rationalism explains Moral Obligation-Can
With these two conditional premises in hand, in this section I argue that they jointly support a substantive metaethical claim:
Moral Rationalism. If A is morally obligated to ɸ, then A has some practical reason to ɸ.
This principle is of central importance to metaethics, and can be accepted both by strong realists—such as Shafer-Landau (2003), and of course Parfit (2011) who makes reasons the central plank of his view—and by error theorists such as Mackie (1977). It is not congenial to naturalists and, famously, Philippa Foot denied it:
This seems to take us to the heart of the matter, for, by contrast, it is supposed that moral considerations necessarily give reasons for acting to any man. The difficulty is, of course, to defend this proposition which is more often repeated than explained.[2]
As a partial response to Foot’s challenge, we are now in a position to give a positive (conditional) argument for Moral Rationalism:
1. Moral Obligation-Can is true.
2. Reason-Can is true.
3. Together, Moral Rationalism and Reason-Can best explain Moral Obligation-Can.
4. So, by inference to best explanation, Moral Obligation-Can supports Moral Rationalism.
Clearly, premise (3) requires the most defence. Here is the argument. If Reason-Can and Moral Rationalism are true, they immediately entail and explain Moral Obligation-Can: if we assume that ‘has a reason to ɸ’ entails ‘can ɸ’, then if ‘has a moral obligation to ɸ’ is or entails ‘has a reason to ɸ’, then it is clear why ‘has a moral obligation to ɸ’ entails ‘can ɸ’.
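The entailment here is a simple chaining of conditionals, and can be checked mechanically. The following Lean sketch is an editorial illustration, not part of the abstract: `Obligated`, `HasReason`, and `Can` are placeholder propositions standing in for ‘A is morally obligated to ɸ’, ‘A has a reason to ɸ’, and ‘A can ɸ’ for a fixed agent and act.

```lean
-- Illustrative sketch (names are placeholders, not the author's notation).
variable (Obligated HasReason Can : Prop)

-- Reason-Can:         HasReason → Can
-- Moral Rationalism:  Obligated → HasReason
-- Jointly they entail Moral Obligation-Can: Obligated → Can
example (reasonCan : HasReason → Can)
    (moralRationalism : Obligated → HasReason) :
    Obligated → Can :=
  fun obligated => reasonCan (moralRationalism obligated)
```

The proof term is bare function composition, which reflects the point that nothing in the entailment turns on how ‘can’ is understood.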
Put a slightly different way, the argument highlights something distinctive about morality, as opposed to other potential ‘sources’ of practical reasons. Upon learning that A cannot ɸ, we withdraw both a verdict that A has practical reason to ɸ and that A is obligated to ɸ. (At least, we should withdraw them if the two ability-restriction principles are correct.) Why these two verdicts? This is a coincidence, and calls for explanation.
And other verdicts are not withdrawn. Upon learning that you cannot ɸ, we withdraw the verdict that you have practical reason to ɸ. But even the Humean about reasons does not withdraw the verdict that you desire to ɸ. There is no parallel principle that ‘desired’ implies ‘can’. (Though there may be normative principles about how you ought to structure your desires.)
Moral obligation shares a feature with practical reason—what I have called ‘ability-restriction’—that is not shared by other potentially reason-giving concepts. This is most naturally explained by the uniquely tight connection between obligation and practical reason that Moral Rationalism posits.
- Can the Role of Morality Explain Moral Obligation-Can?
Premise (3) of the above argument is a comparative one: that Moral Rationalism forms part of the best explanation of Obligation-Can. In this section I consider a natural objection to the argument—that there is a better explanation available.
The objection is a difficult one to decisively respond to (who is to say what other candidate explanations might appear?), so here I consider and reject one prominent alternative explanation: that obligation-can is explained by the ‘action-guiding’ role of morality.
One version of the explanation could appeal to something like the following principle:
Guidance-Can. If a system of rules guides action, then an ought-can principle obtains.
I argue that Guidance-Can is false. Rules and obligations can guide action without it always being possible to comply with them. For example, an impossible law can ‘point’ us in the direction of correct behaviour.
Now it may be that as a contingent matter, morality is better able to fulfil its action-guiding role if it is restricted to obligations with which we can comply.
But this claim is questionable, and in any case its contingency weakens the explanatory force: if Obligation-Can is true, then it is plausibly a conceptual truth about moral obligations. Such a conceptual truth cannot be completely explained by contingent facts.
Another objection to this general strategy is that Guidance-Can seems to be violated in a number of non-moral rule systems, such as the law. It is often remarked that U.S. Federal law is so complex and contradictory that everyone is violating it in one way or another. And yet Federal law clearly does guide action.
- Conclusion
I close with a brief historical comparison. Earlier I mentioned that arguing from ‘ought’ implies ‘can’ is somewhat rare. But the argument I give for Moral Rationalism bears some similarities to a much earlier argument due to W. D. Falk:
For we do assume that “ought implies can,” that a man can only have a duty which he can be expected to fulfil. But we cannot be expected to do anything if we feel no impulse whatever to do it. [. . . ] The presence of some natural impulse is constitutive of a natural duty to do anything.[3]
Though it is nice to have such good company, I contend that my argument is more successful than this one.
Falk relies on a substantive notion of ‘can ɸ’ as requiring motivation or ‘impulse’ to ɸ. This seems implausibly strong. But my argument for Moral Rationalism does not rely on any such spelling out: the argument above goes through however we understand ‘can’.
Different Kinds of “Oughts” and a Response to Bootstrapping Objections
Lizzy Ventham (Trinity College Dublin)
In this paper I defend the claim that moral oughts are the same kind of thing as other rational oughts. Both, according to the position I defend, are contingent on the agent’s desires. I begin by explaining what I take this to mean, and why it’s a controversial position to hold. I introduce the ‘bootstrapping objection’ to my position, and briefly discuss how previous attempts to answer this objection don’t work. Instead, I suggest my own answer to the objection, and show how it succeeds where the others have failed.
Moral ‘oughts’ are often taken to be a different kind of creature to other oughts. In particular, they’re taken to be a different kind of thing to what you ought to do according to your desires. Kant (2012) made a distinction that was very similar, claiming that moral oughts were ‘categorical imperatives’ because (among other reasons) they required us to perform actions for their own sake, as opposed to hypothetical imperatives that required us to perform actions as the means to some other desired end. Smith (1994) recognised this difference too, when he argued for one of the prongs of his trilemma of ‘The Moral Problem’: that moral oughts, unlike other kinds, are necessarily motivating.
According to an account of moral oughts as hypothetical imperatives (most famously argued for by Foot (1972), but then rejected by later-Foot (2001)), moral oughts apply to us because of common, shared moral desires we have (see also Brink, 1989). Just as we ought to heat up the jug because we want the coffee to be warm, so too we ought to attend the protest because we want to do what’s right.
My paper focuses on one particular kind of objection to this desire-based account of moral oughts: the bootstrapping objection. According to the bootstrapping objection, generating ‘oughts’ from desires is problematic because it means an agent can make it the case that they ought to do something simply from intending (akratically) to do it. Take this example from Kiesewetter (2017) p.82:
…suppose you are weighing your reasons for and against two incompatible courses of actions, say getting some work done at home and watching a football match with your friends. Your reasons, we can suppose, together require you to stay home: you have to hand in important work tomorrow, and the match is not supposed to be very promising, after all. In deliberating, you are reaching the correct conclusion that you ought to stay at home. But then you akratically decide to go watch the football match with your friends, you need to call them and ask where they are meeting. So now you intend to watch the football match, and you believe that in order to do so, you have to call your friends. […] [W]e can now detach the conclusion that you ought to call your friends. But this seems an absurd conclusion, given that we have just said that you ought not to meet your friends, but rather stay home.
As the idea goes, if our desires are an important part of what we ought to do, then forming an intention can generate an ‘ought’, even if that intention is for something we ought not to do. We have contradictory statements: that the agent ought to call her friends, but ought not to call her friends. This new ought is, as it were, pulling itself up by its own bootstraps; it’s willing itself into existence.
There have been other attempts to solve bootstrapping problems, such as Broome’s (2007) wide-scoping approach, but these are not necessarily successful (Kiesewetter 2017). The answer, I think, lies in understanding the supposedly contradictory oughts as being different kinds of ought, and so not the kinds of thing that can contradict. That is, there can be moral oughts, legal oughts, and what you most ought to do according to your desires. In Kiesewetter’s case above, we have what the agent ought (overall) to do, given all of their projects and their long-term desires weighed up against their short-term ones. The ‘ought’ which follows from their short-term desires, their akratic intention to go and watch the football, is an ought of a different kind: an ought based on that short-term desire alone. And so there’s no contradiction.
With this solution, we don’t generate any oughts by any method other than how they should be generated, and nothing is pulled up by its own bootstraps. Moral desires generate moral oughts; weak desires and akratic intentions generate weak oughts.
Finally, I respond to an objection from Piller (2013) against this kind of attempt to answer bootstrapping problems. Piller argues against distinguishing between subjective and objective ‘oughts’, on the grounds that it’s failing to see what practical deliberation is really trying to get at. Practical thinking, he says, is about what we ought to do, not about a variety of different kinds of ought.
I respond by demonstrating that these different oughts are exactly the kinds of thing which are important to practical deliberation. They’re still a unified concept, about what we ought to do. We can see an analogy here with our talk of reasons. We have stronger and weaker reasons, different kinds of reasons (moral and prudential, for example), and we can have reasons to do contradictory actions. But these different kinds of reason all mean the same kind of thing: they’re still normative and practical reasons for action.
This paper argues that indexing different oughts in this way not only provides a solution to bootstrapping objections and helps defend the thesis that oughts can be contingent on desires; it also seems independently plausible, given how we understand normativity and different kinds of reason.
If Rationality Is a Myth – What Becomes of Moral Rationalism?
Jens Gillessen (Philipps University Marburg)
My talk will be based on two papers of mine on normative aspects of rationality (both under peer review), and will proceed in two steps. The first part concerns recent developments regarding the normativity of rationality. The second part will explore consequences for moral rationalism, by which I mean the view that the normativity of moral requirements is nothing but the normativity of rationality itself; its principal proponents are Christine Korsgaard and (arguably) Immanuel Kant.
In the first part, I intend to make the case for certain elements of what has, in the wake of Raz (2005) and Kolodny (2005, 2007, 2008), come to be dubbed ‘Myth Theory’ about rationality. I find these elements in Kolodny’s version of the view: first, the requirements of rationality concern the internal coherence of our mental states; second, there is no normative reason to be coherent as such (no ‘C-reason’, for short). Despite a whole spate of rejoinders to various aspects of Myth Theory (see e.g. Southwood 2008, Bratman 2009, Reisner 2011, Broome 2013, Broncano/Vega 2015, Shackel 2015), I am going to argue that there is indeed no reason of a general kind to conform to requirements of formal coherence at the level of particular actions or attitude revisions. Part of my argument will be that no credible intuitive evidence has been produced for the existence of such reasons, as a reexamination of ‘eccentric billionaire’ cases (in which agents are offered rewards for being incoherent) shows. Hence the supposition of C-reasons is far less reasonable than the contrary supposition.
Meanwhile, it still seems plausible that virtually everyone, almost all the time, has one particular reason to cultivate rational dispositions. This reason seems to consist in the fact that in the long run, rational dispositions help us be as we have most first-order normative reason to be (see Kolodny 2008, 458f.; 2009, 376). However, reasons favoring the acquisition of such dispositions (‘D-reasons’, for short) do not lend support to the existence of some general sort of C-reason. Even though there will often be reasons that favour what rationality requires, rationality is not itself a source of normativity.
In the second part, I will explore how these Kolodnian upshots bear on the viability of moral rationalism. I will begin by explaining what kinds of views I count as examples of moral rationalism, and which I don’t. In fact, many accounts that have been described as seeking a foundation of morals in ‘rationality’, ‘reason’ or ‘reasonability’ did not appeal to rationality as coherence of attitudes, but to prudence. This goes for views as diverse as contractarianism, classical utilitarianism and moral contractualism. These accounts remain unaffected by Myth Theory, not only because they locate the ultimate source of normativity in prudence instead of rationality, but also because recent theorizing about normative reasons has made it clear that the transmission of normativity from potential ends to potential means depends on objective relations of ‘facilitation’ rather than on an instrumental requirement of rationality (Raz 2005, Kolodny forthcoming).
What Myth Theory does affect are Kantian-inspired accounts, specifically Korsgaard’s (1996, 2009), which traces the source of moral normativity to the value of rational agency. If rationality is not a normative source of requirements in the first place, her theory must seem doomed to failure from the outset.
However, there is a view in the close neighborhood of Korsgaard’s that escapes such easy refutation even on Myth Theory’s premises. I call it Dispositional Moral Rationalism (DMR). Here is a rough outline. The view assumes that there is reason for us to be rationally disposed. It claims, furthermore, that this D-reason gives us, if not a reason to satisfy moral requirements on particular occasions, then at least a reason to acquire moral dispositions. Here is how: rational agency can be considered a (complex) disposition. This disposition is intrinsically valuable, meaning that its very acquirability gives us normative reason to acquire it. What kind of disposition is it? From a Korsgaardian perspective, it includes a disposition to legislate for oneself by means of universal principles; and to self-legislate in this way just is what morality requires of us. Rational agency is thus claimed to include a moral disposition. Hence any reason to cultivate the former will trivially give us a reason to cultivate the latter.
Would it make sense for a Kantian such as Korsgaard to switch to DMR? At the end of my talk, I shall explain what this depends on. DMR would work if rational agency were a value in and of itself. While Korsgaard is clearly committed to this view, considerations from the first part of my talk cast doubt on it. On the (broadly pragmatic) view I find most plausible, our D-reason is a higher-order reason arising from first-order normative reasons; reasons that we have independently of any reason to be rational. These first-order reasons will comprise either 1) non-moral reasons—most importantly, prudential reasons, i.e. facts bearing on the reason-owner’s well-being; 2) moral reasons (such as the fact that I have given a promise); or 3) both. In cases 2 and 3, moral normativity co-constitutes the D-reason itself, which would render DMR circular. In case 1, it would seem that whatever reason we have to cultivate rational agency will flow directly from our non-moral first-order reasons (whatever they may be). So on this horn of the trilemma, rational agency would do no actual work in the theory.
To sum up, Myth Theory about rationality has plausible elements that pose a serious threat to both Korsgaard’s and the dispositional version of moral rationalism. In the end, its prospects depend on whether our D-reason derives from presupposed first-order normative reasons, or is itself primitive. This is why my results leave a lane open for future developments in moral rationalism.
Willing a Broomian Solution to Moral Dilemmas
Franz Altner (University of Vienna)
In discussing moral dilemmas, Van Fraassen (1973) considers an interesting way to solve them by appealing to what he calls the existentialist hero. The existentialist hero is someone who, in cases of conflicting moral ideals, does not fail to act but resolves the situation through an act of will. Following this portrayal, I propose that this kind of heroism, as much as it is heroic, is also a matter of rationally following through with one’s decisions. I argue for this by introducing a novel rational requirement that provides us with an account of what kind of dispositions and habits are needed for a rational progression from a decision to an action.
I take it that moral dilemmas can be of two kinds. Either the moral agent is presented with two or more alternatives that carry equal or similar moral value or weight, given all relevant considerations such as probabilities; or the agent faces two mutually exclusive actions that are morally incommensurable, so that even when additional moral considerations come to bear, neither option becomes in any way better or preferable from a purely moral standpoint. These moral dilemmas are problematic on a synchronic (at one point in time) and a diachronic (over a temporally extended period) level. Synchronically, fear of regret or guilt can leave the agent paralysed and unable to decide which option to take. Diachronically, even if the agent, like the existentialist hero, has willed herself to intend one of the options, her moral predicament might bring her to constantly reconsider and shift from one option to the other. Now clearly an agent who constantly reconsiders her previous resolve is doing something wrong. This is problematic not just from a moral perspective, since her inaction would ultimately mean that she would be unable to fulfil either of the morally valuable actions, but also from the standpoint of rationality. But how exactly is our existentialist hero irrational? One answer might appeal to the set of requirements that Broome (2013) has argued for. The most suitable candidate would be his requirement of the persistence of intention. Roughly, it says that an agent should stick to a previously formed intention unless she has already fulfilled it, is unable to fulfil it, or reconsiders it. But then our agent is not at fault at all, since she always consciously reconsiders her previous resolve. We need something more.
In this talk I therefore propose a new, additional requirement that closes the lacuna in Broome’s theory with regard to the question of when to rationally reconsider a prior resolve. This requirement is characterized by two central notions. First, unlike Broome’s other requirements, it is concerned with the dispositions and habits that govern the most prevalent form of reconsideration; secondly, it determines the strength of those dispositions that govern the agent’s reconsideration (or lack thereof) by giving a counterfactual analysis of relevant diachronic aspects of the deliberative process that issues the intention. Roughly, I argue that rationality requires that an agent who has previously formed an intention must have a disposition to reconsider just in case she possibly wouldn’t have formed her intention in the first place, had she known that the circumstances that obtain now are evidence of a serious problem for her intention. The central goal of this talk is then to show that a version of this requirement can resolve the incoherence over time that an agent facing a moral dilemma might get into, and by doing so provide an answer to the intuition that an act of will by an everyday hero can resolve a morally tough situation. Finally, I tackle three possible objections that might be raised against my proposal. The first is that my requirement presents a version of rationality as ‘responding correctly to beliefs about reasons’. Since I want the requirement to be compatible with Broome’s framework, I have to show that it does not fall prey to the arguments brought forward by Broome (2013). Secondly, it might be argued that my requirement strips the present agent of her autonomy to decide what to do, since for her to rationally reconsider, the counterfactual about her previous deliberation must be true, but this truth depends on the mental states of her previous self that formed the intention.
Finally, in a similar vein, one objection might be that my requirement does not have the wide-scope property that Broome argues for, for reasons similar to those presented in the second objection. I will show that all of these arguments are either harmless, can be refuted, or are inescapable for any requirement dealing with dispositions and habits.
[1] Shafer-Landau (2003), p. 48.
[2] Foot (1972), p. 309.
[3] Falk (1944), p. 7.
…suppose you are weighing your reasons for and against two incompatible courses of actions, say getting some work done at home and watching a football match with your friends. Your reasons, we can suppose, together require you to stay home: you have to hand in important work tomorrow, and the match is not supposed to be very promising, after all. In deliberating, you are reaching the correct conclusion that you ought to stay at home. But then you akratically decide to go watch the football match with your friends; you need to call them and ask where they are meeting. So now you intend to watch the football match, and you believe that in order to do so, you have to call your friends. […] [W]e can now detach the conclusion that you ought to call your friends. But this seems an absurd conclusion, given that we have just said that you ought not to meet your friends, but rather stay home.
As the idea goes, if our desires are an important part of what we ought to do, then forming an intention can generate an ‘ought’, even if that intention is for something we ought not to do. We have contradictory statements: that the agent ought to call her friends, but ought not to call her friends. This new ought is, as it were, pulling itself up by its own bootstraps; it’s willing itself into existence.
There have been other attempts to solve bootstrapping problems, such as Broome’s (2007) wide-scoping approach, but these are not necessarily successful (Kiesewetter 2017). The answer, I think, lies in understanding the supposedly contradictory oughts as being different kinds of ought, and so not the kinds of thing that can contradict. That is, there can be moral oughts, legal oughts, and what you most ought to do according to your desires. In Kiesewetter’s case above, we have what the agent ought (overall) to do, given all of their projects and their long-term desires weighed up against their short-term ones. The ‘ought’ which follows from their short-term desires, their akratic intention to go and watch the football, is an ought of a different kind: an ought based on that short-term desire alone. And so there’s no contradiction.
With this solution, we don’t generate any oughts by any method other than how they should be generated, and nothing is pulled up by its own bootstraps. Moral desires generate moral oughts; weak desires and akratic intentions generate weak oughts.
Finally, I respond to an objection from Piller (2013) against this kind of attempt to answer bootstrapping problems. Piller argues against distinguishing between subjective and objective ‘oughts’, on the grounds that doing so fails to see what practical deliberation is really trying to get at. Practical thinking, he says, is about what we ought to do, not about a variety of different kinds of ought.
I respond by demonstrating that these different oughts are exactly the kinds of thing which are important to practical deliberation. They’re still a unified concept, about what we ought to do. We can see an analogy here with our talk of reasons. We have stronger and weaker reasons, different kinds of reasons (moral and prudential, for example), and we can have reasons to do contradictory actions. But these different kinds of reason all amount to the same kind of thing: they’re still normative, practical reasons for action.
This paper argues that indexing different oughts in this way not only provides a solution to bootstrapping objections and helps defend the thesis that oughts can be contingent on desires, but also seems independently plausible, given how we understand normativity and different kinds of reason.
If Rationality Is a Myth – What Becomes of Moral Rationalism?
Jens Gillessen (Philipps University Marburg)
My talk will be based on two papers of mine on normative aspects of rationality (both under peer review), and will proceed in two steps. The first part concerns recent developments regarding the normativity of rationality. The second part will explore consequences for moral rationalism, by which I mean the view that the normativity of moral requirements is nothing but the normativity of rationality itself, the principal contenders being Christine Korsgaard and (arguably) Immanuel Kant.
In the first part, I intend to make the case for certain elements of what has, in the wake of Raz (2005) and Kolodny (2005, 2007, 2008), come to be dubbed ‘Myth Theory’ about rationality. I find these elements in Kolodny’s version of the view: First, the requirements of rationality concern the internal coherence of our mental states. Second, there is no normative reason to be coherent as such (no ‘C-reason’, for short). Despite a whole spate of rejoinders to various aspects of Myth Theory (see e.g. Southwood 2008, Bratman 2009, Reisner 2011, Broome 2013, Broncano/Vega 2015, Shackel 2015), I am going to argue that there is indeed no reason of a general kind to conform to requirements of formal coherence at the level of particular actions or attitude revisions. Part of my argument will be that no credible intuitive evidence has been produced for the existence of such reasons, as a reexamination of ‘eccentric billionaire’ cases (in which agents are offered rewards for being incoherent) shows. Hence the supposition of C-reasons is far less reasonable than the contrary supposition.
Meanwhile, it still seems plausible that virtually everyone almost always has one particular reason to cultivate rational dispositions. This reason seems to consist in the fact that, in the long run, rational dispositions help us be as we have most first-order normative reason to be (see Kolodny 2008, 458f.; 2009, 376). However, reasons favouring the acquisition of rational dispositions (‘D-reasons’, for short) do not lend support to the existence of some general sort of C-reason. Even though there will often be reasons that favour what rationality requires, rationality is not itself a source of normativity.
In the second part, I will explore how these Kolodnian upshots bear on the viability of moral rationalism. I will begin by explaining what kinds of views I count as examples of moral rationalism, and which I don’t. In fact, many accounts that have been described as seeking a foundation of morals in ‘rationality’, ‘reason’ or ‘reasonability’ did not appeal to rationality as coherence of attitudes, but to prudence. This goes for views as diverse as contractarianism, classical utilitarianism and moral contractualism. These accounts remain unaffected by Myth Theory, not only because they locate the ultimate source of normativity in prudence instead of rationality, but also because recent theorizing about normative reasons has made it clear that the transmission of normativity from potential ends to potential means depends on objective relations of ‘facilitation’ rather than on an instrumental requirement of rationality (Raz 2005, Kolodny forthcoming).
What Myth Theory does affect are Kantian-inspired accounts, specifically Korsgaard’s (1996, 2009), which traces the source of moral normativity to the value of rational agency. If rationality is not a normative source of requirements in the first place, her theory must seem doomed to failure from the outset.
However, there is a view in the close neighborhood of Korsgaard’s that escapes such easy refutation even on Myth Theory’s premises. I call it Dispositional Moral Rationalism (DMR). Here is a rough outline. The view assumes that there is reason for us to be rationally disposed. It claims, furthermore, that this D-reason gives us, if not a reason to satisfy moral requirements on particular occasions, then at least a reason to acquire moral dispositions. Here is how: Rational agency can be considered a (complex) disposition. This disposition is intrinsically valuable, meaning that its very acquirability gives us normative reason to acquire it. What kind of disposition is it? In a Korsgaardian perspective, it includes a disposition to legislate for oneself by means of universal principles; and to self-legislate in this way just is what morality requires of us. Rational agency is thus claimed to include a moral disposition. Hence any reason to cultivate the former will trivially give us a reason to cultivate the latter.
Would it make sense for a Kantian such as Korsgaard to switch to DMR? At the end of my talk, I shall explain what this depends on. DMR would work if rational agency were a value in and of itself. While Korsgaard is clearly committed to this view, considerations from the first part of my talk cast doubt on it. On the (broadly pragmatic) view I find most plausible, our D-reason is a higher-order reason arising from first-order normative reasons: reasons that we have independently of any reason to be rational. These first-order reasons will comprise either (1) non-moral reasons, most importantly prudential reasons, i.e. facts bearing on the reason-owner’s well-being; or (2) moral reasons (such as the fact that I have given a promise); or (3) both. In cases 2 and 3, moral normativity co-constitutes the D-reason itself, which would render DMR circular. In case 1, it would seem that whatever reason we have to cultivate rational agency will flow directly from our non-moral first-order reasons (whatever they may be). So on this horn of the trilemma, rational agency would do no actual work in the theory.
To sum up, Myth Theory about rationality has plausible elements that pose a serious threat both to Korsgaard’s moral rationalism and to its dispositional version. In the end, the latter’s prospects depend on whether our D-reason derives from presupposed first-order normative reasons, or is itself primitive. This is why my results leave an avenue open for future developments in moral rationalism.
Willing a Broomian Solution to Moral Dilemmas
Franz Altner (University of Vienna)
In discussing moral dilemmas, Van Fraassen (1973) considers an interesting way to solve them by appealing to what he calls the existentialist hero. The existentialist hero is someone who, in cases of conflicting moral ideals, does not fail to act but resolves the situation through an act of will. Following this portrayal, I propose that this kind of heroism is, as much as it is heroic, also a matter of rationally following through with one’s decisions. I argue for this by introducing a novel rational requirement that provides an account of what kinds of dispositions and habits are needed for a rational progression from a decision to an action.
I take it that moral dilemmas can be of two kinds. Either the moral agent is presented with two or more alternatives that carry equal or similar moral value or weight, given all relevant considerations such as probabilities; or the agent faces two mutually exclusive actions that are morally incommensurable, so that even when additional moral considerations come to bear, neither option becomes in any way better or preferable from a purely moral standpoint. These moral dilemmas are problematic on both a synchronic (at one point in time) and a diachronic (over a temporally extended period) level. Synchronically, fear of regret or guilt can leave the agent paralysed and unable to decide which option to take. Diachronically, even if the agent, like the existentialist hero, has willed herself to intend one of the options, her moral predicament might bring her to constantly reconsider and shift from one option to the other. Now clearly an agent who constantly reconsiders her previous resolve is doing something wrong. This is problematic not just from a moral perspective, since her inaction would ultimately mean that she fulfils neither of the morally valuable actions, but also from the standpoint of rationality. But how exactly is our existentialist hero irrational? One answer might appeal to the set of requirements that Broome (2013) has argued for. The most suitable candidate would be his requirement of the persistence of intention. Roughly, it says that an agent should stick to a previously formed intention unless she has already fulfilled it, is unable to fulfil it, or reconsiders it. But then our agent is not at fault at all, since she always consciously reconsiders her previous resolve. We need something more.
In this talk I therefore propose a new, additional requirement that closes the lacuna in Broome’s theory with regard to the question of when to rationally reconsider a prior resolve. This requirement is characterized by two central notions. First, unlike Broome’s other requirements, it concerns the dispositions and habits that govern the most prevalent form of reconsideration. Second, it determines the strength of the dispositions that govern the agent’s reconsideration (or lack thereof) by giving a counterfactual analysis of the relevant diachronic aspects of the deliberative process that issues the intention. Roughly, I argue that rationality requires that an agent who has previously formed an intention must have a disposition to reconsider just in case she possibly wouldn’t have formed the intention in the first place, had she known that the circumstances that obtain now are evidence of a serious problem for her intention. The central goal of this talk is then to show that a version of this requirement can resolve the incoherence over time that an agent facing a moral dilemma might fall into, and thereby account for the intuition that an act of will by an everyday hero can resolve a morally tough situation. Finally, I tackle three possible objections that might be raised against my proposal. The first is that my requirement presents a version of rationality as "responding correctly to beliefs about reasons". Since I want the requirement to be compatible with Broome’s framework, I have to show that it does not fall prey to the arguments brought forward by Broome (2013). Secondly, it might be argued that my requirement strips the present agent of her autonomy to decide what to do, since for her to rationally reconsider, the counterfactual about her previous deliberation must be true, and this truth depends on the mental states of her previous self that formed the intention.
Finally, in a similar vein, one objection might be that my requirement does not have the wide-scope property that Broome argues for, for reasons similar to those presented in the second objection. I will show that all of these objections are either harmless, can be refuted, or are inescapable for requirements dealing with dispositions and habits.
[1] Shafer-Landau (2003), p. 48.
[2] Foot (1972), p. 309.
[3] Falk (1944), p. 7.