Where would JDM research be without gambles?

At JDM workshops or conferences, we are all supposed to be gamblers. We are asked: Would you prefer a gamble with a probability of .7 to win €75 and a probability of .3 to win €235, or another one with a probability of .1 to win €1000 and otherwise nothing? Would you prefer a gamble with a probability of .65 to win €75 and a probability of .25 to win €235, or another one with a probability of .15 to win €1000 and otherwise nothing?
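For what it is worth, the expected values of those gambles take only a couple of lines to compute; this sketch simply uses the probabilities and euro amounts quoted above.

```python
# Expected value of a gamble: sum of probability * payoff over its outcomes.
def expected_value(outcomes):
    return sum(p * x for p, x in outcomes)

# First pair of gambles (payoffs in euros; "otherwise nothing" pays 0).
ev_a = round(expected_value([(0.7, 75), (0.3, 235)]), 2)               # 123.0
ev_b = round(expected_value([(0.1, 1000), (0.9, 0)]), 2)               # 100.0

# Second pair (the remaining probability mass pays nothing).
ev_c = round(expected_value([(0.65, 75), (0.25, 235), (0.10, 0)]), 2)  # 107.5
ev_d = round(expected_value([(0.15, 1000), (0.85, 0)]), 2)             # 150.0
```

By expected value alone, the sure-ish gamble wins the first comparison and the €1000 gamble wins the second; which option people actually pick is, of course, the whole point of such studies.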

I would prefer neither. At most of these talks, if I attend at all, I doze off. I think they are artificial tasks, with no links to the real world, at least not mine. Why not ask whether I prefer orange juice or beer? Whether I would choose a blind date with a bald-headed artist or a long-haired academic? That would say something about me. You could ask for my arguments, track my eyes while I am comparing the options, and find out what I pay attention to and what decision strategy I use. Ask many people, and you get really usable and interesting information.

I don’t choose between monetary gambles. Don’t I, indeed? To be honest, I do: I deliberate about taking out travel insurance. Like many other people, I prefer certainty to gambles. If I take the insurance, I want to pay just enough to be certain that I will come out ahead if I have to claim. But often no such certainty can be had, and I tend to travel uninsured. The lack of certainty is especially manifest in health care. Let me illustrate this with a real-world example (violating the rule that you should not argue from your own experience, or use “I know somebody who” arguments – but who is going to stop me? And I hope you recognize one of your own dilemmas).

I once saw a tuberculosis consultant, with whom I had the following conversation.
Consultant: Your test, which we asked you to come in for because somebody in your vicinity was found to have active TB, was positive. So take these pills for the next month, and do not combine them with alcohol or any other drug.
Me: What does that mean, my test was positive?
Consultant: It means that you have antibodies to TB, which suggests that you may have latent TB.
Me: If you say ‘suggests that you may’, what chances are you talking about? How likely is it that I actually have it?
Consultant: You do not have it now, and the antibodies may be the result of an earlier exposure to TB; the chances that you will develop it, given this positive test, are 5%.
Me: That is a low chance of a future event. And what do these pills do? Are they any good at lowering those chances?
Consultant: They are not foolproof; it is estimated that they lower the chances by 20%.
Me: So from 5% to 4%?
Consultant (losing his composure): It would seem so, yes.
Me: And are there any side effects of these pills?
Consultant (sighing as he realised I was a ‘difficult’ case): Yes, there are possible side effects, some rather serious.
He got quite uncomfortable, so I chose not to ask for the specific chances of each of the side effects, but announced my decision not to take the pills. He exclaimed: But everybody with a positive test takes them! As if that would convince me. After some more discussion he gave in, and agreed that not taking them was quite sensible, and that if I ever felt symptoms, there would still be time enough to start taking them. I felt bad: for him, for being stubborn, for pretending to know better; but mostly I felt good, for not taking unnecessary medication. (For those of you who like closure: no signs of TB now, 20 years later (yet ☺).)
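As a side note for those checking the arithmetic: a 20% relative reduction of a 5% risk is indeed one percentage point, leaving a 4% risk. A trivial sketch, using the consultant’s figures as quoted:

```python
baseline_risk = 0.05        # chance of developing TB after a positive test
relative_reduction = 0.20   # the pills' estimated effect, as a proportion

treated_risk = baseline_risk * (1 - relative_reduction)
absolute_reduction = baseline_risk - treated_risk

print(round(treated_risk, 4))        # 0.04
print(round(absolute_reduction, 4))  # 0.01
```

An absolute risk reduction of one percentage point also means a number needed to treat of about 100: roughly a hundred people take the pills, with their side effects, for one case of TB prevented.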

Is this gambling? The opposite: calculating, I would say. Gambling with my health, doctors would say. Am I risk averse or risk seeking? I would say the former, doctors would say the latter. Interesting! What would you have done? Now if JDM researchers studied this type of gamble, their talks at conferences would be so much more interesting!

Maybe I’m stretching it. But you must admit: this is about real things, real choices (even though many people would not experience it as a choice, but would just take those pills). We can find out the probabilities of each outcome. So we can construct gambles that are not artificial, but that are about actual choices. Would we prefer chemo with a 70% chance of success and a 30% chance of very serious side effects, or an operation with a 60% chance of a cure and a 40% chance of death? As a mother, what would you prefer for your very sick child? This is the type of gamble that unfortunately many people today are faced with. It is also the type of gamble discussed at medical decision making conferences, where they present results of studies with real people facing real dilemmas. It would be a relief to hear such talks at JDM conferences and workshops too, instead of talks about winning or losing hypothetical money. You would want to know what type of people would make which choice, and why. Interesting! I would go to all those talks, and stay awake.


Cilia Witteman
July 3rd 2014

Tell politicians what we know about good decision making

Recently I was asked to give a lecture about decision making to a governmental committee that assesses furlough requests from delinquents under preventive detention, in order to advise the Minister of Safety and Justice.
Since in my Presidential Address in Barcelona I had told all of us that we should tell politicians what we know about good decision making, I accepted. After a period of panic, because of the audience (Important People: psychiatrists, judges) and because the topic was so general (decision making: what to say? how to limit myself?), I decided to tell them the following.

Decision makers are expected to follow normatively correct rules of probability and logic, but they more probably follow psycho-logic. Examples are rife: people are often inconsistent, take irrelevant information into account, see too many ‘special cases’, often do not learn from feedback, are influenced by recent experiences, and get tired and bored.
Furtive laughter from a captive audience: stupid people! Other people, yes, maybe, but surely you are not talking about us!

Then I presented them with some classic decision problems, such as the Linda problem and the Asian disease problem, asking for their judgments. That silenced them. I explained that they were excused, since that is what people’s mental make-up looks like – we need to use heuristics to cope with our complex environment. I also pointed out that these problems were not completely fair – how can you not use the representativeness heuristic: Linda is obviously not just a dull bank clerk; how can you not be influenced by frames: who wants to send hundreds of people to their deaths?
I quoted Freud, who, according to the psychoanalyst Theodor Reik, had once told him that for decisions of minor importance he always found it useful to weigh all the pros and cons. In matters of vital importance, however, such as the choice of a partner or profession, he thought the decision should come from the unconscious, from somewhere within.

Indeed, the audience agreed: that is how it is with people. You can make lists of pros and cons, but in your personal life you will still decide by what feels good.

Then I showed how Darwin had decided to marry. He made two lists, one in favour of marriage and one against. In favour were the possibility of having children, having a companion, and musical entertainment, which is good for one’s mood. Against were things such as no freedom, forced visits to in-laws, and loss of time. In the end the list of pros was longer than the list of cons, and he decided to marry. Ironically, the marriage was quite unhappy.

I proposed that from a human perspective, we obviously cannot avoid using intuitions. Indeed, many in the audience were eager to report instances in which they had relied on their intuitions. They saw a delinquent, and immediately knew what to decide. They felt rather proud of themselves at that point.

They again became a bit more modest after Frederick’s Cognitive Reflection Test. It took some of them quite some time to realise that the ball cannot cost 10 cents when the bat and ball together cost $1.10 and the bat costs $1.00 more than the ball; some don’t believe it to this day.
I told them that this illustrates how we tend to use our intuition first, and only sometimes, when forced or upon second thought, correct our intuitions by reflection – but also that in many instances our immediate initial response is flawed.
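For completeness, the bat-and-ball arithmetic is a one-liner once the problem is written as two equations; working in cents keeps it exact:

```python
# bat + ball = 110 cents, and bat = ball + 100 cents,
# so 2 * ball + 100 = 110, giving ball = 5 cents (not 10).
total_cents = 110        # bat and ball together cost $1.10
difference_cents = 100   # the bat costs $1.00 more than the ball

ball_cents = (total_cents - difference_cents) // 2   # 5
bat_cents = ball_cents + difference_cents            # 105
```

The intuitive answer of 10 cents satisfies only the first equation; with a 10-cent ball the bat would have to cost $1.10, and the pair would cost $1.20.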

We discussed intuition, and I convinced them (I hope ☺) that intuitions can be correct – provided they are well learned, in Robin Hogarth’s Kind Environment. The dangers should however not be underestimated: tunnel vision, overconfidence, and premature closure. But then again, reflection can also lead to wrong conclusions, so inaccuracy is not the prerogative of intuition.

Since they decide in a group, I showed them the two views on team decision making: “Two heads are better than one” versus “A camel is a horse designed by a committee”. I pointed out the dangers of groupthink and of unthinkingly following authority (Milgram being a case in point), versus the benefits of profiting from the knowledge of your superiors. The bottom line was that it is always best not to be prejudiced, not to overestimate your own decision making skills, to remain alert, and to know the pitfalls, including your personal biases.

Why am I telling you all this? To remind us all that this is what we can and should do: tell policy makers what we know about the problems with decision making.

I am not overconfident that they will take all our warnings to heart. These psychiatrists and judges will still defer to the group, they will still advise more negatively about somebody who has tattoos of bimbos on his forearms than about a shy Mister Nobody, and they will base their decisions on what they did previously with a similar delinquent – but maybe, just maybe, they will sometimes think twice.
If we succeed in making some policy makers think twice before they make their decision errors, that might save a lot of money and sometimes lives. Although that hope is probably arrogant, it does keep us focused.

So keep up the good work, and let the whole world know about it!

Cilia Witteman
December, 2013

The future of EADM: Four years later

Written by Nicolao Bonini

Following SPUDM in Warsaw, Robin Hogarth addressed, in the first Presidential column, the issue of what EADM can do besides supporting SPUDM conferences. The three long-term goals he listed all relate to how to “increase the image of decision research in Europe – to have positive effects on research funding, academic positions, and influence that reflects our unique knowledge”. Some comments and proposals follow.

1. Funding of research / teaching initiatives. This, I think, is a crucial aspect. We should do our best to foster initiatives among EADM members. One way to do so is to use national funding allocated to support international cooperation (e.g. to support foreign principal investigators, incoming visiting scholars or students). Another way is to take advantage of European programmes – some are designed to strengthen relationships with extra-European countries. Those programmes could support research networks, but also European master courses, summer schools, or joint doctoral programmes. Posting news, announcing calls, or requesting collaboration on our webpage is a way to make EADM members aware of those opportunities (in Kingston-upon-Thames, the Board decided to hire a web-content manager who could also attend to these aspects). However, greater participation is needed to keep our website alive and updated. I wonder if we could do more. For example, we could appoint an EADM representative who would attend inception meetings at relevant European institutions, taking a proactive role as well as informing EADM members about discussions at those meetings that might be of relevance for J/DM scientists.

2. Decision research community. In the 1960s, there was a distinct European response to the growing interest in decision research, and SPUDM was its main manifestation. An article by Charles Vlek on “A Brief History of SPUDM” will soon be published on our website; future articles and comments will be welcome. We should do more to enhance our identity: not only for the benefit of young students but also for those not in academia (e.g., inform politicians and policy makers about the competences available in our community – see the next point). We are still collecting material, such as pictures and SPUDM programmes, that will be uploaded to our website. The aim is to give a pictorial history of that initiative and its early ideas. All EADM members are encouraged to participate by sending relevant material to Michael Schulte-Mecklenbeck.

3. Beyond academia. I recall discussions with Maule, Hogarth and members of the Board/Association on how to improve the image of our research community outside academia. One suggestion was to use PR to publicize SPUDM and EADM workshops to a broader audience. This is certainly something that should be done. Let me share with you the experience of organizing SPUDM in Rovereto. We made an effort to publicize it widely (e.g., coverage in national newspapers, and national broadcasting of interviews with invited speakers). I believe that there was a substantive return on this effort. I came into contact with people from other disciplines, as well as with policy makers and various stakeholders. This could be done more systematically by a PR professional hired to publicize EADM members’ work, as was suggested many years ago. Alternatively, we could recruit a young scientist with good writing skills who could write regular J/DM research digests (see Note 1). We could also try to create positive synergies with our sister society, the Society for Judgment and Decision Making, by, for example, organizing a joint EADM-SJDM workshop on “hot” topics that might also be of interest to the general public.

There are many things to do, and many others not yet thought of! So, please, do not hesitate to use our webpage (or to contact me or members of the Board) to offer your comments and your assistance.

Note 1: Thanks to Gaëlle for the suggestion.

Weighting value and fit in academia

Written by Ilana Ritov

I would like to share with the readers my thoughts about three different issues I have recently been asked to consider and express my opinion about. I believe many of us encounter these questions, and some may have very different answers. The first issue involved hiring new faculty. Candidates were considered for a job opening in my department. As is so often the case, two leading candidates emerged. One of them is doing highly interesting work, and pursuing issues that seem to me important. The other’s work is somewhat less exciting, but is considered to better fit the departmental “needs”. I argued in favour of the former candidate, apparently weighting the intrinsic value of the research over and above the match between the candidate’s interests and those of the department.

The second issue concerned a paper submitted to the journal Judgment and Decision Making, for which I serve as an associate editor. I found the paper highly interesting, as did the other members of the editorial board who read it. However, doubts were raised about whether this paper should be published in a JDM journal. The paper did not examine choices, but compared evaluations of health-related issues across countries and expertise levels. The decision whether to accept the paper for publication clearly rests on a consideration of quality (in this case, interest) vs. fit.

Finally, another problem I had to consider recently is whether to allow a student in the conflict management program I chair to take, as an elective, a class about “urban planning from the perspective of sub-populations”. The class would (hopefully) be stimulating and could provide a background that is relevant to some conflict management analyses, but it is not directly related to the core of the program. The student wanted to take the class because she was very interested in the topic. I thought this was a good enough reason, and approved her request.

Needless to say, the three problems are very different in many respects. However, thinking about these three problems simultaneously, I realized they all involve weighting two major attributes: intrinsic value and fit. Intrinsic value, in our domain, typically refers to how interesting we find the object, be it a research program, an individual paper, or a specific class. Fit is the degree to which the topic matches some pre-defined domain characteristics. More precisely, we think of the extent to which the topic is close to the prototypical exemplar of a category with fuzzy boundaries.

One factor that has been shown to affect attribute weighting is ‘evaluability’. The easier it is to evaluate an attribute, the greater the weight it carries. Perhaps due to the interdisciplinary nature and vague boundaries of our field, it seems to me that we as JDM researchers find quality easier to evaluate than fit. This suggests that I may have assigned too much weight to quality/interest relative to fit.

Do I overweight one attribute relative to the other? A quick search of the vast literature on attribute weighting did not yield any clear conclusions. Incoherent preferences related to changes in attribute weighting are abundant. However, perhaps due to some self-serving bias, I cannot easily think of another framing in which my preferences with respect to the choices described above would have been different.

“Cool stuff” vs. “incremental” in JDM research

Written by Ilana Ritov

In my previous column I argued for weighting intrinsic value over and above fit. One of the examples I gave involved an editorial decision about a paper. I suggested that the main goal of a journal should be to publish interesting papers. In this column I want to raise the question of what an interesting paper is or should be. This question is also linked to the increasingly criticized criteria for publication.

A few years ago, at a conference that brought together social psychologists and JDMers, I overheard a group of JDMers criticizing the “merely illustrative” role data seemed to play in the mostly theoretical talks of the social psychologists, while the social psychologists, for their part, made fun of the “found an effect” talks of the JDMers. This exchange may have been incidental, reflecting the views of the individual researchers more than the characteristics of their domains. However, I do find myself wondering whether current JDM research really is becoming more and more of the “found a (bizarre) effect” type.

In his recent paper in Perspectives on Psychological Science, Paul Rozin suggests that in evaluating empirical papers, too much emphasis is given to faultless experimental design, at the expense of the contribution of the research to our understanding of human behavior (http://bit.ly/cZf1Yz). When we find an effect, he says, “we are not rewarded for looking at the generality of the effect. Is it a fragile result of a carefully selected set of parameters? Or is it robust and operative across many situations and/or populations?” In the same paper Rozin expresses a critical view of our field’s overarching embrace of hypothesis-testing methodology. While I do not share his view regarding the role of hypothesis testing, I believe Rozin’s critique of the insufficient weight assigned to establishing the robustness of phenomena is on the mark. Re-examination of the highly conspicuous data on “the benefits of unconscious thinking” provides an exemplary exception, and a demonstration of the need for such research (Gonzalez-Vallejo & Phillips http://bit.ly/ccWddc; Calvillo & Penaloza http://bit.ly/coUNeZ; for a meta-analysis see Acker http://bit.ly/cDy0zX).

I believe that the “found an effect” trend is at least partly driven by the policies of the top journals, most notably “Psychological Science”, the flagship journal of APS. As the editor of Psychological Science described it, he would like to publish “…the type of paper you would want to go down the hallway to psychologists who are not in your specialty area and say, ‘Look at this! This is really cool stuff’”. (http://bit.ly/aBgQdE).

But what is “cool stuff”? Granted, much of the literature is about exploring the causes and boundary conditions of effects. However, a complex analysis showing that a meaningful, non-trivial effect occurs under some conditions but not others is likely to be dubbed “incremental”, certainly less “cool” than a surprising, counterintuitive simple effect. While most of us, myself included, have a taste for the counterintuitive, “cool stuff” that appeals to a large audience seems all too likely to be the result of simplified overgeneralization or disputable analyses, and could turn out to be more misleading than enlightening. Thus, the wish to appeal to a wider audience, and the considerable benefits that publishing in such journals entails, often result in a chase after this elusive “hit”.

Responding to a similar sentiment of discontent with the way ideas and methodological issues are treated within JDM, Andreas Gloeckner and Ben Hilbig proposed a special issue of the journal Judgment and Decision Making on “Methodology of Judgment and Decision Making Research” (http://bit.ly/bbX7sV). The special issue they will edit asks whether it is sufficient to investigate effects, or whether we need more complete models and ways of testing that would allow us to select among competing models. I hope this special issue will help elucidate some of the more fundamental methodological questions in our field, and will further promote the discussion of what the ingredients of an interesting and valuable paper are.

The future of EADM

Written by Robin Hogarth

As a professional organization, EADM is a strange animal. It comes to life every two years for the SPUDM conference and then essentially hibernates in the interim. In fact, SPUDM predates EADM and it is important to recall that EADM was created to ensure the continuity of SPUDM conferences. So perhaps that’s all it should do?

And yet, several members feel that EADM should be more than just a support for SPUDM. Some question – with no little justification – the return they get for their annual membership dues. After all, SPUDM conferences are supposed to be self-financing.

Last year, as President of EADM, John Maule instigated a series of reflections on this topic amongst the EADM board members. As your new President, I reported on the substance of these discussions at the recent business meeting in Warsaw at SPUDM 21. However, few members attended the business meeting – and since I feel that we had some important things to say – I am taking this opportunity to report on what we said and to solicit your aid.

Attendance at SPUDM conferences – as well as the high quality of the many contributions – attests to the interest and talent for decision making research in Europe. However, this talent and interest is not matched by institutional support. One reason, I feel, is that we are all so busy doing our own “thing” that we fail to see how we can create synergies for all. For example, we miss out collectively on the many individual successes of our members. I strongly believe that when one of our members is successful professionally we should all rejoice in the achievement and literally take and enjoy some of the credit. A further important problem is that we lack information about what is going on in different parts of Europe and people from outside our organization have very little idea about what we do.

Given these issues, let me be more concrete and specify what the Board considers long-term goals for EADM that go beyond just supporting SPUDM conferences. There are three main goals:

  1. To create more opportunities for research funding for our members. Currently, this is difficult because each country in Europe jealously guards its own funding and we need to compete with established disciplines. However, with the advent of the new European Research Council, which hopes to operate like the National Science Foundation in the US, things might change – see http://www.sciencemag.org/cgi/content/full/313/5792/1371. Clearly, EADM must be aware of what is going on here and be prepared to intervene if necessary, e.g., in the short term let the ERC know that we exist!
  2. To create positions in academic and related institutions for decision researchers. As anyone involved in placing recent PhDs on the academic job market knows, Europe is not a seller’s market. There are many barriers and difficulties that result from both our nationalistic tendencies and the lack of regular decision making positions in academic departments.
  3. To have the input of decision researchers in important policy decisions affecting our lives as European citizens. As you are no doubt aware, it is quite normal in many policy debates to seek the input of academic economists and sociologists. Moreover, these social scientists are typically willing to provide opinions on issues where, in fact, the findings of decision research might be more relevant. Consider, for example, providing people with information about risks, product safety, and other related topics.

In short, the goals of the Board are to increase the image of decision research in Europe – to have positive effects on research funding, academic positions, and influence that reflects our unique knowledge.

These goals were well-received by the few members who attended the business meeting in Warsaw but the real question is how to achieve them. In the short-term, several actions can be taken:

  1. Initiate discussions with the new European Research Council. Your President has some contacts here and will follow up on this.
  2. Continue to fund small conferences such as we have been doing for the last two years. However, people sponsoring such conferences will be required to investigate their PR potential (see immediately below).
  3. Investigate ways in which EADM can use PR to publicize SPUDM, the small conferences, work done by EADM members, and any other newsworthy activities. John Maule’s son has a small PR firm in London and is willing to help us on this pro bono (the only cost is that he should be allowed to advertise EADM as one of his clients). Clearly we are at an early stage on this project – help is needed!
  4. Develop an attractive and active webpage. In today’s world, we believe this to be essential. The webpage should be something that members access on a regular basis, where you find up-to-date information about what research is going on, funding opportunities, job opportunities, interesting ideas for teaching, and so on. The list of topics is limited only by our lack of imagination. In short, the webpage should become a “living newsletter.” In early brainstorming on this idea, we also thought of having different access points for different people who might be interested in our webpage – members, university administrators, the general public… To move things forward, we have appointed Gaëlle Villejoubert to lead a “task force” to develop a website that we can be proud of. So, if you have ideas, please contact Gaëlle. As President of EADM, I strongly believe that it is in our collective interest to allocate some of our budget to this project. The webpage is our face to the world.
  5. I also intend to discuss with Board members of our sister society, the Society for Judgment and Decision Making, whether there are any ways of creating positive synergies between our two organizations along the lines mentioned above.

Finally, if you have any reactions to the above, please contact me or any of the Board members. Our goal is to promote decision research in Europe.

Let’s upgrade posters

Written by Robin Hogarth

This past week, I have spent some time “judging” abstracts for a conference, and it got me thinking about the purpose of these conferences, their value, and whether they could be better organized.

It is clear that most scientists enjoy going to conferences. In addition to direct work relevance, it is fun to visit different countries and cities, to connect with old friends and to meet others for the first time. The first conferences I attended were at the beginning of the 1970s and I still meet people from those days at different events.  We rarely connect between events but when we do it’s always fun.  I once heard Sarah Lichtenstein mention these kinds of relationships – she used the expression “conference friends.” I also remember the excitement I experienced at those first conferences of actually meeting the people whose papers I had been reading.

In the intervening years, my sense is that conferences on judgment and decision making have increased in frequency, type, and scope. In the early days, those attending the SPUDM and SJDM conferences could fit into a single meeting room and all sessions were plenary. I don’t know if all submissions for presentations were accepted, but my guess is that it was a high percentage. Nowadays, we have parallel sessions, poster sessions, and many papers don’t get accepted for presentation. Fortunately, however, these large conferences are not the only events that take place and recent years have seen an increase in the number of smaller conferences – or meetings – dealing with specific topics.  I personally find these latter events most useful. It is wonderful to think that in the same room you can have assembled almost all the people in the world who are working on the same topic as yourself!   It is an incredible opportunity to exchange concrete ideas.

Given the high value of the smaller meetings, the natural question to ask is whether these will not eventually replace the larger meetings. In other words, if one can attend a few smaller meetings, why bother attending the larger ones? This is particularly the case if one has to compete (by submitting an extended abstract) for acceptance by the organizers of the larger conferences.

What will happen?  Before making any predictions, it may be interesting to examine what has happened in other scientific disciplines. My impressions (not based on hard data) are the following.  First, like all good JDM types, let’s think base rates. I suspect that if you look at all scientific societies, most have grown internationally in recent years. There is just a lot of activity and this has been facilitated by the ease with which we can now travel and communicate across borders. Second, as knowledge advances and essentially becomes more specialized, this is matched by the organizational structures of scientific societies. Thus, one either gets new societies being launched or the older societies create new divisions. This is clearly visible when one thinks about the development of journals in the fields related to judgment and decision making. I suspect that the key variable in all this is the number of active scientists in any field. That is, for a field to be viable on its own (i.e., hold regular meetings, publish journals, and so on) there needs to be a certain minimum number of researchers. I don’t know what this minimum is nor where EADM lies precisely on the distribution of societies but it would be intriguing to attempt a sociological study of this type. Moreover, given the obsession that people seem to have with impact factors and numbers of citations, I would not be surprised to learn that people have started to address these issues using these kinds of data in relation to membership of professional societies. In short, it would be interesting to have some more concrete evidence about the way in which scientific societies have been established, grown (or not grown), broken up, continued or just faded away. What factors distinguish the societies that are more and less “successful”?  I suspect that there are some fascinating regularities in all of this as well as illuminating exceptions.

But let me be more prosaic and get back to the current reality of decision making research, where many papers are rejected for the major meetings. Is there anything that can be done about this? The present system is that most conferences entrust the reading of abstracts to a committee of referees. Moreover – as I have been led to believe – the inter-rater reliability of the referees is far from perfect (and, having been a referee, I understand: it is a very difficult task!). But what, we can ask, is the alternative? One might be to select papers at random (e.g., if only 40% of papers can be selected, every applicant has an explicit 40% chance, with rules to avoid people submitting multiple papers). Variations on this theme could involve letting people’s odds depend on different characteristics, such as whether they presented at the previous conference. However, one can easily see how that could induce many dysfunctional consequences. As an experiment, I am intrigued by the notion of applying the random rule and then seeing whether it actually changes how participants experience the conference (just kidding! This could lead to all kinds of problems!). In short, I think we are “stuck” with using referees, in the same way that we need referees for journals and grant reviews. They are not perfect, but it is really hard to come up with a better alternative.

In conferences, however, there is an alternative to the presented paper or symposium: the poster. My suggestion is that we need to take actions that make this option more attractive to potential attendees. I will be honest and say that, in general, I prefer making a live presentation to an audience rather than presenting a poster. At the same time, however, each time I have presented a poster I have received more insightful comments than the usual reactions to a 20-minute presentation. What I think is needed are ways to make posters more attractive to presenters, so that they are not just a “consolation” for those who fail to have their papers accepted (or a way to secure funding to attend the conference).