Speak Your Mind: TOP Guidelines


 

“Speak Your Mind” is a PAR webpage feature that allows you to offer insights about big questions in public administration. The responses serve as a community forum for discussion of specific editorial contributions, and the format provides a platform for exchange of different ideas about how we think of public administration as a professional and scholarly enterprise.

 

In the newest contribution to Speak Your Mind, Professor Lars Tummers contends that public administration research is not fully transparent, that replications are almost never published, and that few open datasets are available. Professor Tummers argues that one fruitful way forward is for journals like Public Administration Review to step in and actively promote values such as transparency, openness, and replication. Please read the essay here.

 

We would appreciate your views on the issues Professor Tummers discusses in his essay. Do you agree that adopting the TOP guidelines is a good way to promote transparency, openness, and replication? What do you see as the biggest benefits of the guidelines? The greatest costs or risks? Overall, do you believe PAR should adopt the guidelines? Feel free to wade in on any or all of these questions, but, most importantly, speak your mind!

 

27 responses

  1. As a final-stage PhD student, I enjoyed reading this piece by Prof Tummers. My two cents on the matter are, of course, based on my limited experience as a scholar in Public Management. Nevertheless, I believe there are several benefits tied to the (gradual) adoption of the TOP guidelines by PA journals (and particularly by PAR as a potential “change champion”):

    First, clear guidelines can help manage expectations. When submitting to PA journals, it is not always clear what should be said about methods and analysis. There are some useful guidelines for this (e.g. the piece by Lee et al. in PAR); however, I notice an abundance of differences in the way authors report their methods and analysis. For instance, when using linear regression modelling, are all assumptions for linear modelling tested? When using SEM, are all items included in the latent variables, or are some discarded? Either way, some form of standardization (and especially transparency) can be extremely helpful. PAR can thus play an important “educational role” by providing PhD students in Public Management with insights into the data reporting and analysis requirements of a top PA journal.
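
    A minimal sketch of the kind of assumption reporting described above, assuming Python with pandas/statsmodels; the dataset, file name, variable names, and diagnostic choices below are purely hypothetical illustrations, not part of the original comment or of the TOP guidelines:

    ```python
    # Illustrative only: transparently reporting linear regression assumption checks.
    # The file name and variable names are hypothetical placeholders.
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.stats.diagnostic import het_breuschpagan
    from statsmodels.stats.stattools import durbin_watson, jarque_bera

    df = pd.read_csv("survey_data.csv")                           # hypothetical dataset
    X = sm.add_constant(df[["planning_formality", "org_size"]])   # hypothetical predictors
    model = sm.OLS(df["perceived_performance"], X).fit()          # hypothetical outcome

    print(model.summary())  # coefficients and fit statistics for the published table

    # Assumption checks that could accompany the reported model:
    bp_stat, bp_p, _, _ = het_breuschpagan(model.resid, X)  # homoscedasticity
    jb_stat, jb_p, _, _ = jarque_bera(model.resid)          # normality of residuals
    dw = durbin_watson(model.resid)                         # independence of errors

    print(f"Breusch-Pagan p={bp_p:.3f}, Jarque-Bera p={jb_p:.3f}, Durbin-Watson={dw:.2f}")
    ```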

    Second, a positive attitude towards replications can be really helpful. In my field, which focuses on strategic planning in public organizations, the majority of evidence draws on US or UK samples. These studies are extremely innovative but cannot simply be generalized to the continental European context. It can thus be really helpful to replicate the original study, look at similarities and differences, and then try to explain these differences by looking at context. I think this is particularly relevant if we, as a field, want to inform and contribute to policy-making. We cannot use only US or UK evidence to argue the effectiveness of a management process for the public sector as a whole. I recall a recent study in Science (“Estimating the reproducibility of psychological science”) that created quite a stir. Similarly, reproducing PA studies might result in different findings depending on context – which implies some nuance when recommending action by governments.

    Third, there can indeed be value in reporting and publishing null findings. In the first phase of my research, I did a literature review on the impact of strategic planning in public organizations. I remember being pretty full of myself when I found that almost all studies report significant and positive relations between elements of strategic planning and different outcomes. An issue here is, of course, the file-drawer problem. To gain a fuller understanding of strategic planning, and how it works, it would have been extremely helpful to identify studies with null findings. Currently, my insights into the subject are dominated by these positive findings. I don’t know much about when (and why) strategic planning might not have worked within public organizations.

    In conclusion, I fully support Prof Tummers’ call for a more transparent and open research culture. This will help the academic field of PA as a whole, and will specifically contribute to the academic development of PhD students by (a) offering transparent guidelines on how one should deal with data analysis and reporting in a responsible, sound and ethical manner, (b) providing opportunities for publishing replication studies that can build towards the generalizability of research findings, and (c) offering the full perspective on PA questions by also publishing studies with null findings that explain why such a null finding might be present.

    Sincerely,

    Bert George
    PhD student Applied Economics
    Ghent University
    bert.george@ugent.be

  2. Bert, you have a very positive outlook on what the guidelines might produce. If we take this direction, I hope you are right. Let’s see what others have to write. I appreciate you being the first to jump in.

  3. I much enjoyed reading Professor Tummers’s thoughtful and encouraging essay. I agree that a more open research culture will benefit the field of public administration. In my reply, I want to focus on data sharing. Professor Tummers’s points are well-taken, and incentives are indeed helpful. I would like to suggest an alternative approach though: diffusion. When I was finishing my Ph.D. coursework, I was interested in PART scores for a class paper. David Lewis made the data and code from his work with John Gilmour freely available through his webpage: https://my.vanderbilt.edu/davidlewis/data/

    Not only was it wonderful to be able to access the data, but it was also instructive to see the code. David Lewis set a standard that I’ve tried to emulate, and so I’ve been making data and code available for most of my pieces on my webpage: http://www.petrovsky.ws/ (–> “Data & code”). For one of the political science articles, this was indeed a requirement. For the other articles, it wasn’t. But why not post the materials even in the absence of rules or incentives? Perhaps someone can use a part of the data or code for some other purpose, or find amusement in the tedious ways in which some of the code is written.

    Where research on a data set is still ongoing after an article has come out, a viable approach may be to post only the variables used in the article. In this way, researchers can avoid sharing parts of the data that they are still intensively analyzing.

    Sharing of data and code can benefit others in the field, including students. Without having to identify themselves to the author, they may better understand how the author did something. Perhaps they will also come up with different, creative ways to use the materials that cost so much effort to produce. Some of us expect government agencies to become more transparent and prefer a proactive provision of information over having to file open records requests. Applying this to myself, I should make materials available where possible, and do so in a way that anyone can access them without asking. I hope many others will join the data and code transparency that David Lewis started in our field!

    Nick Petrovsky
    Martin School, University of Kentucky
    Twitter: @pedroniquito

  4. Dear Nick,

    Thank you very much for your reply. I think that you are a front runner in data and code transparency. You truly set the right example by posting it on your website. We can also learn from neighboring disciplines in this regard, as you rightly point out. Thank you. We will take your comments into account when deciding how to move forward in the field (and PAR specifically). We would also encourage others – regardless of what journals do – to follow in your and David Lewis’s footsteps.

    Best,

    Lars Tummers

  5. Good afternoon,

    This is a fantastic article and I am glad to see the lively debate.

    I am pleased to see the discussion has included student involvement in data and research. In my own experience, students have either no access to adequate data or access to very limited, outdated data sets through their respective universities.

    When you put limits on data, you put limits on research capabilities. Young scholars are a much-needed voice in the field of public administration; however, the expectation that they purchase highly expensive data sets or pay for annual memberships is not always realistic. Instead, it creates a disadvantage for potentially insightful authors.

    As for presenting null findings and journal attitudes towards null publications, this type of work falls into the classic practitioner-academic publications dilemma. Null findings are important to academics but also very important to practitioners. Whereas case studies can provide best practices for practitioners, many are difficult to publish for a number of reasons, and null findings seem to fall into a similar category in my opinion. Working in economic development, it is astonishing to interact with other municipalities or ED groups that still use practices that are far outdated and academically shown to be ineffective, simply because nothing has ever said, “hey there, this is outdated and doesn’t work.” However, as most know, journals look for what is “sexy” to publish, and not finding relationships is oftentimes ignored.

    I appreciate that Bert brought up replications, and that goes hand in hand with the various forms of research styles currently being used. It is difficult (or impossible) to replicate research when the coding or the survey/data source is unavailable. I could measure, for example, school board governance and school performance; however, if I am unable to see how certain elements were defined or coded, I can potentially get a much different result. I think an online outlet where scholars can post work for free with the intention of it being replicated would be very beneficial. Setting relatively uniform standards and allowing those replicating work to contact original authors with questions about process and design would make it an amazing tool to have.

  6. Dear Joshua,

    Thank you for your constructive response. I also truly value that you bring the practitioner side into it. As academics, we sometimes think that practitioners are not interested in things like ‘null findings’. But you are correct, this is indeed very important. Finding no effect is also truly interesting (see for instance discussions in medicine on vaccine effects). We will also take on board your suggestions about data and code transparency, and the uniform standards we could potentially adopt.

    Best,

    Lars

  7. Thanks, Lars, for sharing your thoughts on this important issue!

    I started as a PhD student on September 1, 2015, three days after Nosek and colleagues published their replication paper in Science. This means that I do not have a long career with a lot of experiences to build my reflections on. But it also means that I have – given the timing of my entry into the world of academia – been forced to think a lot about the issue, and I have had many transparency-and-openness discussions with colleagues from different sub-disciplines of Political Science.

    As far as I see it, there is no alternative to moving Public Administration research (and Political Science in general) towards a more open and transparent culture. And I don’t think there should be. But changing a culture is not easy, and I think you are absolutely right that journals etc. must play an active role in encouraging changes for the better.

    As you write, openness and transparency can be improved along a variety of dimensions/standards, and it seems to me that some changes will be easier to promote than others. Thus, there are standards of openness that we, as individuals, lack incentives to live up to (for example, you mention that we don’t have incentives to make datasets, do-files etc. publicly available, although there is often no good reason for not doing so), and I have the impression that TOP guidelines etc. could have positive and quite immediate effects with regard to such standards.

    However, there are also standards of openness where I think TOP guidelines will not have big effects in themselves. Thus, there are standards of openness where the problem is not a lack of incentives but the existence of strong but counter-productive incentives, meaning that we are incentivized NOT to live up to these standards. As I see it, this is e.g. true with regard to pre-registering designs and analysis plans – a practice that would create transparency regarding prior hypotheses etc.

    I recognize that not all studies are equally suitable for pre-registration. Sometimes we don’t have strong hypotheses in advance, and sometimes we work with data sources that make it meaningless to preregister (as is e.g. the case if we work with historical data or register data that can be analyzed before preregistering the analysis plans). But most of the time, at least when we work deductively, I see no really good reason why pre-registering should not be the norm. And I have the impression that most colleagues agree with this in theory (correct me if I’m wrong), but that fundamental incentive structures keep people from using the practice. As academia works today, our careers depend on publishing, and publishing seems to depend – to some degree – on P-values. Therefore, as a good colleague noted in a discussion about the issue, only very prominent researchers can afford the consequences of preregistering studies with a lot of zero-findings.

    Obviously, from an idealistic point of view, this “fear” of zero-findings is not a good argument against pre-registration. In an ideal world, we would be transparent about our expectations and we would divide our analyses into 1) tests of preregistered hypotheses and 2) further explorative analyses and theorizing. However, from a realistic point of view, this is not how things work today and I don’t think TOP guidelines would do much about it (at least not if implemented at the lower levels). If we want to improve in this regard, we will have to do something about the fundamental incentive structures in our field and get rid of the publication biases that make people afraid of zero-findings. And again, I think that both individual researchers and journals are responsible for promoting change.

    On the side of us – the individual researchers – we should more often submit our zero-findings (at least if we find them interesting and do not think they are the result of poor designs etc.), and when we serve as reviewers, we should not let P-values guide our recommendations (to be honest, I don’t really know how big this problem is. How many reviewers actually become much more skeptical when they receive studies with P>0.05 for review?).

    On the side of the journals, I also think they can do more (in addition to adopting TOP guidelines) to actively promote improvements on this standard of openness/transparency. For example, journals could improve as publication channels by introducing publication formats explicitly aimed at important studies that do not find their way to the journals today (studies with zero-findings, replications etc.). More journals could do as JoP and others have done and introduce “short papers” and similar formats aimed at studies that don’t live up to today’s norms for publishing but are too important to be ignored. Or they could – if they are really ambitious – follow the Journal of Experimental Political Science and other journals where designs can be peer reviewed and pre-accepted before the data collection takes place, which is both good for research quality (as poor designs can more often be prevented) and means that authors don’t have to worry about P-values as long as they make good designs.

    To sum up, I think that adopting TOP guidelines would definitely be a move in the right direction, and I support your call for such guidelines to be introduced in PAR and other major Public Administration journals. But I hope the journals will also take further steps into consideration in order to do something about the counter-productive incentive structures in our discipline.

    I look forward to following this debate in the months and years to come!

    Best,
    Julian Christensen, PhD student in Political Science, Aarhus University.

  8. There must be a compelling reason for PAR to move in this direction, and I’m not seeing it here. Normativism (the natural sciences are doing it, so the social sciences should also do it) isn’t convincing enough. Where’s the problem? What’s the evidence? What real need would open data serve? It must be a compelling need if we are to impose this time sink on scholars. The science journals can make the case based on the societal risk of publishing spurious data, but we are not trying to cure cancer in our field.

    We should also separate the question of quality of data from that of accessible data. By that I mean that editors and readers already have a legitimate reason to demand better explanation of methods from authors, but that task can be accomplished without open access data, through the review process.

    Finally, I have good reasons to be suspicious of open data because I’ve been there. A large data set I produced was made available to researchers by the research sponsor. Even with a thorough data dictionary to guide them, researchers didn’t necessarily understand how the data was produced and therefore how to use it. Of many attempts to publish from the data set, quite a few used erroneous approaches. My point is that any random set of researchers could easily produce different results through different methodological decisions. Who is responsible for teaching them? For validating the findings? Does every manuscript review turn into a data analysis session?

    It’s possible there are answers to some of my questions. If so, I would welcome responses.

    • Hello Beth,

      Thank you for bringing these concerns to the forefront. I work at the Center for Open Science on our efforts to adopt and implement the TOP Guidelines, among other initiatives.

      The problem that the TOP Guidelines are designed to address is the fact that most of the work of science is inaccessible even to other scientists (for example, during peer review). The end result of much scholarly output is the traditional article, in which the incentive for the author is to create the cleanest, most compelling argument, despite the fact that the current state of evidence is usually messier and more tentative.

      The theoretical existence of this disconnect between scientific values (transparent processes, reproducible results) and practices (biased reporting and opaque decision making) has been discussed for a while (Cohen, 1962; Tukey, 1969; Kerr, 1998; Ioannidis, 2005), though the actual results of those theoretical problems are only now being empirically supported (e.g. irreproducible results in pre-clinical research and the Reproducibility Project in Psychology, http://science.sciencemag.org/content/349/6251/aac4716). The motivations and processes that most other scholars face are equivalent to those in life science and psychological science research, and so at this point I would argue that there is greater risk in not amending our ways than there is in doing so.

      That being said, simply asking scholars to add additional tasks to a workflow is unlikely to lead to change. Therefore, there has to be a system that 1) rewards transparent and rigorous methods instead of clean and tidy results and 2) builds tools that enable the change for which we advocate. The TOP Guidelines are one method of achieving #1, by rewarding steps in the publication process that make one’s work more transparent. Other rewards for transparent and rigorous work include the Preregistration Challenge (https://cos.io/prereg), Registered Reports (in which peer review occurs before results are known: https://osf.io/8mpji/wiki/), and badges (https://osf.io/tvyxz/wiki), which look silly but are surprisingly effective (http://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.1002456).

      For #2, that is the motivation behind creating the Open Science Framework (https://osf.io). This is a free and open source workflow tool, the purpose of which is to connect every aspect of the scholarly workflow. This is important because it solves current problems in a lab (grad student turn-over, data management, collaboration), while providing nudges to be more transparent (easy sharing, creating DOIs for datasets).

      You are absolutely right that an open dataset will lead other scholars to other conclusions because of different methodological decisions. The implication is that there are so many possible decisions to make, and such strong motivations to find and publish a biased subset of those decisions, that these decisions need to be made transparently, and before combing through a dataset. Combing through data is great for creating new, testable hypotheses, but not for testing those same hypotheses, since doing so invalidates the p-value.

      I’m happy to continue the conversation.

      Best, David

    • Dear Beth,

      Thanks for your thoughtful reply and the important concerns you raise.

      First, I do think there are compelling reasons to move towards an open research culture. For instance, we know that replications are vital for scientific progress. Researchers should conduct replications to test the reliability and context dependence of newly published findings (see http://journal.frontiersin.org/article/10.3389/fncom.2012.00008/full). This is valuable for science and society.

      There are almost no replications published in the PA community. This is one of the reasons a recent PMR special issue on replications has been set up by Richard Walker from the City University of Hong Kong. Related to this, p-hacking and the bias towards positive results are a problem in general (see http://www.akademiai.com/doi/abs/10.1007/s11192-011-0494-7). They are also a problem in the public administration community, as discussed by JPART editor Brad Wright during the latest PMRC. See also the comment of Bert George. TOP guidelines can help in this respect, as they change the incentive structure for authors. PAR could for instance state that it truly encourages replication research and publishing null findings. This can stimulate authors to develop such work, or open up their file drawer.

      You also rightly note that the review process could help in ensuring high quality data. Furthermore, I would argue more checks are needed, because reviewers do not have all information available to them. They cannot check the data, and there have been various instances (from the social sciences) where articles have been accepted based on fabricated data in the most highly ranked journals (see for instance http://science.sciencemag.org/content/332/6026/251).

      I truly value your points about open data. Unqualified users can surely make errors, and this is worrisome. Thank you for raising this issue. In addition, the ‘first/primary’ researchers themselves might make errors (according to some), which others could find out if the data are available. This could lead to interesting discussions among scholars (see for instance http://www.pnas.org/content/112/50/E6828.extract and http://www.pnas.org/content/early/2015/12/01/1521336112.short). In general, open data can be beneficial (see http://onlinelibrary.wiley.com/doi/10.1111/puar.12288/abstract). I am still wondering how to deal with qualitative data (an issue raised by Donald Moynihan on Twitter: “My concerns have to do with qualitative research & weakening incentives for new data collection.”).

      In the end, we should aim towards a more transparent and open research culture. I think there is a compelling need (see also the main article). We should not rush forward, but start the journey slowly, step by step. These discussions really help in this respect.

      Best,
      Lars

  9. I enjoyed reading Lars’s thoughtful sharing, and I also believe that more open and transparent research practices would help advance public administration scholarship. As someone working mostly on issues related to Asian countries, I believe more replication of research should be encouraged. While many may see it as a means to improve the external validity of research findings, I think such a practice would contribute to the development of comparative public administration scholarship, draw scholars’ attention to contextual influence, and facilitate the wider adoption of a diagnostic approach to social science research, which is particularly useful for PAR as it speaks directly to a large international community of public managers and practitioners.

    Meanwhile, the issues of cost and privacy associated with the public sharing of datasets may be hard to address at first. Similar to the promotion of better business practices such as corporate social responsibility, some form of voluntary compliance may be a first step. PAR may develop a platform so that those who are willing find it easy and safe to share their data, along with some form of reputational incentive to encourage early adopters. Overall, I believe building a more open and transparent research community is a meaningful endeavor that we should all strive for.

    Wai-Hang Yee
    University of Hong Kong

  10. I read Lars Tummers’ essay advocating the Transparency and Openness Promotion (TOP) guidelines as a tool for promoting an open research culture on the Friday that news broke of the Brexit vote to leave the European Union, which in turn came only days after news of the ways in which the Bernie Sanders ‘political revolution’ was morphing into the relative normalcy of the Clinton campaign. And all of this occurred amid the incessant blare of the populist effluent of Donald J. Trump’s campaign to “Make America Great Again” via deceit, vagaries, and nativist hyperbole. In the background are the chaos of public order in the Middle East, rising authoritarianism in many countries, the crisis of the business model of commercial publishing, Wikileaks, and the very meaning of openness and transparency in publishing in the age of the internet and marketing.

    A Washington Post online headline on June 28 seemed to sum up the spirit of the moment: “Brexit is not just Europe’s problem; it highlights a crisis in democracies worldwide.” In retrospect, there have been warning signs of a coming crisis for liberal democracy at least since the Great Recession of 2008. It strikes me that the Post headline sizes things up just about right, and that the question of the role of public administration research in whatever precisely it is that is happening in the world today is the larger rubric within which questions of the role of transparency and openness should be seen. Too often such methodological matters in social science research are presented as freestanding, self-sustaining matters not requiring larger justification, and I am concerned that may be the case here.

    Even while fully recognizing that serious scholarship cannot be expected to keep up with the constant drumbeat of headlines, I must still ask what role guidelines for the promotion of transparency and openness in scholarly publication can be expected to play in all of this? Even more to the point, I think, is the prospect of spending extensive amounts of scholarly time and energy examining, debating and parsing such standards when such momentous events are in the offing.

    Given the time it takes to prepare a serious scholarly work like Tummers’ essay, I can simultaneously find no fault with him for making the effort, but suggest that we either link this issue to the more serious issues of the day or set this question aside for the moment. In the rising tide of closed society thinking, open data is an increasingly long term

    My reaction comes down to this: Adopt these guidelines or not. It probably won’t make much immediate difference, in either case. But isn’t it time to connect this question to more serious matters?

    • Dear Roger,

      Thank you for your enlightening response. There are indeed more important problems than what we are discussing here. On the other hand, an open research culture can increase the validity of our work. Valid research is crucial for our legitimacy and impact in the world.

      We decided to have an open discussion on the guidelines to check with the community whether we are on the right track and to analyze which important issues we need to tackle before/during the journey. Based on, among other things, the discussions online (and via email/face-to-face meetings), we will decide whether/how to move forward.

      Best,
      Lars

    • Hello Roger,

      I am from the Center for Open Science and work on the TOP Guidelines. The degree of urgency is of course up for debate, but given that science and evidence-based reasoning *should* play such a critical role in the democratic process, we are all obligated to make the output of our work as transparent and rigorous as possible. Since we are all humans, that requires rewarding scholars for transparent and idealized scientific practices, not questionable practices that lead to less reproducible results.

      The TOP guidelines, especially when applied at their more stringent levels, do just that. Preregistration of analysis plans makes clear what was specified before conducting the study. Registered Reports add peer review to that process, so that expert opinion can weigh in on the design and conduct of a study before being unavoidably biased by the results.

      When these issues come back to the Big Questions of our era, I think that rewarding transparency is a necessary condition for a more inclusive and fair society.

      Best,
      David

  11. The issues discussed in Lars’ piece are complex and have significant implications not only for the Journal but also for the professional success and development of public administration scholars and practitioners. While Lars alludes to these, they need to be studied, discussed and analyzed in a consistent and collaborative way.

    I have been working on academic incentives and their role in bridging the academic and public administration sectors. Many of the incentives for a successful academic career are disincentives to collaborate with those practicing in the public administration sector. One of the most significant disconnects is in the weight that curated publications hold for tenure and promotion decisions, including the standing of the journal as well as how often one’s research is cited. Another is the need to demonstrate that one’s research has, and is, making a unique contribution to one’s field. These may serve to deter researchers from replication.

    Several other factors may also come into play, including the requirements and expectations placed upon those providing research funding (will funding be available for null results and replication?); data (the richer the data source, the deeper the repository into which the researcher can dip for future research); and the governance of this type of collaboration and approach to publication (who will establish guidelines and procedures, and how will we measure success?).

    Despite the potential obstacles and challenges adoption of TOP may present, it also may open rich opportunities for us to make our field more relevant. So, I think we need to have more formal and serious discussion of the adoption of TOP and other possible shifts.

    • Hello Angela,

      I work at the Center for Open Science on the TOP Guidelines. I absolutely agree with the effects that publication, citation, and funding have on the professional success of a scholar. That is one of the guiding principles of the guidelines: to find ways to reward transparency given these reward structures.

      So, for example, journals that encourage replication studies through the “Registered Report” format (https://osf.io/8mpji/wiki/) will reward research contributions to this currently unrewarded aspect of scientific practice. Finding unique and novel contributions on top of the replicated study becomes clearer once the precise methodologies undergo peer review before results are known, and so can also further one’s career.

      The TOP Guidelines have also been adapted for funding agencies to reward transparent practices (e.g. https://osf.io/bcj53/).

      Best,
      David

  12. Thanks for writing this thoughtful piece and adding to the growing discussion of open data and replication. Overall, I support the movement, but let me add one concern related to open data: reanalysis (Christakis and Zimmerman, 2013). With open data comes the opportunity for others to reanalyze results as a way to discredit the authors or refute the findings rather than to advance science. If the results of a published study are harmful to one’s research agenda or funding stream or piece of legislation, there are strong incentives to look for flaws or, given one’s bias, simply produce different results. As Beth Gazley mentioned, one could easily produce different results based on different assumptions, models, etc. This risks producing a “scientific cacophony” rather than a clearer understanding of the phenomena of interest. And when the reanalysis refutes the original analysis, does that imply the original authors were wrong, used faulty methods, or worse, hacked their results? Or does it mean that those conducting the reanalysis did?

    As a field, I think we need to be careful with open data to protect the owners from potential outside attacks that arise not in the positive spirit of reproducibility, progress, and transparency, but as a means to discredit findings we don’t like. Levy and Johns (2016) discuss how the tobacco industry launched a “sound science initiative” in the 1990s. This initiative pushed for open data, but the goal was simply to obtain the data in order to cast doubt on the growing evidence linking secondhand smoke to lung cancer.

    Writing on the topic of reanalysis, Lewandowsky and Bishop (2016) discuss the differences between checking and undermining. They write that “Even when data availability is described in papers, tension may still arise if researchers do not trust the good faith of those requesting data, and if they suspect that requestors will cherry-pick data to discredit reasonable conclusions. Research is already moving towards study ‘pre-registration’ (researchers publishing their intended method and analysis plans before starting) as a way to avoid bias, and the same strictures should apply to critics during reanalysis. In general, critics and original researchers should obey symmetrical standards of openness and responsibility and be subject to symmetrical scrutiny concerning conflicts of interest.” The authors offer 10 red flags “that can help to differentiate healthy debate, problematic research practices and campaigns that masquerade as scientific inquiry.”

    While issues related to reanalysis may be more pronounced in the medical, health, and environmental fields, we should proceed carefully. Open dialogues and discussions such as this can help develop frameworks capable of limiting the bias of those conducting the reanalysis, reduce the potential for abusing open data, and avoid a “scientific cacophony” that could harm our progress.

    Christakis, D.A., Zimmerman, F.J. Rethinking reanalysis. JAMA. 2013; 310(23): 2499-2500. doi:10.1001/jama.2013.281337.

    Lewandowsky, S., Bishop, D. Research integrity: Don’t let transparency damage science. Nature. 2016; 529: 459-461.

    Levy, K., Johns, D.M. When open data is a Trojan Horse: The weaponization of transparency in science and governance. Big Data & Society. 2016; 3(1).

  13. Dear Michael,

    You make a truly important addition to the debate. Information is power, and power can be abused. As authors in your cited articles state (Lewandowsky & Bishop, 2016:460):

    “The progress of research demands transparency. But as scientists work to boost rigour, they risk making science more vulnerable to attacks. Awareness of tactics is paramount. Here, we describe ways to distinguish scrutiny from harassment.”

    In a similar vein, Levy and Johns note (2016:1):

    “Openness and transparency are becoming hallmarks of responsible data practice in science and governance. Concerns about data falsification, erroneous analysis, and misleading presentation of research results have recently strengthened the call for new procedures that ensure public accountability for data-driven decisions. Though we generally count ourselves in favor of increased transparency in data practice, this Commentary highlights a caveat. We suggest that legislative efforts that invoke the language of data transparency can sometimes function as “Trojan Horses” through which other political goals are pursued. ”

    In my article, I focused on the argument that we need to improve rigour in our discipline, and transparency is one way to do so. I paid far less attention to the ‘dark side’ of transparency (as also noted by Beth Gazley).

    If we move forward with the TOP guidelines, I propose that we keep a close eye on this issue. We can, among other things, take into account the ‘red flags’ indicated by Lewandowsky & Bishop and the notion that the same quality standards should apply to the ‘first’ authors and to the authors who reanalyze.

    Best,

    Lars

  14. I very much enjoyed reading the thoughtful essay and I recognize many of the concerns raised. However, I feel I must introduce a note of skepticism.

    My comments are personal and somewhat off the cuff but I hope they will be interesting and will advance the debate. I will offer three comments.

    First, qualitative researchers cannot and should not share data, as IRBs simply will not agree to it, and a responsible researcher should control or eliminate the risks to their research subjects. A move to ‘sharing’ data would effectively shut qualitative research out, as ‘unshared’ data would immediately be rendered suspect. Moreover, qualitative studies (which are, by definition, contextual) would be dismissed as unrepeatable, and reviewers (who are often skeptical of qualitative research already) would be further emboldened to dismiss the methodology. I recognize that Prof. Tummers is aware of these problems and acknowledges that not all of the eight TOP criteria will always apply, but I suspect that an over-subscribed journal may end up dismissing anything that did not meet all the criteria. Editors and graduate assistants can become rule-following bureaucrats too!

    Second, I feel it is unlikely that a shift to transparency and shared data will produce greater examination of null findings and encourage replication. The medical and scientific disciplines already suffer from problems of fraud, unwillingness to publish null results, and a lack of replication (a problem acute in psychology). Simply placing more data out there would do little to address these issues, as they are driven not by data but by the fundamental structure of the tenure system – publish or perish – and by citation chasing, which encourages novelty.

    Finally, I cannot shake the (often unworthy) suspicion that the demands for transparency and data sharing would fall hardest on junior researchers outside of the largest universities. We are all aware of lots of poor-quality studies based on the ‘have method, will travel’ principle. I fear the subtle pressures and the tendency towards an oligopoly that would result from a sharing system in which original research becomes the purview of a few large universities who can afford to collect data, and everyone else ends up working off their data sets once the most interesting findings have been published.

    I end with the question of whether we are in danger of over-reacting to high-profile fraud in political science (LaCour and Green).

    • Hello Keith,

      I work at the Center for Open Science on the TOP Guidelines.

      I agree that many IRB proposals contain clauses that prohibit data sharing or even mandate data destruction. However, every IRB proposal is essentially an opportunity to bring these issues to the forefront of an interested and invested audience. As funding agencies continue to mandate data sharing, the issue will only intensify, and the need to create solutions that balance the benefit of open data with the need to protect identifiable information will increase.

      The most certain way to address the problem of publication bias in favor of significant findings, which distorts the weight of evidence, is through Registered Reports. RRs move peer review earlier in the process, and the methods, inference decisions, and decision to publish, are made before results are known. Other steps can move the needle somewhat, such as a researcher preregistering their study without peer review and then citing that preregistration in the final paper, but the editorial and peer review decisions will still be affected by the results, regardless of the rigor of the methods.

      I cannot speak directly to this fear, but I think that more studies need to have higher power. One solution is to make collaboration as easy as possible, so that three researchers at small colleges could have access to the same sample size as a single researcher at a large R1. We created the Open Science Framework (https://osf.io) to enable the activities for which we advocate (registrations, public and private workflows, data preservation, and collaboration).

      Over-reaction to high profile fraud? I cannot estimate the frequency of fraud, but I do know that every person responds to the incentives that reward their actions. If we reward transparency over exciting-sounding, but flimsy, results, then individuals will compete for that rigorous metric and not the one that leads to questionable, or fraudulent, activities.

  15. An interesting article and a concise summary of emerging concerns in the scholarly publishing realm. I very much support transparency, especially in light of increasing federal funding requirements for making research data publicly available.

    The details on research methodology are important; however, full disclosure must be balanced against the unintended consequence of reducing the number of words (pages) available for the substantive content.

    Greater utilization of research notes structured in a streamlined fashion, such as found in some medical journals, could encourage replication. However, I wonder about the impact this would have on graduate students – would replications be suitable for dissertation or thesis research?

    • Hello Aimee,

      I am from the Center for Open Science and work on the TOP Guidelines. Your concern about “feature bloat” is real: who can read a 30-page paper, half of which is detailed methodology only relevant to someone trying to replicate the work? A solution to that conundrum is the Open Science Framework (https://osf.io), a free and open source commons that allows researchers to connect and document their entire workflow. It can provide persistent, citable URLs that allow the content to be accessible in perpetuity (data are supported by an endowment).

      I suspect that replications will never be as “valued” as novel research, because it is simply not new knowledge, but confirmation of someone else’s idea. However, it needs to be rewarded because it is such an important and under-utilized tool in the process of science. Making it a small part of a typical dissertation may be one solution.

      Best,
      David

  16. In his Speak Your Mind essay, Professor Lars Tummers enthusiastically recommends Transparency and Openness Promotion (TOP) for public administration and PAR. Before launching headlong into such policies and practices, we believe that PAR editor James L. Perry—indeed, any editor—and members of the public administration community more broadly need to be aware of the vigorous and contentious debates that have been taking place in US political science around a similar initiative, known as Data Access and Research Transparency (DA-RT).

    DA-RT has been promoted by its proponents as an epistemologically neutral endeavor, necessary to the broad goal of research transparency. In the debates surrounding DA-RT policies, many, including ourselves, have challenged these assumptions. We and others have also pointed to a number of unintended consequences that merit detailed analysis **before** journals jump on a bandwagon of concerns—replication, e.g.—that have their origins in research designs central to the medical and natural sciences, in particular experimentation. We cannot reiterate here the full range of issues that have been put on the table in the political science discussions, but we provide citations below to those debates, including both promoters and those who have sounded criticisms of DA-RT and its implementation.

    Public administration research, like political science, is—as Professor Tummers recognizes at the end of his essay—methodologically diverse. Like DA-RT, TOP has the potential to brand scholarship whose researchers must request an exemption from the policy, as second best. That research is often qualitative or interpretive, involving direct interactions between researchers and research participants, raising questions about TOP’s additional impact on those methods. This also points to the relationship between TOP and IRB policies (as it did with DA-RT), matters that need to be ironed out before a new policy is adopted. Moreover, under TOP (level 2), as with DA-RT, researchers conducting most qualitative or interpretive research are likely to find it more difficult to proceed to peer review, given that editors will decide whether to grant exceptions at the manuscript submission stage. These are but a few of the possible problematic effects of TOP; others can be seen in perusing the sources listed in the bibliography.

    We note that classic articles in PAR recognize the plurality of modes of knowledge generation relevant to public administration. Consider, as one example, Mary R. Schmidt’s 1993 article “Grout: Alternative kinds of knowledge and why they are ignored” (Public Administration Review 53/6: 525-530). More recently, Evelyn Brodkin (2012) has commented on the ethnographic turn inspired by Michael Lipsky’s work and its “implicit encouragement to researchers to get out from behind their desks in order to investigate and even experience the realities of everyday organizational life” (2012, 946, original emphasis; from “Reflections on street-level bureaucracy: Past, present, and future,” Public Administration Review 72/6: 940–949). Would research such as this be published under a TOP-like editorial policy, with whose premises it does not conform?

    Finally, to put the matter in a broader context, in this case the useable knowledge deriving from various forms of research, we suggest reading Colin Talbot and Carole Talbot’s “Sir Humphrey and the professors: What does Whitehall want from academics?” 2014 POLICY@MANCHESTER report, in which the top method, at 77%, endorsed by the civil servants replying to their survey was case studies (Figure 5, p. 16; http://www.policy.manchester.ac.uk/resources/reports/, last accessed June 30, 2016). This survey finding belies the assumption that it is research most resembling the methods used in medicine and the natural sciences which is most valuable for public administrators—one of the concerns influencing the development of TOP-like policies.

    In sum, TOP, like DA-RT, needs in-depth assessment **prior to its adoption**, as it has the potential to damage particular research communities that contribute important kinds of knowledge using methods and methodologies that are not consistent with TOP methodological assumptions and research practices. Those who favor methodological pluralism should insist on this very careful assessment. For that matter, isn’t there a prior question that needs asking—whether TOP is, in fact, needed, let alone desirable? In this day and age of evidence-based policy-making, where is the evidence that “[t]ransparent reporting, replications and open data are vital for scientific progress and developing useful knowledge for practice,” the opening claim in Professor Tummers’s statement? What kind of science, what model of inquiry, is assumed in this statement? If TOP works for that kind of science—and we have not yet seen evidence that it or similar policies do—does it ipso facto work for other kinds? Is it only the “lack of incentives” (idem.) that is keeping scholars from falling into line with such procedures, or might there be good reasons for scholars’ reluctance that have nothing to do with incentives and everything to do with the kinds of inquiry they pursue? And does PAR want to, should it, become the police officer enforcing one model of science on an entire research community?

    Peregrine Schwartz-Shea, University of Utah
    Dvora Yanow, Wageningen University and Käte Hamburger Kolleg Senior Fellow (2016-17), Duisburg

    Bibliography: DA-RT discussion and debate in the American Political Science Association

    1. Background materials

    “Data Access & Research Transparency.” http://www.dartstatement.org/.

    “Dialogue on DA-RT.” https://dialogueondart.org/perspectives-on-da-rt/.

    “Qualitative Transparency Deliberations, on behalf of the APSA Section for Qualitative and Multi-Method Research.” https://www.qualtd.net/

    Elman, Colin, and Diana Kapiszewski. 2014. “Data Access and Research Transparency in the Qualitative Tradition.” PS: Political Science & Politics 47 (1): 43–47.

    Lupia, Arthur, and Colin Elman. 2014. “Openness in Political Science: Data Access and Research Transparency.” PS: Political Science & Politics 47 (1): 19–42.

    2. Responses from the editor of APSA journal Perspectives on Politics

    Isaac, Jeffrey C. 2015a. “For a More Public Political Science.” Perspectives on Politics 13 (2): 269–83.

    ———. 2015b. “Further Thoughts on DA-RT.” The Plot: Politics Decoded (blog), November 16. http://www.the-plot.org/2015/11/02/further-thoughts-on-da-rt/ (accessed May 9, 2016).

    ———. 2015c. “A Broader Conception of Political Science Publicity, Or Why I Refuse DA-RT and Yet Did Not Sign the ‘Delay DA-RT’ Petition.” The Plot: Politics Decoded (blog), December 3. http://www.the-plot.org/2015/12/03/a-broader-conception-of-political-science-publicity-or-why-i-refuse-da-rt-and-yet-did-not-sign-the-delay-da-rt-petition/ (accessed May 9, 2016).

    ———. 2016a. “Is More Deliberation about DA-RT Really So Good?” The Plot: Politics Decoded (blog), January 23. http://www.the-plot.org/2016/01/23/is-more-deliberation-about-da-rt-really-so-good/ (accessed May 9, 2016).

    ———. 2016b. “In Praise of Transparency, But Not of DA-RT.” Symposium, International History and Politics 1/2: 24-29.

    3. “Transparency in Qualitative and Multi-Method Research.” 2015. Symposium, Qualitative & Multi-Method Research 13 (1) (Newsletter of the Organized Section at APSA). http://www.maxwell.syr.edu/uploadedFiles/moynihan/cqrm/Newsletter%2013_1.pdf (accessed June 6, 2016). See in particular:

    *Cramer, Katherine. 2015. “Transparent Explanations, Yes. Public Transcripts and Fieldnotes, No: Ethnographic Research on Public Opinion.” 17–20.

    *Pachirat, Timothy. 2015. “The Tyranny of Light.” 27–31.

    *Parkinson, Sarah Elizabeth, and Elisabeth Jean Wood. 2015. “Transparency in Intensive Research on Violence: Ethical Dilemmas and Unforeseen Consequences.” 22–27.

    4. “Data Access and Research Transparency (DA-RT).” 2016. Symposium, Comparative Politics Newsletter 26 (1) (Newsletter of the Organized Section at APSA). In particular:

    *Hall, Peter A. 2016. “Transparency, Research Integrity and Multiple Methods.” 28–31.

    *Htun, Mala. 2016. “DA-RT and the Social Conditions of Knowledge Production in Political Science.” 32–35.

    *Lynch, Marc. 2016. “Area Studies and the Cost of Prematurely Implementing DA-RT.” 36–39.

    *Sil, Rudra, and Guzmán Castro, with Anna Calasanti. 2016. “Avant-Garde or Dogmatic? DA-RT in the Mirror of the Social Sciences.” 40–43.

    *Yashar, Deborah J. 2016. “Editorial Trust, Gatekeeping, and Unintended Consequences.” 57-64.

    5. “Debating DA-RT.” 2016. Symposium, International History and Politics 1 (2) (Newsletter of the Organized Section at APSA). Including:

    *Subotic, Jelena. “DA-RT Controversy: An Old Methodological War in New Clothing.” 2–4.

    *Grynaviski, Eric. “Thinking Holistically about Transparency.” 4–7.

    *Alter, Karen. “Has Active Citation Been a Boon for Replication? Lessons from Law Review Publishing.” 10–12.

    *Morrison, James Ashley. “Dearly Bought Wisdom: My Experience with DA-RT.” 12–16.

    6. Schwartz-Shea, Peregrine and Dvora Yanow. 2016. “Legitimizing Political Science or Splitting the Discipline? Reflections on DA-RT and the Policy-making Role of a Professional Association.” Politics and Gender 12 (in press).

    • Thank you for raising these issues. I am from the Center for Open Science and work on the TOP Guidelines.

      One of the core principles of the TOP Guidelines is to promote the most rigorous standards for the methodologies appropriate to a particular journal or sub-discipline, and I think that there is ample room to promote transparent practices for every type of research.

      As a specific example of research that follows many of the practices outlined in TOP, see work published by our group earlier this year (Kidwell et al., 2016: http://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.1002456). In this study, all of the published articles from 5 journals over a specified period were scored for rates of data and materials sharing. There was no specific hypothesis test or inference test made in the study, because the sampling was comprehensive. However, the methods were preregistered, and the analysis script and data were provided at the time of publication.

      “Like DA-RT, TOP has the potential to brand scholarship whose researchers must request an exemption from the policy, as second best.”

      I do not think that legal or ethical constraints to becoming more transparent will result in a perception of inferiority. On the other hand, scholars who are unwilling to make more parts of their workflow available to other researchers can and should have to justify that decision. I think it is very reasonable to expect that transparent research practices be considered the norm, and that exceptions be explicit and justified.

      “This also points to the relationship between TOP and IRB policies (as it did with DA-RT), matters that need to be ironed out before a new policy is adopted.”

      It is true that many IRB policies prevent data sharing. However, I do not expect that reality to change unless there is reason to do so. If published data sets are to become rewarded parts of the research lifecycle, then there will be gradual but persistent pressure to update IRB policies so that they take into account the benefits of data sharing while protecting potentially identifiable information. Until then, providing a statement that data cannot be shared because of a previously made promise not to do so will simply remind the community of the need to update those policies.

      “This survey finding (in which the top method, at 77%, endorsed by the civil servants replying to their survey was case studies) belies the assumption that it is research most resembling the methods used in medicine and the natural sciences which is most valuable for public administrators—one of the concerns influencing the development of TOP-like policies.”

      There are two points about the above conclusion that require consideration. The first is the notion that the methods behind a case study cannot be made transparent. I do not believe that is true, as there are many parts of a qualitative study that can and should be made as clear as possible: How were the answers coded? How were the subjects selected? Who was excluded from participating? How did the researchers draw conclusions from the case study data, and was that method specified before or after seeing the dataset?

      The second is the idea that policy makers are not, usually, scholars. We as a research community have an obligation to make our work as applicable to policy makers as possible, but if there is a type of research that is most easily used by policy makers, then perhaps other types of scholarship should take lessons from it and be made more accessible to that community. I do not think you would use the results of that survey to propose that theoretical analysis (30%), comparative analysis (59%), or operational research (41%) be abandoned. I do think it suggests that these types of research should be synthesized in a way that is useful, because they presumably have value. I suspect that case studies are preferred because they are written in an accessible and relatable way, but they should not be substituted for other types of evidence when inappropriate. And when case studies are the most appropriate form of evidence, they should be reported with as much transparency as possible.

      “Would research such as this (Schmidt 1993, Grout, Alt forms of knowledge; Street level knowledge) be published under a TOP-like editorial policy, with whose premises it does not conform?”

      I do not see a conflict between “case study” or “street level knowledge” research and transparency standards. If the researchers are transparent about the methods employed and the process by which they drew any conclusion, then the work would be publishable even under a TOP level 3 implementation.

      “In sum, TOP, like DA-RT, needs in-depth assessment **prior to its adoption**, as it has the potential to damage particular research communities that contribute important kinds of knowledge using methods and methodologies that are not consistent with TOP methodological assumptions and research practices.”

      I posit that the fears outlined above would not materialize if PAR were to encourage or mandate more transparent research practices. Furthermore, I posit that there are other, unexpected consequences that will only be uncovered as the standards are implemented. Finding solutions to those consequences will build tools and practices that allow more parts of the scientific process to be evaluated by the research community. That independent evaluation is a core value of science and should be supported.

  17. Professor Tummers wrote an excellent essay that raises fundamental issues with which I am very familiar, as I have taken part in these discussions in the political science realm from a qualitative researcher’s perspective. Many of the responses to Lars’ essay have addressed these points at much greater length than I could possibly do, and I can’t do justice to the debate in a few short lines, so I’m just going to raise a few points from a very personal perspective.

    1) I am wholeheartedly in favor of more open science, replications, transparency, pre-registration and a number of elements of a more robust, causal-mechanism-driven social science. I work with experimental methods (I was originally trained as a chemical engineer, so I did experiments before experiments were cool in social sciences) and therefore I espouse many of the views of several posters here, including Professor Tummers.

    2) I am trained as a political scientist who works in a public administration department. As a result, I have read (and written about) the debates (which were actually happening in political science well before public administration caught up to them). This debate hasn’t ended and is still heated. The LaCour and Green and Alice Goffman issues have only contributed to elevating the level of discussion and the need for an open conversation about this.

    3) I do, however, conduct qualitative (primarily interview-based, ethnographic and discourse analysis) research. I don’t want to enter into a discussion about whether the epistemology and ontology of the social sciences should be divided along qualitative and quantitative lines. This debate is never going to end, so I strongly recommend reading Tom Pepinsky and Jay Ulfelder on why the discussion of the qual/quant divide may actually be detrimental rather than useful.

    4) Because I conduct ethnographic work in very vulnerable communities, I am rather wary of sharing raw data (e.g. my field notes) with anybody and leaving it open to their interpretation. Identifying vulnerable populations in any kind of social science reporting is frowned upon, and many would see it as actually harmful. The IRB principles are there for a reason. We need to protect the communities we study and avoid any kind of harm to them.

    5) Let’s not forget that I’m a pre-tenure, tenure-track professor (a factor that disincentivizes me from sharing raw data: what if someone who writes or analyzes faster than I do produces an analysis from my data before I possibly can? My tenure committee isn’t going to judge me on data production; they’ll judge me on publications in high-impact journals).

    6) Even when I do quantitative analysis, I very often create my own datasets and therefore I am wary of sharing them before I have exploited them (see point 5). Transparency is rad, and I’m all for letting people judge the quality and rigor of my work by examining my databases and how I processed them (e.g. publishing code, etc.), BUT I am not okay with someone publishing a paper with my database BEFORE I get a chance to do so. Again, we are not rewarding data production; we are rewarding journal article/book publication.

    7) What I think is missing from the conversation is a series of tables (sorry, I think in diagrams and tables) laying out possible scenarios and then describing EXACTLY what the TOP guidelines would require of a researcher in order to comply. For example: my next paper is on the politics of water privatization in Mexico. Do I need to publish the raw data? Do I need to submit the raw transcripts of my interviews? Should I post them anonymized (anonymization will reduce the ability of researchers who might want to replicate my study to learn about the contextual elements of my analysis)? I think that’s what is missing: a simple, visual, easy guide with several scenarios, containing MANY examples from the qualitative and interpretive traditions.
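
    For instance, here is a rough sketch of the kind of guide I have in mind, applied to a hypothetical interview-based study like the one above. This is only my own reading of the published TOP level definitions, not an official interpretation, and it would need to be checked against the guidelines themselves:

    Level 1 (Disclose): the article states whether anonymized transcripts, the interview protocol, and the coding scheme are available, and where.
    Level 2 (Require): those materials are deposited in a trusted repository, and any IRB- or confidentiality-based exceptions are declared at submission.
    Level 3 (Verify): the deposited materials are independently checked against the reported analysis before publication.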

    In summary, do I think public administration scholarship could use more transparency? For sure. We need to be able to test whether the claims in PA journals can actually be sustained by the evidence presented and the data collected. But PA suffers from the same emerging chasm between quantitative and qualitative research. I don’t want PA to fetishize quantitative analysis as more rigorous. There is plenty of qualitative, interpretive, ethnographic research that is robust and analytical. I want a more rigorous PA scholarship and stronger, more robust, testable research designs. That’s what I want, and that’s what I hope we can all contribute.

    (And yes, I just saw the 2016 IPMJ summary article on ethnography in public management research, BTW.) I also participated in the session at PMRC where Ospina, Esteve and their PhD student presented their analysis; to quote Esteve from Twitter: “7.5% qual studies in top PA journ. last 5 years. Presenting it at PMRC & S. Ospina :)”

    Thanks also to Andy Whitford and Don Moynihan for excellent discussions on Twitter on this topic.

  18. As an empirical researcher who has had an article replicated with the same data I used, I am in favor of open data and transparency. I see these as two separate things. To me, the most important transparency issues, in order of importance, are:
    1. Publication of important (quantitative) findings where the true hypothesis is that the null cannot be rejected, or at least where that is the finding.
    2. Publication of replication, or repeated examination studies.
    3. Open data.

    For the first topic, I recommend that PAR (and other public administration related journals) commit to using 10-25% of their research publication space for articles of this sort. This commitment would encourage researchers to pursue topics where they suspect that some policy, practice, or other potential cause or correlate has no discernible effect or correlation, and to pursue publication even when the expected result is not found. Such pursuit would not be limited to the spare time of fully promoted professors.

    For the second topic:
    With respect to replication, clear benefits are: 1. Verifying that the results are as reported. 2. Validating information about the data. 3. Keeping researchers honest.

    With respect to repeated examination, I tire of responding to the question: “What is the new contribution?” There are at least three reasons why every study should be repeated quite a few times: 1. A repeated examination may be as close as one can get to replication. 2. Human subjects may be notably variable; even with the highest-quality research methods, one study is insufficient to settle any matter about them. Further, studies may not use random sampling across the entire universe over which the researcher wishes to imply the results can be generalized. The researcher may use cautious wording to account for this lack of generalizability, but in the absence of repeated studies, the results may be treated as more generalizable than they really are. As a notable example, revenue forecasting bias differs across regions and across jurisdictions facing different fiscal realities. Without repeated studies, this fact may not be discovered. 3. Results may not hold up over time.

    For the third topic, while open data is, in principle, desirable, it is incumbent on its proponents to address the potentially substantial obstacles:
    1. Researchers may not have access to an appropriate platform.
    2. Junior researchers may fear the loss of control over data that is key to their tenure and promotion.
    3. Data may be restricted due to human subjects requirements.
    4. Preparation of data for publication on an open platform may be time-consuming and costly. The social sciences are generally not as well funded as the physical sciences; consequently, this barrier may be particularly hard to overcome.

