Maria Chiara Pievatolo, The Scale and the Sword: Science, State and Research Evaluation
Presentation for English-speaking readers and abstract
If this were only a domestic issue, the fact that some of the international literature on research assessment in Italy appears misleading to many Italian-speaking researchers would not be so important. Now, however, the ANVUR, the Italian agency for research assessment appointed by the government, is participating in the research assessment reform process initiated by the COARA coalition, in a way that is not only inconsistent, but may put the entire COARA project at serious risk of failure. Therefore, we decided to present a translation of a 2017 article dealing with Andrea Bonaccorsi’s closed-access book La valutazione possibile. Teoria e pratica nel mondo della ricerca (Bologna: Il Mulino, 2015). Andrea Bonaccorsi is a former member of ANVUR’s board of directors, who has attempted to provide one of the broadest theoretical justifications for the Italian research assessment system, which is pervasive, centralized, mostly bibliometric, and under government control. A highly abbreviated English version of Bonaccorsi’s argument, which is also behind a paywall, can be found here.
Anonymous peer review has been, and continues to be, an important part of the process leading to the publication of an article in scholarly journals that are still built on the “affordances” of print technology. Two or more scholars from suitable disciplinary fields, chosen at the discretion of the journal’s editorial board and protected by anonymity, are asked to give an ex ante opinion on the acceptability of a text for publication. What the reviewers reject never sees the light of day, nor do their opinions and any discussions with the authors.
The Italian version of this essay, on the other hand, was born – together with a twin written by the jurist Roberto Caso – as an experiment in open peer review, which at the time was – and still is – rather unusual in Italy, where the state assessment of research requires anonymous peer review in order for a work of scholarship to be considered scientific. Open and post-publication peer review, however, would make it possible to moderate centralized and hierarchical evaluation systems by making the entire discussion public, recognizing the merits of the reviewers, and exposing any conflicts of interest.
To deal with Andrea Bonaccorsi’s justification of state evaluation of research, we will pretend – for the sake of discussion – that the system he theorizes produces a faithful snapshot of the way the scientific community evaluates itself. But even so, it can be shown that his justification leads to a research evaluation system that is practically despotic and theoretically retrograde. The system is despotic because it transforms an informal and historical ethos into a fixed rule of administrative law, which ceases to be an object of choice for the scientific community. And it is retrograde because, by establishing this rule, it blocks evolution in a still image, like the Sleeping Beauty’s castle, which cannot be overcome without further bureaucratic intervention.
In addition to the main argument, there are two ancillary parts: the first deals with the question, proposed by Bonaccorsi, of the empirical verifiability of some of the criticisms made against him; the second is an examination of a sample of quotations used by him to support some important passages. Finally, the conclusion briefly outlines the ideal and critical perspective of open science that inspires this paper.
In this spirit, whenever possible, we have cited legally accessible versions and reviews of the paywalled sources that Bonaccorsi prefers, so that the reader can check our argument without having to overcome further economic barriers.
A final warning: the term “state evaluation” is modeled on the term “state capitalism”. Just as state capitalism is an economic system in which the state is involved in business and profit-making economic activity, state science and state evaluation of research suggest that the state, directly or indirectly through its agencies, is involved in defining what is good science and what is not. In Italian we use “di stato” after a noun both in a neutral sense (“esame di stato”: state examination) and in a polemical sense, as in “delitto di stato” (a crime committed by the state itself). Translating “valutazione di stato” as “centralized evaluation” would lose the nuances of the Italian expression, while the adjective “governmental” might suggest a Foucauldian undertone that would betray the spirit of this essay, whose main point is closer to Kant: discussing how research is evaluated is a waste of time if we do not address the question of who is entitled to do it.
- Introduction: an impossible dialogue?
- What is meant by ‘research evaluation’?
- Science as a social institution
- Caesar est supra grammaticos
- “Stop, you are so beautiful!”: research evaluation as a theoretical issue
- Rule of law, rule of men
- With moles’ eyes
- Black swans: a small (and incomplete) experiment in citation analysis
- For the public use of reason
- Bibliography
1. Introduction: an impossible dialogue?
Although it is open to public review, this article is not an open peer review of Andrea Bonaccorsi’s book La valutazione possibile.1 His book is behind a paywall, which is detrimental to the transparency of the debate and, for some non-monetary facets,2 to the interests of the author himself. Moreover, a debate can only be between peers if the parties are and see themselves as such. But this was not the case during the period when Bonaccorsi was a government-appointed member3 of the Governing Board of the ANVUR; and the very conclusions of his book show that he himself does not treat his fellow researchers as peers.
Why, Bonaccorsi wonders, has the Italian research evaluation system attracted so much criticism? According to him, the motives of its critics can be reduced to three:4
- the emotionality caused by the personal involvement of those who see research as something more than a job like any other;
- their misunderstanding of the technical and impersonal nature of the assessment of their work;
- for some, the fear that their illusion of being better than others will be exposed by the objective gaze of research evaluation and that their ‘heavy academic interests’ will be jeopardized.
Bonaccorsi, who reports that he had to be taught by Alessandro Pizzorno5 that “people can misinterpret the way they are judged”, observes that
all three explanations contain elements of truth. Among the opponents of evaluation, we inextricably find those who object out of a sincere love of science, those who rightly fear the undesirable effects of exclusion and discouragement, and those who defend themselves, their group and, more generally, their academic interests.6
When is the thinking of others likely to be reduced to a phenomenon to be explained psychologically, economically, socially or biologically, rather than a question to be answered? According to Robert K. Merton, an author often cited by Bonaccorsi,7 this happens when society is so divided that its groups develop impermeable, competing and mutually alien universes of discourse. Even the critics of evaluation seem so alien to Bonaccorsi that he degrades them from subjects of discussion8 to objects of explanation. If they resist evaluation, they must be either emotional, incapable of understanding, or dishonest with themselves and others. But if he sees us as such, it would be futile to try to discuss with him as equals: our arguments – reduced to a manifestation of emotionalism, inability to understand, or dishonesty – would not be taken seriously, not even to be refuted. Nevertheless, his book deserves to be examined as an attempt to provide a theoretical justification for political interference in research that is almost unprecedented, at least in states that claim to be governed by the rule of law.
2. What is meant by ‘research evaluation’?
Even we, as researchers discussing Bonaccorsi’s book, are ‘evaluating’ a ‘research product’ – to use ANVUR jargon;9 and likewise, when we cite or refer to a colleague’s text for reasons unrelated to academic strategies, we are performing an activity of ‘evaluation’ that is intrinsic to research itself. For Bonaccorsi, however, ‘evaluation’ means something more: “an exercise in the explication, formalization and aggregation of judgments already present in the communities of experts.”10
Who is doing this exercise? It is difficult to understand because Bonaccorsi usually uses passive or impersonal sentences11 without ever giving an explicit operational definition of evaluation and its actors. There is, however, one passage in which he makes a passing reference to its actors, typically – as we shall see – confusing the political and legal discussion with the scientific one. It is worth quoting in full, albeit temporarily out of context:
We have seen that Supiot makes an important polemical argument: equivalence conventions are an operation performed by both the legal and the statistical system, but the former is a transparent and contestable operation, while the latter is opaque and not subject to discussion. This is a false argument. In the field of evaluation, the equivalence convention has been a subject of discussion for at least half a century, ever since de Solla Price began to study the long-run dynamics of science and Garfield introduced citation analysis. It is so much a subject of discussion that there are several scientific journals devoted exclusively to it. The existence of a lively debate obliges decision-makers (governments, ministries, agencies) to justify their decisions in a documented way.12
In the above passage, Bonaccorsi invokes, albeit selectively,13 the authority of Alain Desrosières. According to Desrosières, the purpose of statistics is to group individual objects by reducing the abundance of reality to taxonomies, the construction and adoption of which open the way to further actions, both cognitive and political.14 These taxonomies produce conventional classes of equivalence – the conventions of equivalence mentioned in the above quotation – which are and should always be discussed. Discussion, in turn, is destabilizing because it disrupts ordinary administration and its taxonomies by attempting to construct an alternative. But it is also unavoidable if the future is not to be confined to categories elaborated in the past and for the past. Yet Desrosières, like the jurist Supiot15 and unlike Bonaccorsi, sees a tension here.16
One could compare this tension with the more general tension resulting from the fact that many debates bear simultaneously on substantial objects and on the very rules and modalities of the debate: the constitution, the functioning of the assemblies, the means for designating the representatives. Any constitution lays down the rules of its own modification. But that is the point: statistical information does not present itself in the same way. The “indisputable facts” that it is called upon to provide (but that it has also helped to certify) do not contain the modalities of their own debate.17
Bonaccorsi’s argument that the statistical “facts” on which politics and administration rely are still subject to scientific debate overlooks a not insignificant detail: while scientific debate can continue indefinitely in search of a conclusion that can no longer be refuted,18 political and administrative debate must end with decisions that are imposed even on those who disagree. It is precisely for this reason that the rule of law tradition has developed forms, procedures, checks and balances that are not needed in scientific debate.19 Bonaccorsi himself admits it: however much they claim to document their decisions, “governments, ministries, agencies”20 are “decision-makers”.
The research evaluation to which Bonaccorsi refers is therefore not the one carried out by scholars, among peers, but the one imposed by the state, by its ministries and politically appointed agencies, such as the Italian ANVUR. As can be seen from the exegesis of his text, Bonaccorsi takes this circumstance for granted, although he never explains it in detail, preferring instead to dwell on an attempt to demonstrate the theoretical correctness of his reconnaissance operation rather than its scientific and political justification.
3. Science as a social institution
When, as in Italy, the evaluation of research is a state evaluation, carried out by a government-appointed agency with strong and intrusive administrative powers, it is necessary to address the preliminary question of its legitimacy. In order to separate the scholarly discussion about the way in which research is evaluated from the question of the legitimacy of the evaluating agency, we will provisionally concede something controversial, namely that the evaluation based on Bonaccorsi’s canons provides an accurate picture of the quality of research in the country in which the evaluation takes place. Would a state evaluation of research that was, in hypothesi, scientifically flawless be legitimate in a republic that wanted to continue to uphold the tradition of the rule of law?
Bonaccorsi presents the evaluation of research, which he oversaw as a member of the ANVUR board, as a reconnaissance operation that merely makes explicit and formalizes canons already shared by the scientific communities subject to it. This exercise, he argues, can be based on R.K. Merton’s sociology of science.
For my part, I have no difficulty in starting with the main normative model of modern science, that of Robert K. Merton. In its best-known formulation, scientists are universalist, communitarian, disinterested and skeptical.21
Merton refused to present his sociology of science as a sociological theory of knowledge aimed at identifying the alleged social foundations of valid knowledge. He intended his sociology of science to be descriptive and empirical, despite some later charges of idealization22 against it.23
Science is a deceptively inclusive word which refers to a variety of distinct though interrelated items. It is commonly used to denote (1) a set of characteristic methods by means of which knowledge is certified; (2) a stock of accumulated knowledge stemming from the application of these methods; (3) a set of cultural values and mores governing the activities termed scientific; or (4) any combination of the foregoing. We are here concerned in a preliminary fashion with the cultural structure of science, that is, with one limited aspect of science as an institution. Thus, we shall consider, not the methods of science, but the mores with which they are hedged about. To be sure, methodological canons are often both technical expedients and moral compulsives, but it is solely the latter which is our concern here. This is an essay in the sociology of science, not an excursion in methodology. Similarly, we shall not deal with the substantive findings of sciences (hypotheses, uniformities, laws), except as these are pertinent to standardized social sentiments toward science. This is not an adventure in polymathy.24
In dealing with the institutionalized ethos of the modern scientific community, Merton understands the institution in its general sociological sense as a system of formal or informal and historically situated social norms. His sociological study of the scientific ethos is normative only in the sense that it examines the ways in which society influences thought and discovers and describes socially accepted norms. It is not normative in the sense that it claims to impose them, once discovered and formalized, as criteria for scientific verification. If Merton had understood his work in the latter sense, he would have had to brand as invalid, for example, the foundations of Western mathematics because they were developed in groups, such as the Pythagorean community, whose esoteric practices were very different from those of modern scientists.
However, let us concede, for the sake of argument, that the snapshot of the ethos of the scientific community that Bonaccorsi thinks he can take using Merton’s theory is perfectly accurate. Going far beyond Merton’s intention,25 let us even concede that it can also form the basis of an evaluation of research carried out by a government agency such as ANVUR. And let us also imagine that such an agency evaluates research using a combination of the following criteria: bibliometric indices calculated on the basis of proprietary databases, anonymous peer review of texts that may be behind paywalls and, in the field of the humanities and social sciences, the opinions of expert committees and the inclusion in lists of scientific journals and journals of excellence (“di classe A”) based on their reports.26 Finally, suppose that the government-appointed authority in charge of the research evaluation justifies itself to the researchers as follows: “I am not imposing anything on you other than what you are already imposing on yourselves: my criteria are indeed just an up-to-date elaboration of what you have already informally developed in the course of modernity.”27 Would it be right?
An Answer to the Question: What is Enlightenment?,28 written by Immanuel Kant in 1784 and carefully avoided by Bonaccorsi,29 can help to develop at least two arguments that justify a negative answer.
4. Caesar est supra grammaticos
Kant’s essay on the Enlightenment (Ak VIII 39-40) considers the case of a society of clergymen, governed as an aristocracy or even as a democracy, which, after a process of free deliberation, commits itself to a certain unalterable creed. Is such a deliberation morally acceptable? Kant resolutely denies it: such a commitment would ‘violate the sacred rights of humanity’. Even if it were democratic, an ecclesial community that opted for such a perpetual commitment would inconsistently deprive future generations of the same right to debate and think for themselves that it has enjoyed in deliberating its own doctrine. An ethos that is codified as binding ceases to be a freely accepted practice and becomes a coercive legal norm. But this very obligation cuts off the root of its legitimacy, which is the free choice of the community of reference, not only yesterday or the day before, but also today and tomorrow.
Kant, who lived under an absolute monarchy, remarks that, in general, “the touchstone of whatever can be decided upon as law for a people lies in the question: whether a people could impose such a law upon itself” (AK VIII 39); “what a people may never decide upon for itself, a monarch may still less decide upon for a people; for his legislative authority rests precisely on this, that he unites in his will the collective will of the people” (AK VIII 39-40). Similarly, what the scientific community cannot decide for itself, except by depriving future generations of the freedom of research that it has enjoyed, can still less be imposed on it by a government authority, even in the limited case where the evaluation methods are actually error-free.
The extension of Kant’s argument from religion to science is not based on an arbitrary interpretation. The essay on the Enlightenment continues as follows (AK VIII 40):
It even infringes upon his majesty if [the monarch] meddles in these affairs by honoring with governmental inspection the writings in which his subjects attempt to clarify their insight, as well as if he does this from his own supreme insight, in which case he exposes himself to the reproach Caesar non est supra grammaticos, [Caesar is not above the grammarians] but much more so if he demeans his supreme authority so far as to support the spiritual despotism of a few tyrants within his state against the rest of his subjects.
This passage suggests another question: what if the government’s assessment is not perfect, as we have conceded for the sake of argument, but is based on questionable criteria and metrics?
Kant was dealing with a monarch, albeit an enlightened one, but his arguments against monarchical interference can be extended to the sovereign people, and a fortiori to those who claim to be acting in their service by virtue of opaque government appointments.30
According to the Enlightenment essay, the state tries to interfere in research either by having its agents participate as peers in the scientific conversation or by directly using its power to establish the supremacy of a school of thought.
The answer to the first type of interference can be the same as that given by Bishop Piacentius to Sigismund of Luxembourg, who, not wanting to be corrected for his Latin errors, claimed to be above grammar as Emperor of the Holy Roman Empire: Caesar non est supra grammaticos. If a state evaluator of research presented himself as a peer in the scientific conversation, he could not impose his evaluation criteria by coercion and, above all, he would jeopardize the very authority that invested him. Wanting to discuss Latin grammar, Sigismund found himself in an embarrassing dilemma: either accept the correction of a subject against his own majesty, or impose his own error on grammar by the sword.
If the state chooses the second horn of the dilemma, it will have research evaluated by offering the use of its sword to those scientists most inclined to comply. The result will be a state-controlled science that, as in the darkest pages of 20th century history,31 will not only host and impose state errors, but will also systematically stifle any true and free scientific debate.
In the secret article of the Perpetual Peace (AK VIII 369), Kant draws a comparison between philosophy, an inferior faculty in the university system of his time, and two superior and vocational faculties, jurisprudence and theology, the former at the service of the reason of the state and the latter at the service of revealed truth.
The lawyer who has made his symbol the scales of right along with the sword of justice does not usually make use of the latter merely to keep all extraneous influences away from the former, but when one side of the scales refuses to sink he puts the sword into it (vae victis) and a lawyer who is not also a philosopher (at least in morality) is greatly tempted to do so, since his office is only to apply existing laws but not to investigate whether such laws themselves need to be improved, and he counts this rank of his faculty, which is in fact lower, as higher because it is accompanied by power (as is also the case with the other two faculties).32
What Kant says about jurists who are not also philosophers can be extended to all scholars who are organic to political power. The scale stands, in particular, for justice and, in general, for the disinterested judgement that should be appropriate to research. But those who also wield the sword of service to the state will be tempted to throw it on the scale, like Brennus, to put an end to the debate with an iron argument. The majority will bend the knee; those who do not will suffer the consequences. Peer review will become impossible, both for the many who submit and for the few who dissent, who will face a debate that is no longer disinterested but inevitably agonistic.
The evaluator who is empowered by the sword of the state would like to have a scientific legitimacy by claiming that he is using as a metric the same scale that is recognized by the majority of researchers. If this is the case, however, it is one of two things:
- If the state evaluation replicates the weighing already done by the scientific community and gets the same result, then it is superfluous because it reproduces the same value structures – or, less optimistically, academic power structures – that already exist.
- On the other hand, if the state evaluator claims to use the same scale, but gets – and enforces – a different result, then he must have added another weight to the scale: the unspoken and unjustified33 weight of the sword.34
There are not only empirical studies showing a correlation between state evaluation and academic conformism, but also a widespread institutional awareness of the lack of diversity it creates.35 However, in our argument, which focuses on the future and freedom rather than the past and its burdens, this circumstance is only accidental. Even if researchers were free and strong enough to refuse to comply with the evaluation agency, our conclusion would remain: state evaluation of research is a despotic exercise.
5. “Stop, you are so beautiful!”: research evaluation as a theoretical issue
Although he served as a government official in a country whose constitution recognizes, in addition to general freedom of expression, a special freedom for the arts, sciences and their teaching,36 Bonaccorsi takes the legitimacy of government evaluation for granted. The only criticism he acknowledges is theoretical: “The basic critical argument is that it is not possible to make qualitative judgements on research that have the qualities of independence and impartiality necessary for the evaluation to have the character of neutrality.”37 Thus, from his point of view, the issue can be reduced to a struggle between two opposing positions.
- Evaluation of research is possible because scientific communities have developed their own intersubjective criteria, which are substantiated by citations, as R.K. Merton’s sociology of science supposedly shows; the evaluator only needs to formalize an already socially shared ethos.
- The evaluation of research “does not reflect reality”,38 because the scientific community evaluates itself and builds its internal hierarchy on the basis of social power relations, which also influence the purposes and ways in which researchers cite each other.
This dichotomy pits Merton against authors as diverse as Kuhn, Bourdieu and Foucault. In Bonaccorsi’s book, such an aut aut, certainly more articulated39 than summarized here, is central to his argument. Taken seriously, it leads the reader to believe that the critique of state research evaluation and its bibliometric indicators depends on relativistic positions that resist the evidence of numbers and reduce science to a system of hierarchies and power. Bonaccorsi’s aut aut, however, can be dialectically resolved into an et et, by asking whether relativism is really necessary to discover, as sociologists of science, hierarchies and power constellations in the academic communities we study.
Pierre Bourdieu, for example, would have given a negative answer. In his preface to the English edition of Homo academicus, he rejects postmodern irrationalism: a sociology that reflects on itself, its own academic practices, their social determination and their influence on its own scientific discourse can make itself epistemologically more self-aware and thus freer from this very determination.40 According to Merton himself,41 the sociology of science describes the ethos of the scientific communities it studies, but it does not treat any particular historically situated ethos as a necessary and sufficient condition for the validity of science, because it is not a sociological theory of knowledge. If historians or sociologists of science describe the Pythagoreans as an esoteric sect, they are not obliged to treat the Pythagorean theorem as a power device; nor are mathematicians obliged, if they find its demonstration within the formal system of Euclidean geometry convincing, to recommend the closure of public universities and their transformation into sects. Sociological description – which insists on historically situated objects – is one thing, the validity of science is another.42 Bonaccorsi has difficulty distinguishing between the two, only because he inadvertently assimilates the ethos described by sociologists to a norm of validation prescribed by evaluators.
Let us suppose, for the sake of argument, that Bonaccorsi’s text was written some forty years ago, that it photographs in hypothesi a happy situation43 in which the ethos of the historically existing scientific community is perfectly congruent with the conditions of validity of science, and that he is therefore right to write that
The criteria used for the ex-post evaluation should be the same as those used for the ex-ante selection of contributions when they are accepted for publication in scientific journals or in the series of scientific publishers.44
Let us assume that his representation of the scientific community, based on his reduction of the Mertonian sociology of science, is unchallengeable and that there is a general consensus on the use of bibliometrics and anonymous reviews for evaluation purposes. Let us suppose, then, that scientists compete with good sportsmanship for recognition of their own originality through publication in prestigious journals, and that they cite the work of others only to pay their debts to them. In such a situation, would we have sufficient theoretical reasons to say “Verweile doch! Du bist so schön!” and to distill from the snapshot of that moment a set of formal criteria for evaluating research?
The answer is no, for at least two reasons:
- The fatal gesture of freezing the moment in order to extract a metric from it crystallizes as universal and necessary something that is, as empirical, particular and contingent;45
- The establishment of metrics has a reflexive effect on researchers, leading them to adopt strategic behaviors that undermine their very capacity for measurement. As Mario Biagioli writes in Nature,46
All metrics of scientific evaluation are bound to be abused. Goodhart’s law (named after the British economist who may have been the first to announce it) states that when a feature of the economy is picked as an indicator of the economy, then it inexorably ceases to function as that indicator because people start to game it.47
The reflexivity that turns a measure into a target is the uncertainty principle of the social sciences: quotations can only be the currency of science if they are the currency of nothing else. In this respect, state evaluation is not only practically despotic, it is also theoretically retrograde, in more than one respect.
6. Rule of law, rule of men
A reader of Bonaccorsi might think that this conclusion – state evaluation is a practically despotic and theoretically retrograde operation – does not do justice to his argumentative effort. Indeed, our analysis has mostly treated it as irrelevant, because it has addressed the preliminary question of the practical and theoretical legitimacy of a state evaluation based on enforcing, as absolutes, the results of mere empirical generalizations. The reader may wonder, however, whether it is not our very concession in hypothesi that lends his claims a dogmatic arrogance that goes beyond the author’s intentions. Actually, Bonaccorsi seems to present the evaluation of research as a process based on equality and friendship: the very competition for recognition of the priority of a discovery transcends individual egoism because it is necessary for social certification, even when scientists like Darwin felt that it contradicted the spirit of disinterested research.
And so, Merton concludes, we need institutions that remove scientists from these dilemmas: “Peers and friends in the scientific community do what the tormented Darwin will not do for himself.”48
In the passage quoted by Bonaccorsi, Merton was referring to a famous episode: Darwin was working on his theory of evolution when he received an unpublished article from the younger Alfred Russel Wallace with conclusions so similar to his own that he considered abandoning publication. Charles Lyell and Joseph D. Hooker – the “peers and friends” of Bonaccorsi’s quote – stepped in to ensure that credit was given to both by organizing, at the Linnean Society in London, the joint public reading of the two naturalists’ papers that sparked the Darwinian revolution. The year was 1858.
Darwin’s ‘peers and friends’, however, were not state evaluators of research. Their pressure to publish was a friendly insistence, very different from the publish-or-perish of today’s bureaucratized and normalized science, which did not spare even future Nobel laureates.49 The Victorian gentlemen involved belonged to a small community that shared an informal ethos, in a network of interpersonal relationships and face-to-face discussions, far removed both from the fetishism of ‘publication’ in commercial journals that brand scholarship and from administrative coercion. For a 21st-century state evaluator, to imagine himself as Sir Charles Lyell or Sir J.D. Hooker is not only innocently deceptive: it is, as we shall see, legally and politically dangerous.
The reader may object that however much the research evaluator of the twenty-first century may have lost his genteel demeanor, it would still be wrong to treat him as a despotic doctrinaire. This would be demonstrated by Bonaccorsi’s reference to Charles S. Peirce, whom he sees as a proponent of a fallibilist sociological theory of knowledge.
But only the scientific method guarantees the formation of correct beliefs in the face of confrontation with facts. The guarantee of correctness is not internal to reasoning, but derives from confrontation with an “external permanence” that can “affect every man”. This external permanence is science, understood both as the body of knowledge available at any given time and as the competent community that produces and disseminates that knowledge.
…there is no philosophical reason to conclude that, given enough time, it is not possible to find a solution to the scientific problem that will lead to agreement among the currently available beliefs. If we keep looking, in a few hundred years, ten thousand, a million, or a billion years, there is no reason to believe that the method of scientific inquiry will fail. This is, of course, a paradoxical response that could be translated in terms of a regulative ideal. Indeed, what is postulated is a community of competent people, of indefinite breadth, capable of carrying out research for an infinite time. One can speak of the construction of an asymptotic consensus.50
Research evaluation, if we apply to it the canons that Bonaccorsi believes can be derived from Peirce, would be the result of a process whose procedures can reach an intersubjective consensus among those who are recognized as authoritative in the relevant scientific community. This agreement has as its ideal horizon that of a possible truth that a community of competent researchers of indefinite size will arrive at in an equally indefinite time.51
But the article “How to Make Our Ideas Clear,”52 which Bonaccorsi quotes, is enough to show that Peirce’s position is not the one he ascribes to him.
The opinion which is fated to be ultimately agreed to by all who investigate, is what we mean by the truth, and the object represented in this opinion is the real. That is the way I would explain reality.
But it may be said that this view is directly opposed to the abstract definition which we have given of reality, inasmuch as it makes the characters of the real depend on what is ultimately thought about them. But the answer to this is that, on the one hand, reality is independent, not necessarily of thought in general, but only of what you or I or any finite number of men may think about it; and that, on the other hand, though the object of the final opinion depends on what that opinion is, yet what that opinion is does not depend on what you or I or any man thinks.
Peirce defines reality as “that whose properties are independent of what anyone can think they are.”53 Reality is, therefore, an external permanence in principle different from science, which is just a cumulative collection of thoughts. We, however, can only know real things through their effects: hence, only through our beliefs. How, then, do we measure their truth?
Peirce uses the measure of intersubjective agreement: truth is “The opinion which is fated to be ultimately agreed to by all who investigate” and the object it represents is the real. Can we conclude, then, that reality is science? This is Bonaccorsi’s conclusion, but not Peirce’s.
According to Peirce, the impossibility of bypassing the mediation of thought does not mean that reality, which he defined as independent of our thinking, is what “we” think: “reality is independent, not necessarily of thought in general, but only of what you or I or any finite number of men may think about it.” Therefore, the opinions of a particular existing scientific community, as the opinions of a finite number of people, are not “reality”.54 If the state evaluator were to identify the most widely held opinions, either through bibliometrics or through panels of appointed experts, and to adopt them as the standard for evaluation, he would be transforming the currently dominant opinions into norms. But such a method would be similar to what Peirce called the “method of authority”.55 For its actual reference would not be an ideal “community of experts of indeterminate breadth and infinite horizon,” but rather a group of limited breadth and finite horizon whose tentative beliefs would be elevated to the standard of judgment by the decree of some “decision-maker”.
The confusion between, on the one hand, the prevailing views within certain scientific communities subject to the power of the state and, on the other hand, the regulatory ideal of endless research in the asymptotic horizon of “thinking in general” is not without consequences. Because of this confusion, Bonaccorsi presents the evaluators appointed by the government not as government officials, but as researchers open to discussion, working among others in the vineyard of science. Again, this representation is not just an innocent deception: it is politically and legally dangerous. The distinction between the government of men and the government of laws of Plato’s Statesman (293a-296a) will help to understand why.
Plato wrote that in an ideal situation, where there are statesmen equipped with science, a government without laws is preferable. The law, in fact, is like an ignorant and stubborn man: it does not allow its commands to be transgressed and it does not accept being questioned, even if someone has come up with something better than what has been established.
In the world of politics, where science is lacking, the rigidity of the law is not a flaw because it keeps the exercise of power in check. The concepts and procedures developed over the centuries by legal tradition and the rule of law have gradually improved its functionality for this purpose.
In the dialogue Protagoras, Socrates cannot agree with his interlocutor on the method of discussion. The sophist Hippias suggests a conventional solution: elect a chairman who will regulate the debate by controlling the timing of each speech. Socrates objects (338b-338e), for a reason that Gabriele Giannantoni explains as follows:
Socrates rejects this idea, arguing that since such a person cannot be inferior to the disputants (because he would not be up to the task) or equal to them (because he would be useless), he must be superior: but who is “wiser” than Protagoras or superior to him? Indeed, an arbiter is useless in a free and common inquiry.56
Being scientific is not a question of merit or product, but of method or process: stiffening research procedures with rules and presidents, even if democratically elected, would mean freezing them and removing them from scientific discussion. Scientific communities whose orientation depends on a horizon of possible truth, such as that of Plato’s ideas or even Peirce’s reality, do not need a rigid law that crystallizes immutable criteria of quality, but an ethos that is only informally shared and can therefore easily evolve.57
If the state evaluator sees himself as a peer and a friend, participating, like Socrates, in an informal ethos, and forgets that his authority derives not from community recognition but from ministerial appointment, he will be inclined to practice the informality of the government of men at the expense of the formality of the government of laws. Peers and friends can afford to change their minds by changing the standard by which they judge their colleagues: but an administrative authority that ‘changed its mind’ about the criteria for evaluating research would be awarding rewards and punishments according to retroactive rules. A researcher can afford to change a text published on the web overnight; an administrative authority cannot change documents that publish its rules in the middle of the night without undermining legal certainty.
These examples are taken from the small and incomplete gallery of legal horrors collected by Roberto Caso58 while studying Italian evaluation practice. These horrors are not accidental: the theoretical root of the ANVUR’s difficulty in remaining within the bounds of the rule of law59 is a deception, or perhaps an illusion: the state evaluator believes that his authority depends not on his appointment by the government but on an alleged informal recognition by all researchers as his peers.60 In short, he believes that he enjoys the scientific title to exercise the government of men when he should be subject to the limits of the government of laws – assuming, for the sake of the argument, that state interference in the exercise of the public use of reason is justifiable.
7. With moles’ eyes
The fourth chapter of Bonaccorsi’s book aims to “use the tools of the social sciences to test whether there is large-scale empirical evidence to support the alarmism about evaluation, or even weak signals that may point to imminent and important risks”.61 His arguments are of marginal relevance to our critique, which portrays state evaluation as despotic and retrograde even while conceding in hypothesi that Bonaccorsi’s empirical generalizations62 are true for the past. However, his chapter deserves to be considered, at least in part,63 in order to suggest some not-so-original reflections on the use of experience in research governance.
In 1931, of the more than 1200 professors teaching in Italian universities, just over twelve refused to swear allegiance to the Fascist state. One of the regime’s newspapers could thus contemptuously describe them as “sublimated to one per thousand”.64 The rebels were indeed few by the standards of normal science. Similarly, the number of Jewish university professors dismissed under the racial laws in 1938 was negligible, though somewhat larger. A Bonaccorsi living in 1939 might have repeated that the chemical-political process by which a minority of professors went from solid to gas was so small that its significance was merely anecdotal.65 But Giorgio Israel66 shows that its impact on Italian research was profound and lasting.
Even before the racial laws were promulgated, the zealous minister had initiated a census of Jewish teachers, delivering race determination forms to all school and university authorities. On the eve of the laws’ promulgation, Tevere and Vita universitaria published lists of Jewish university professors and lecturers, demanding that they lose their chairs. Vita universitaria also published a list of school textbooks written by Jewish authors, the use of which was to be prohibited. The Italian-Jewish community numbered less than 50,000, plus another 10,000 foreign Jews who had been living and working in Italy for many years. Nearly 4,000 of them, including professors, members of the armed forces, public- and private-sector office workers, members of the liberal professions, and businessmen were stripped of all civil rights; about 6,000 students were expelled from schools and universities. Some 174 upper-secondary-school teachers were dismissed, along with 99 full professors – about 7% of all full professors in the country. The disciplines most affected were: medicine, 18; mathematics, physics and chemistry, 17; law, 23; the arts and philosophy, 20. Of course, this disproportion (7% of full professors, when Jews accounted for only 0.1% of the Italian population) sparked the customary attempts to account for the Jewish pre-eminence in culture and science. Some described it as the result of a plot; others, as demonstrating that science had taken a turn for the worse and become an expression of the typically Jewish mindset, inclined towards abstraction and formalized theories far removed from intuition. It should be noted, however, that the clash between “Jewish science” and “Aryan science,” a classic theme in German racism, did not have a strong following in Italy, except in a series of articles published in “La difesa della razza” and “La vita italiana”: its main theorists were Julius Evola and Guido Landra.
A rapid perusal of the names of the university professors dismissed gives some idea of the cultural devastation caused by the racial laws.
The Italian school of physics (the “Via Panisperna boys”), renowned the world over for its pioneering research in nuclear physics, was wiped out. Scientists of the caliber of Bruno Rossi (the founder of cosmic-ray theory), Enrico Fermi (whose wife was Jewish), Emilio Segrè, Ugo Fano, and Eugenio Fubini emigrated to the United States. Franco Rasetti, although an “Aryan,” emigrated to Canada rather than remain in a country responsible for such disgraceful behavior. Giulio Racah moved to the Hebrew University of Jerusalem, Leo Pincherle emigrated to England, and Sergio De Benedetti and Bruno Pontecorvo to Paris (the latter ultimately to the USSR).
In mathematics, the quality of the loss was more significant than the actual numbers. The list begins with Vito Volterra, nicknamed “Mr. Italian Science,” and Tullio Levi-Civita, perhaps the greatest Italian mathematician of the time, author of fundamental contributions to the mathematical aspects of relativity theory, and includes Beppo Levi, Guido Fubini (the creator of differential projective geometry), Guido Castelnuovo, and Federigo Enriques (the leaders of the Italian school of algebraic geometry).
Another casualty was the Turin school of biology, founded by the distinguished histologist Giuseppe Levi. In addition to Levi himself, two of his students and future Nobel laureates were lost to Italian science: Salvatore (Salvador Edward) Luria and Rita Levi Montalcini. The biomedical sciences lost Maurizio Mosé Ascoli, Camillo Artom, Mario Camis, Amedeo Herlitzka, and Mario Donati, the greatest Italian surgeon of the day. Chemistry lost two of its main protagonists in the industrial sector: Giacomo Mario Levi and Giorgio Renato Levi. In the field of statistics, mention must be made of Giorgio Mortara – the only researcher in statistics and demography who could match the prestige of Corrado Gini – and Roberto Bachi, who emigrated to Palestine where he later founded the Israeli Central Bureau of Statistics. Perhaps the most ironic loss was of several of the regime’s most authoritative theoreticians in the field of “corporative” law: Gino Arias, Giorgio del Vecchio, and Guido Tedeschi (who emigrated to Palestine); as well as the economists Bruno Foà, Gustavo del Vecchio, and Marco Fano. Other distinguished names included the eminent geographer Roberto Almagià – all the maps hanging in schools and offices had to be replaced, because they bore his name – the literary historian Attilio Momigliano, and the philosopher Rodolfo Mondolfo.67
Today it is not difficult to see the devastating effect of those “small numbers” whose names were Vito Volterra, Enrico Fermi or Rita Levi Montalcini. And it is easy to see, in retrospect, that a university that expels Piero Martinetti and destroys the career of Aldo Capitini is a more hospitable ground, even in terms of academic reproduction,68 for petty clerks of research than for free scholars capable of spreading “the spirit of a rational valuing of one’s own worth and of the calling of each human being to think for himself.”69 Removing one research worker from the university and replacing him with another is numerically irrelevant. But removing Volterra, Fermi,70 Martinetti and Capitini and replacing them with zealous conformists causes scientific, cultural and spiritual damage that is difficult to calculate in the medium and long term. Researchers – people – are not numbers.
Generalizations from experience are fragile. The task of empirically verifying the statement ‘all swans are white’ is endless. To falsify it, however, is very easy: all it takes is to find at least one swan that is not white to nullify the verifying power of thousands or millions of white swans. Even if it were only one, the dark swan could not be dismissed as “anecdotal”. What is true of swans is even more true of researchers: the presence or absence of an Enrico Fermi and a Vito Volterra or, on the other hand, of a Diederik Stapel and a Paolo Macchiarini is not ‘anecdotal’. A research evaluation system that would exclude or marginalize the former and elevate the latter to academic stars should be seriously reconsidered.71
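The logical asymmetry just described can be made concrete in a few lines of Python: verifying the universal claim would require inspecting every swan there is, while a single counterexample settles the matter. The swan data below are, of course, invented for illustration.

```python
# The universal claim "all swans are white" as a predicate over observations.
def claim_holds(swans):
    return all(color == "white" for color in swans)

observed = ["white"] * 1_000_000    # a million confirming observations

print(claim_holds(observed))        # True: the claim survives, but is not proven
observed.append("black")            # one black swan (Cygnus atratus)
print(claim_holds(observed))        # False: a single exception falsifies it
```

No number of white swans turns the first `True` into a proof; one black swan suffices to make the claim false forever.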
Moreover, a naive reliance on experience makes it easier to believe in false generalizations. It does not follow from “almost all mafiosi are Italians” that “almost all Italians are mafiosi”: the same ANVUR agents would feel unfairly discriminated against if they were treated as mafiosi when traveling abroad simply because of their passports.72 What, then, about a state evaluation that inferred from the hypothetical “almost all good researchers publish in top journals” that “almost all those who publish in top journals are good researchers” and rewarded the latter accordingly? Descriptively, such a conclusion is just a logical fallacy; practically, however, it produces obvious injustices.
Treating being Italian as a proxy for being a Mafioso, and therefore discriminating against Italians in the criminal law, is logically equivalent to treating publications in a journal belonging to a certain list as a mark of excellence and discriminating positively in favor of its authors. The unjustified negative or positive discrimination does not depend on the falsity of the premise – it may well be true that almost all mafiosi are Italians and that almost all good researchers publish in the journals included in a certain list – but on a gross, albeit widespread, error of inference. From the fact that some Italians are part of Cosa Nostra, or that some researchers who are considered good publish in certain journals, it does not logically follow that the majority of Italians are mafiosi because they are Italian, or that the majority of those who publish in those journals are excellent because of that mere fact. A similar argument applies to citations: from the fact that some (or even almost all) good researchers are highly cited, it does not follow that almost all those who are highly cited are good.73
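The inference error at issue is a base-rate confusion: a high conditional probability P(B given A) implies nothing about the converse P(A given B) until the sizes of the groups are taken into account. A minimal sketch, with population figures that are purely invented to make the arithmetic visible (they are not statistics about Italy or about any journal):

```python
# Invented figures, chosen only so that "almost all mafiosi are Italians" holds.
italians = 60_000_000
mafiosi = 5_000
italian_mafiosi = 4_900

p_italian_given_mafioso = italian_mafiosi / mafiosi    # 0.98: almost all mafiosi are Italians
p_mafioso_given_italian = italian_mafiosi / italians   # ~0.00008: almost no Italians are mafiosi

print(f"P(Italian | mafioso) = {p_italian_given_mafioso:.2f}")
print(f"P(mafioso | Italian) = {p_mafioso_given_italian:.6f}")

# The same arithmetic applies to journals: a high P(top journal | good researcher)
# is compatible with a modest P(good researcher | top journal) whenever the
# journal also publishes many papers by everyone else.
```

The two conditional probabilities differ by four orders of magnitude, which is exactly the gap the fallacious inference erases.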
As Nicola De Bellis brilliantly argues, “the fact that the best scholars always publish in the same journals is not seen for what it is, i.e., the manifestation of a social dynamic of self-enclosure of academic élites on which commercial publishers can build their fortunes, but as an indicator of intrinsic quality. Almost as if a Platonic-Aristotelian essence of “top-journality” were embodied in these journals, promoting their bibliometric indices as much as their ability to attract the best works.”74 This is not empiricism: it is a form of magical thinking that would be no less so if it were shared by, or attributed to, the scientific community as a whole.75
8. Black swans: a small (and incomplete) experiment in citation analysis
According to Bonaccorsi, “citation is not a rhetorical device: it is a condition of access to scientific communication: a paper that does not correctly cite the authors who have worked on the same topic will not be accepted by scientific journals and will simply not see the light of day”.76 If, for strategic or rhetorical reasons, citations were to be inaccurate or misleading, they would become useless for any bibliometric evaluation of research, although they would still be of some use for the descriptive purposes of sociology of science. We therefore expect Bonaccorsi’s citations to be a model of accuracy: who, if not him, is more interested in adhering strictly to his own theory? Let’s try to learn from him.
Roberto Caso has already dealt with Bonaccorsi’s misleading and instrumental reading of Merton’s sociology of science. We have already shown how La valutazione possibile selectively appealed to the authority of Desrosières, while quietly omitting the very points that brought Desrosières closer to Supiot’s positions. We could charitably77 categorize this practice as part of the game of interpreting and partially reusing the work of others for one’s own purposes. But to what extent is this rather self-serving game of citations – and avoidance of citations – compatible with Bonaccorsi’s allegedly “objective” nature of bibliometrics as a research evaluation tool?
To answer this question, we will analyze how Bonaccorsi uses his art of citation in dealing with (8.1) various sociological accounts of centralized systems of research assessment and the use of bibliometrics; (8.2) the issue of “particularism” in science, which might challenge the idea that scientific communities are structured according to an objective hierarchy that is out there and only needs to be elicited by state evaluators; (8.3) the alleged compatibility between democracy and government by numbers.
8.1. ‘Strong’ research evaluation systems and amateur bibliometrics
Bonaccorsi includes “authors such as Whitley and Gläser” among the “critics of evaluation” since they see it as “a parallel and overlapping principle of authority with respect to the self-regulation of scientific communities because it provides standardized indicators that are used by administrations.”78 The works to which he refers, without citing any of their passages, are two books79 edited by the sociologists of science Richard Whitley and Jochen Gläser, which collect studies of “weak” and “strong” national research evaluation systems. The systems they call “strong” regularly conduct institutionalized and public evaluations according to formalized rules and procedures to produce rankings that influence funding decisions.80
In “strong” systems, evaluation is materially delegated to academic elites who gain a privileged position.81 “Strong” systems can have consequences whose intensity is inversely proportional to the degree of autonomy, including financial autonomy, of universities:
- Researchers become more aware of the need to compete for the recognition of the evaluating élite.
- Because evaluation is centralized, the need to develop criteria of quality and relevance for entire disciplinary fields also centralizes judgments about researchers, universities, and research institutions.
- Centralization and normalization reduce the diversity of research goals and methods, especially those that challenge the dominant orthodoxy.
- The reinforcement of disciplinary norms and goals makes it more difficult to develop new research goals and fields, and makes innovation more dangerous for researchers.
- The normalization, formalization, and publication of quality rankings make the university more stratified – that is, more hierarchical.
When evaluation is in the hands of a single centralized authority that imposes its criteria, normalization and hierarchy are hard to avoid, unless other peripheral variables come into play, as the two sociologists rightly point out.82 Are these really “criticisms,” or are they not rather almost obvious observations? A centralized evaluation system can certainly avoid using predetermined and uniform criteria, either by adopting the practice of evaluating works extemporaneously and idiosyncratically, or by constantly changing the parameters,83 even ex post: in this way, however, it will inevitably enrich the collection of legal horrors edited by Roberto Caso.
How to overcome such dangers? Bonaccorsi says nothing about this, as if it were enough for him that the decisions of the central evaluation agency are or appear to be “rationally” justified,84 and even pointing out some obvious consequences of centralization can be reduced to a criticism unworthy of refutation.
Jochen Gläser is also co-author, with Grit Laudel, of The Social Construction of Bibliometric Evaluations.85 The essay does not express a radical doubt about bibliometrics: it only focuses on its social perceptions and uses, to ask why research evaluation so often applies it in an amateurish version. But according to Bonaccorsi, this is enough to label this approach as “constructivist”.
Citations are only one aspect of research quality.86 When we cite someone else’s work, we do so not arbitrarily, but because we assign it some importance in the economy of our discourse, for reasons that may be either “Mertonian” recognition of others’ credit, or “constructivist” persuasion of readers who may also be bibliometric evaluators. In order to use bibliometrics in a scientific way, we must be aware of some well-known caveats. (1) Its statistics can only be meaningful for large numbers, i.e. for a large amount of citation data, which, a fortiori, should be complete, especially when considering the performance of individual research units. (2) In many areas of the natural sciences, an article reaches its peak in citations three years after publication, which means that the most recent reliable bibliometric data concern articles from three years ago. (3) Publication and citation habits differ between disciplines: citation data can therefore only be compared if they are normalized to field-specific reference values, which in turn are influenced by the way the fields themselves are classified.87
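Caveat (3), field normalization, can be made concrete with a toy calculation: a raw citation count only becomes comparable across disciplines once it is divided by the field’s reference value. The baselines below are invented for illustration and would, in real bibliometrics, themselves depend on how the fields are classified:

```python
# Hypothetical mean citations per paper, per field. The same raw count means
# very different things in a high-citation and a low-citation discipline.
field_baseline = {
    "biomedicine": 40.0,
    "mathematics": 4.0,
}

def normalized_score(citations, field):
    """Field-normalized citation score: raw count / field reference value."""
    return citations / field_baseline[field]

# Two papers with identical raw counts:
print(normalized_score(20, "biomedicine"))   # 0.5 -> below its field's average
print(normalized_score(20, "mathematics"))   # 5.0 -> far above its field's average
```

Twenty citations place one paper below average and the other among its field’s most cited; comparing the raw counts directly would have ranked them as equals.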
For a long time, however, there was only one reference citation database, that of the former Institute for Scientific Information, now renamed Clarivate Analytics. This monopoly, in the hands of a commercial company, has been much criticized for its lack of control, accuracy, completeness and transparency, and for the short time window of its most successful product, the Journal Impact Factor, which is calculated over two years. But the marketing of the ISI succeeded in obscuring these limitations in the eyes of most.88
Meanwhile, the corporatization of university administration had made bibliometrics attractive because it appeared to be more scalable and cheaper than peer review, and gave a greater impression of objectivity because it was based on the aggregation of a much larger number of opinions. Above all, its magic numbers seemed to offer politicians and administrators a way to evaluate research without having to rely on the mediation of researchers. In this situation, it became easy to ignore all the caveats of scientific bibliometrics.89
Moreover, the position of bibliometric science was weak:90 it was fragmented, because the data it works with are private and expensive, to the extent that many analyses that depend on direct access to the raw data of the former ISI cannot be replicated; isolated from the rest of the social sciences; poorly institutionalized; lacking common criteria; and subject to commercial influences, i.e. the temptation to put satisfying the demands of paying customers before scientific rigor. The combination of these weaknesses and conflicts of interest with the managerial need for a quick and seemingly objective system for evaluating research has led to an epidemic of “amateur bibliometrics”, to which research itself has fallen victim.91 In other words, the flaw of do-it-yourself bibliometrics is its very low scientific reliability, not its “standardization”.
According to Jochen Gläser and Grit Laudel, bibliometrics can only become less amateurish and less prone to conflicts of interest if it emancipates itself from its customers and data suppliers and creates a citation database that is public and controllable by everyone. On the other hand, according to Bonaccorsi, the dependence of bibliometric evaluation on proprietary, monopolistic and commercial databases seems to be only a meaningless historical accident, unworthy of further discussion.
A statistical system was built on these conventions, which was taken over by commercial enterprises while waiting for a public initiative comparable to that of the national statistical offices.92
8.2 A citation experiment: Stephen Cole and “particularism” in science
Bonaccorsi’s cavalier attitude toward those who fear that “evaluation contributes to the death of the university by stabilizing power relations by (falsely) legitimizing them from a scientific standpoint and thus producing conformity to academic power”93 depends on his belief that such fears have no empirical basis. Among the American literature he claims to be referring to, the most recent text, written by Merton’s student Stephen Cole, dates from 1992.
Evaluation has been painstakingly studied, especially by examining hiring and promotion procedures in American universities and comparing the weight of strictly scholarly criteria with that of particularistic criteria,94 in particular age, gender, and the prestige of home departments. In an important book summarizing this literature, Cole concludes:
The research which has been conducted at an aggregate level has not produced any conclusive evidence that particularism plays more than a small role in the way in which scientists and scientific work are evaluated.95
Bonaccorsi’s quote from Cole is an excerpt from a larger paragraph that deserves to be quoted in full:
The above represent only a very small unsystematic sample of cases of particularism I have come across. Many readers of this chapter, I am sure, can add cases from their own research or personal experience. Such cases pose an important paradox. The research which has been conducted at an aggregate level has not produced any conclusive evidence that particularism plays more than a small role in the way in which scientists and scientific work are evaluated. Yet at the day-to-day individual level, science, like other institutions, seems to be riddled with particularism.96
Cole’s aggregate analysis certainly suggests that the ways in which scientists and scientific work are evaluated are hardly affected by particularism. Cole adds, however, that such findings are at odds with everyday experience. In the passage above, he mentions a “small unsystematic sample” selected from cases that have come to his attention “in twenty-five years of participatory observation in academic science”:97 biased conclusions from discordant peer reviews, implicit exchanges of favors, academic vendettas, discriminatory hiring – a brilliant male researcher with an abrasive temperament seems more acceptable than a brilliant but similarly abrasive female researcher – and so on. Cole takes experience too seriously to dismiss all these cases as “anecdotal”: instead, he asks why his aggregate analysis has failed to capture situations familiar not only to sociologists of science but to anyone who has had even fleeting contact with an academic career.
The empirical studies that began in the late 1960s were devoted mainly to the U.S. scientific community and considered only a few grounds for discrimination – age, religion, gender, race, rank of Ph.D. department, rank of current department, past receipt of scientific rewards – both because they were easier to handle statistically and because of the interests of the time.98 It was precisely this selection, Cole explains, that led to the neglect of the most important reasons for particularism: “the positive and negative feelings of individuals toward other individuals and the embeddedness of scientists in networks of interpersonal social relations.”99 Contrary to Bonaccorsi’s artificially limited interpretation, Cole concludes that much of the evaluation in science must be classified as particularistic.
In analyzing the difficulty in determining whether to classify a particular evaluation as being based upon particularistic or universalistic criteria, I have concluded that most evaluation in science must be classified as particularistic, but that at least analytically it is possible to distinguish several different bases for particularistic judgments. These bases would include both irrelevant nonscientific statuses and scientific affiliations of the scientist being evaluated as well as personal valences based upon a wide array of determinants, which range from cognitive evaluations of the scientist’s work to the scientist’s personality, political views, or institutional connections with the evaluator.100
If we take the trouble to read Cole’s book rather than just quote from it selectively, it turns out not to be a summary of empirical research that is more or less outdated and of little relevance to a discussion of state evaluation of research:101 it is a serious confrontation with constructivism, resulting in a realist-constructivist theory for which science is partly socially constructed, in the laboratory and in society at large, and partly dependent on empirical facts.102
Cole characterizes constructivism as (1) a challenge to the idea that science is a purely rational activity, (2) a relativistic epistemology that treats solutions to scientific problems as underdetermined, and (3) a reduction of the cognitive content of the natural sciences to the results of social variables and processes.103
Against positivism, Cole recognizes that science is socially influenced: empirical facts do not speak for themselves, but are selected and evaluated under the influence of biases and interests within research communities. However, radical constructivism cannot explain why, among so many ideas and theories, all socially influenced, some gain acceptance in scientific communities and others do not.104 An articulate sociology of science should be able to distinguish, at least analytically, between its cognitive and its social components.105 Otherwise, its result would be reduced to a useless tautology, according to which science, insofar as it is socially determined, is socially determined,106 which would miss the specificity of science as a social institution, as in Hegel’s night in which all cows are black.
In other words, Cole’s data analysis is part of an articulate theoretical reflection.107 In contrast to Bonaccorsi’s cut-and-paste, he does not claim to have empirically verified the absence of particularism in the part of the American academy he has studied. Rather, in the face of conflicting experiences, he questions the effectiveness of his statistical tools and tries to improve them to the point of recognizing the existence of forms of particularism in much of scholarly evaluation, related to interpersonal relationships and social network positions, that might be attenuated in decentralized evaluation systems,108 if only because scholars can always seek their fortunes elsewhere.
On citations, let’s read what the constructivist Bruno Latour writes:
There is something worse, however, than being criticised by other articles; it is being misquoted. If the context of citations is as I have described, then this misfortune must happen quite often! Since each article adapts the former literature to suit its needs, all deformations are fair. A given paper may be cited by others for completely different reasons in a manner far from its own interests. It may be cited without being read, that is perfunctorily; or to support a claim which is exactly the opposite of what its author intended; or for technical details so minute that they escaped their author’s attention; or because of intentions attributed to the authors but not explicitly stated in the text; or for many other reasons. We cannot say that these deformations are unfair and that each paper should be read honestly as it is; these deformations are simply a consequence of what I called the activity of the papers on the literature; they all manage to do the same carving out of the literature to put their claims into as favourable as possible a state. If any of these operations is taken up and accepted by the others as a fact, then that’s it; it is a fact and not a deformation, however much the author may protest.109
Cole quotes this passage to criticize it: Latour describes the rhetorical strategies of scientists, but does not explain what determines the success of some and the failure of others. Rhetoric alone, or something else as well?110 In our own small way, we will also experience this by measuring the fate of a book like La valutazione possibile, which at some not negligible argumentative junctures theorizes in a Mertonian way and cites in a constructivist one.111
8.3 Democracy and numbers
For Bonaccorsi, research evaluation is democratic because it defends the autonomy of science and its procedures are rationally justified.112 However, if democracy were based only on the above principles, it could function without elections: in some areas, citizens’ preferences would be calculated by algorithms based on data, proprietary or not, and in others, some groups of experts, under the control of a board appointed by the government, would make decisions on behalf of citizens. And since the only constraint would be the obligation to provide rational justification, dissenters who were not emotional, dim-witted or dishonest could still use pen and paper to persuade the ‘decision-makers’ to revise their deliberations. But such a regime, in which the kratos would be more visible than the demos, would be better described as an enlightened despotism or a technocracy. Why does Bonaccorsi call it “democracy”?
The mathematician Alessandro Figà Talamanca113 reported how the journal impact factor was used within Italian biomedicine to challenge local academic power and make way for a generation of researchers with a more international vocation. Pitting ‘objective’ numbers against the claims of academic bigwigs might seem a ‘democratic’ move in such a situation. But Bonaccorsi goes beyond this contingent context.
To my mind, this thesis must be overturned and statistics must be given a fundamental role not only in scientific progress but also in the advancement of democracy. The history of social statistics shows how the extension of measurement served to combat the weight of tradition and the old social order, which provided classifications based on traditional forms of knowledge, anchored in unexplained origins.114
His footnotes, which appeal to the authority of voluminous tomes without indicating which of their hundreds of pages support such a thesis, arouse the reader’s curiosity. Among the texts he cites,115 the one with the broadest historical perspective is Trust in Numbers by T.M. Porter.116 Is it a chronicle of the triumph of democracy, enlightened by numbers, over the obscurantism of tradition? If Ravetz’s review of it is to be believed,117 not exactly.
Porter wonders where the prestige of quantitative methods has come from, and refuses to see its cause in the success of modern science, preferring to go backwards and look for its origins in society rather than in science. Without denying their possible validity and usefulness, numbers, graphs and formulae are studied primarily as communication strategies.118
Quantification is generally a way of maintaining distance, of minimizing the need for personal trust and in-depth knowledge.119 Which systems, which sectors, which disciplines have needed and still need this depersonalization, and why? The two most widespread historiographical theses – based on two rival and incompatible preconceptions – produce the usual nights in which all cows are black: the first narrates quantification as the affirmation of increasingly powerful research methods, the second as the ideological imposition of a power system.120
The positive value of the word “objectivity” is rarely called into question. Even if it is considered metaphysical to understand it as conformity to the external object as a thing in itself, it is still valued as disciplinary objectivity or, preferably, as mechanical objectivity. Disciplinary objectivity – think of that of particle physicists – is based on assumptions and procedures that sound esoteric to the uninitiated: positivists and laypeople therefore set greater store by mechanical objectivity, which consists of personal restraint and the following of rules.121 This is also what the rule of law would require: but the very presence of judges, lawyers, jurists, trials, and appeals indicates that legal rules, variously modulated by the most skilled interpreters, cannot be applied mechanically.122 When we are dealing with quantitative social scientists, by contrast, mechanical objectivity looks much more immediate and seems to make the judgment of experts more trustworthy, even if their expertise remains in reality at least partly esoteric and linked to informal knowledge and practices that are difficult to make explicit.
This is why a faith in objectivity tends to be associated with political democracy, or at least with systems in which bureaucratic actors are highly vulnerable to outsiders. The capacity to yield predictions or policy recommendations that seem to be vindicated by subsequent experience doubtless counts in favor of a method or procedure, but quantitative estimates sometimes are given considerable weight even when nobody defends their validity with real conviction. The appeal of numbers is especially compelling to bureaucratic officials who lack the mandate of a popular election, or divine right. Arbitrariness and bias are the most usual grounds upon which such officials are criticized. A decision made by the numbers (or by explicit rules of some other sort) has at least the appearance of being fair and impersonal. Scientific objectivity thus provides an answer to a moral demand for impartiality and fairness. Quantification is a way of making decisions without seeming to decide. Objectivity lends authority to officials who have very little of their own.123
Porter does indeed see a connection between the practice of quantification and contemporary democracy. This connection, however, does not concern its essential character, the self-determination of the sovereign people, but rather its bureaucratic-administrative function, i.e., not surprisingly, what it has in common with technocracy. With one difference: in a democracy, numbers are, at least in principle, a fallback justification for the use of bureaucrats who lack authority and political legitimacy; in a technocracy, on the other hand, they might as well be the main or only source of legitimacy.124
Porter’s book, which puts this general thesis to the test by comparing it with more than one historical case, would also deserve to be read and not just quoted. This time, however, Bonaccorsi is right: if democracy is reduced to a bureaucracy without legitimacy, its connection with numbers – which allows it to make decisions without appearing to do so – is indeed clear. Nor is it surprising that state evaluation, which claims to democratize research, instead strives to bureaucratize it, establishing bibliometrics, medians, thresholds, and parameters that, because they are quantitative, seem objective but in fact say very little about what is being researched and written.
9. For the public use of reason
Of late we can observe distinctly that the German universities in the broad fields of science develop in the direction of the American system. The large institutes of medicine or natural science are ‘state capitalist’ enterprises, which cannot be managed without very considerable funds. Here we encounter the same condition that is found wherever capitalist enterprise comes into operation: the ‘separation of the worker from his means of production.’ The worker, that is, the assistant, is dependent upon the implements that the state puts at his disposal; hence he is just as dependent upon the head of the institute as is the employee in a factory upon the management. For, subjectively and in good faith, the director believes that this institute is ‘his,’ and he manages its affairs. Thus the assistant’s position is often as precarious as is that of any ‘quasi-proletarian’ existence and just as precarious as the position of the assistant in the American university.
In very important respects German university life is being Americanized, as is German life in general. This development, I am convinced, will engulf those disciplines in which the craftsman personally owns the tools, essentially the library, as is still the case to a large extent in my own field. This development corresponds entirely to what happened to the artisan of the past and it is now fully under way.
As with all capitalist and at the same time bureaucratized enterprises, there are indubitable advantages in all this. But the ‘spirit’ that rules in these affairs is different from the historical atmosphere of the German university. An extraordinarily wide gulf, externally and internally, exists between the chief of these large, capitalist, university enterprises and the usual full professor of the old style. This contrast also holds for the inner attitude, a matter that I shall not go into here. Inwardly as well as externally, the old university constitution has become fictitious. What has remained and what has been essentially increased is a factor peculiar to the university career: the question whether or not such a Privatdozent, and still more an assistant, will ever succeed in moving into the position of a full professor or even become the head of an institute. That is simply a hazard. Certainly, chance does not rule alone, but it rules to an unusually high degree. I know of hardly any career on earth where chance plays such a role.125
As early as 1917, Max Weber spoke about us: the state evaluation of research is only the last act in the transformation of the university into a state-capitalist enterprise.126 Bureaucracy and the material and spiritual precarization of researchers – like the transformation of students into clients – are part of this process. Like Marx’s proletarian, the researcher is not master of his or her own means of production, which in the humanities and social sciences is essentially the library, and lets others determine the meaning of his or her work. Weber, who died in 1920, could foresee our alienation, both material and spiritual: today our works can have a value only if they are given away to an oligopoly of publishers and journals whose “excellence”, in Italy, is decreed by an agency appointed by the government.
The randomness of academic careers is one of the arguments used to support the oft-repeated claim that “any evaluation is better than no evaluation at all.”127 With Weber, we could respond that traditional, internal evaluation of research remains preferable to state-based, external evaluation.
Only where parliaments, as in some countries, or monarchs, as in Germany thus far (both work out in the same way), or revolutionary power-holders, as in Germany now, intervene for political reasons in academic selections, can one be certain that convenient mediocrities or strainers will have the opportunities all to themselves.128
We might also recall that the Humboldtian university – the German model whose crisis Weber announced – is also affected by chance, because it is inspired by a principle that distinguishes it from the school.
Moreover, it is a peculiarity of the higher scientific institutions that they always treat science as a problem that has still not been fully resolved and therefore remain constantly engaged in research, whereas the school deals with and teaches only finished and agreed-upon bits of knowledge.129
Frontier research, precisely because it goes beyond what is established and accepted, is subject to a degree of subjectivity and even abuse in its evaluation, which can perhaps be mitigated in decentralized systems, but never eliminated.130 Weber, a professor in the German university, wrote that young people interested in academic careers must be asked: “Do you in all conscience believe that you can stand seeing mediocrity after mediocrity, year after year, climb beyond you, without becoming embittered and inwardly corrupt?”131
Nonetheless, we might eristically conclude that no external evaluation can be more effective and more respectful of research freedom than that exercised within the scientific community, among peers:
One of the ways of controlling scientific judgements is the reputational censorship that can be exercised by a vigilant and open scientific community. A necessary condition for this censorship to be exercised is that there is a clear attribution of responsibility.132
But how can a scientific community be and remain vigilant and open? How can responsibility be clearly assigned? It is not 1858: the university is no longer the preserve of an élite but is open to the masses; the scientific community has become, with a few exceptions, much larger and more disorganized than the one that gathered in the London headquarters of the Linnean Society to listen to the memoirs of Darwin and Wallace; the oligopolists of commercial publishing and citation databases dominate the academic game with their interests in profit and power.133
As Max Weber foresaw and Jerome Ravetz recognizes, the industrialization of research and the transformation of the university into a state-capitalist enterprise have fractured and disempowered the community on which the ethos of science was based. Such a community was made up of researcher-craftsmen who were free to choose what to study without too much economic or competitive pressure, and who were bound together by an informal network of consensual knowledge and norms; moreover, its small size made it possible to build reputations based on personal acquaintance. A comparably solid system of quality control has yet to be found.134 Bonaccorsi himself tried to justify the state evaluation by presenting it as the distillation of the self-evaluation of the same scientific community whose autonomy he had helped to dismantle as a member of the ANVUR board.
If the distributed and informal quality control of the scientific community is replaced by bibliometrics and journal fetishism, fraud becomes easier and more attractive.135 According to Telmo Pievani,136 the causes are structural.
Scientists who cheat are not isolated cases, but the product of degenerative mechanisms that facilitate misconduct and that have become more acute in recent years. These include the excessive pressure to publish; the fierce competition in certain fields; the frenetic pace of scientific paper production (about two million papers published each year); the need to keep the media visibility of one’s results high at all times in order to obtain funding (increasingly turning the communication of science into marketing); the business of paid scientific journals and pirated journals; the sense of impunity resulting from poor controls; the defensive instincts of the scientific communities themselves; the voracious search for citations to boost their bibliometric indices; the allure of some scientific stories that are too good to be true, and so succeed for a while despite their shaky foundations (even the scientific community has its confirmation biases, which sooner or later collapse in the face of observations that refute them).
If the structures are so strong, it is in vain to invoke ethics as a deus ex machina descending from a heaven far removed from the earth and its customs, or to replace it with a state evaluation, to whose agents government appointment would confer a superior morality. What is collapsing is the very system of relationships that produces, as Pievani puts it, “a free community of equals who learn from their mistakes and constantly self-correct”.
As for the attribution of responsibility, the use of bibliometrics to make decisions without seeming to decide137 and anonymous peer review are designed to avoid it. Let’s quote Giorgio Israel again:
The anonymity of the reviewer, on the other hand, is a silly and scandalous idea. Those who have to sign a review, and thus put their reputations on the line, are quite careful about what they write, whereas – and there are many examples of this – an anonymous reviewer can afford the luxury of making hasty, superficial judgments, or even blatantly false statements, with the most diverse of intentions, and without having to pay any price for it. The proliferation of selection procedures using anonymous evaluators, far from guaranteeing the seriousness and objectivity of the judgment – it is argued that the anonymous evaluator would be free to express himself without the restraint dictated by his possible relations of acquaintance or friendship with the evaluated or the fear of retaliation – induces unethical, if not downright improper, behavior. Why is anonymity necessary? A person belonging to the world of research and academia should be able to conform to criteria of “science and conscience” and not be afraid to defend decisions made on that basis. Instead, anonymity risks providing cover for intellectually shallow or ethically incorrect behavior.
Anonymity transforms what could be – and has been138 – a public discussion among scholars into an opaque exercise of power without interaction with the authors. Understanding editorial publication as a mark of scholarly quality and diligence may have made some sense in the age of printing, whose technological and economic limitations forced a pre-selection of texts destined to see the light of day. Now, however, it would be neither costly nor difficult to treat publication as a trivial act, and to postpone peer review to a later conversation open to all.139
In Italy, the main obstacle to this simple revolution is a despotic and retrograde evaluation of research, which does not promote publicness but privatization.
I suggest that all shortcomings in the current publication system are rooted in the fact that it has drifted away from Science ethics, with publication – peer review, evaluation and dissemination – being privatized. A process whose rationale is to be open, transparent, and community-wide has become trapped in editors’ mailboxes. The validity and value of a scientific work are both decided once and for all time, by two or three people in a process that is confidential, private, anonymous, undocumented, and with short deadlines. Here, I use the term “privatization” not to mean that the process is conducted by private companies, but to imply that it is concentrated in a few hands. Whilst some may consider that private publishers charge exorbitant (and unaffordable) prices for their journals, my arguments still stand if the current system was entirely run by public institutions, learned societies or any non-profit organization.
Understood in the sense explained by Michaël Bon when presenting his alternative, privatization is much more than the outsourcing of data and research evaluation processes into the corporate hands of oligopolists animated by commercial interests: it is the dispossession of that publicly accessible and non-ancillary space for discussion that the scientific community had been able to invent and defend around the first modern scientific journal, the Philosophical Transactions of the Royal Society, even if only as a club good.141
To grasp the difference, which is quite obvious, between Renaissance magic and modern science, it is necessary to reflect not only on content and methods, but also on the images of knowledge and the images of the knower. There are certainly many mysteries in our world, and many theorists and practitioners of the arcana imperii live in it. There are also many and often not “honest” dissimulations. Even in the history of science there have been dissimulators. It should be emphasized, however, that after the first scientific revolution, in the scientific literature and in the literature about science, there is and can be no eulogy or positive assessment of dissimulation, in contrast to what has largely happened and is happening in the world of politics. Dissimulation, not making one’s views public, is simply cheating or betraying. Scientists, as a community, can be forced into secrecy, but they must be forced. When such coercion occurs, they protest in various ways or even, as has been the case even in this century, resolutely rebel against it. The possessive in the linguistic expression “Kepler’s laws” does not indicate any property at all: it only serves to perpetuate the memory of a great personality. Secrecy, for science and within science, has become a disvalue.142
In state evaluation, bibliometrics, as the supposed science of science, depends on proprietary data; the authority of the evaluators rests not on the judgment of scientists but on government appointment; the names of the reviewers are secret; and public discussion is a waste of time for those engaged in producing “research products”. In 1957, Merton wrote that outright fraud – such as cheating and falsifying data – is relatively rare in the scientific world because “personal honesty is supported by the public and testable character of science”.143 Would he hold the same view today?
Roberto Caso reports on an article written by Bonaccorsi in collaboration with others, available in open access and offered for open peer review on the F1000 platform. The fact that the names of the reviewers were public made it possible to verify that the authors were employed by ANVUR at the time the article was published, while three of the reviewers were involved in research projects funded by the same agency, and one of them subsequently published a positive review of La valutazione possibile. However, the publicness and openness of the comments on F1000 allowed an independent reader, at his own risk, to point out some weaknesses in the paper.144 If the paper had been published in a subscription journal after a secret peer review, no one would have known that the reviewers and authors belonged to the same institutional and social network; the independent commenter, if he could afford to read the article, would have been able to submit his comments only separately and not necessarily with the same visibility.
Despite these problems, between 2000 and 2010, while the number of scientific articles published increased by 40 percent, there was a tenfold increase in retractions, i.e., the withdrawal of articles already published in closed peer-reviewed journals because they were found ex post to be affected by plagiarism, fabrication or falsification of data, and other misconduct. Is this a sign of good health? According to Ivan Oransky and Adam Marcus, who report these figures, not necessarily:145 a system based on publication fetishism encourages honing the art of cheating and fails to reward those who read texts carefully and discover their weaknesses. Publish or perish turns publication into the agonistic gesture of those who must win at all costs, and ex post criticism into an antagonistic gesture, because the space of public discussion, in which each researcher presents in good faith the best he has managed to achieve and others help him correct his mistakes, has again been privatized and thus suppressed. We must ask if and how it can be reconstructed.
Technically, it would be enough to use the tools we already have.146 The open World Wide Web147 was designed for the research world to allow anyone to share documents with anyone. In Europe, we could even now bypass commercial publishers and use overlay journals, which report on and review texts already deposited in open access institutional and disciplinary repositories by subjecting them to closed or, preferably, open peer review.148 All that now remains in the dark, with an esoteric aura that has nothing to do with public scholarship, would be said in the light.
In particular, putting research online in all its stages would show something that the print-based world has kept hidden: the process, complex and never completed,149 that leads to the – tentative – conviction that a theory has some value.
When science was a type of publishing, it aimed at producing knowledge that was – like a publication – broken off from its source because it was embodied in a physical thing with a life of its own. The new issue of Nature arrives on the desk of the scientist, and she sighs in relief. Her research is out there at last. If, heaven forbid, a truck were to hit her this morning, the knowledge wouldn’t die with her. It now has a life of its own that can be tracked and weighed. But now that science is becoming a network, knowledge is not something that gets pumped out of the system as its product. The hyperlinking of science not only links knowledge back to its sources. It also links knowledge into the human contexts and processes that produced it and that use it, debate it, and make sense of it. The final product of networked science is not knowledge embodied in self-standing publications. Indeed, the final product of science is now neither final nor a product. It is the network itself—the seamless connection of scientists, data, methodologies, hypotheses, theories, facts, speculations, instruments, readings, ambitions, controversies, schools of thought, textbooks, faculties, collaborations, and disagreements that used to struggle to print a relative handful of articles in a relative handful of journals.150
According to David Weinberger, networked science is more like science as experienced by researchers than as reported by the mass media and,151 we might add, by state evaluators who reduce research to “products”. The Royal Society was a forum for discussion, although only an élite participated in its philosophical transactions. The open World Wide Web could be just as much, but under the eyes of everyone. On one condition: that researchers were encouraged to make their work public, and rewarded for sharing texts and data and taking responsibility for showing that science is a problem not yet fully solved, instead of kneeling before a fetishistic celebration of those who privatize science and hide it behind anonymity, usually in the service of the publishing oligopolies or the state.
These are not new ideas: in his essay on the Enlightenment (Ak, VIII, 37), Immanuel Kant distinguished between a public use of reason – made by every human being, regardless of his or her profession, when speaking as a scholar to the society of the citizens of the world – and a private and mechanical use of reason, to which those who act as officials within certain collective organizations are bound. While the private use of reason can be restricted, its public use must be left free: people who speak as scholars must be allowed to speak to everyone and say what they think, as they think it, as they want to say it.
This freedom is culturally and socially important because it can spread “the spirit of a rational valuing of one’s own worth and of the calling of each human being to think for himself” (Ak, VIII, 36). Access to and participation in the debate on problems that have not yet been fully resolved helps those who are constrained by the mechanisms of the state or the corporation to understand that what seems predetermined and unquestionable is not at all so, that they themselves can be more than their obedience to orders, that their subjugation can be overcome, that the world can be changed.152
Certainly, a society that defends the freedom of the public use of reason and accepts to pay scholars who are free and critical thinkers rather than functionaries must accept that efficiency is not its only and main problem. It must therefore be able to recognize that, as a society, it is imperfect and still collectively engaged in research, even in the seemingly technical question of how to disseminate and discuss science. From this point of view, a structurally despotic and retrograde state evaluation does not simply hurt the privileges of a minority, but contributes to the construction of a general model of society that is equally despotic and retrograde. The digital revolution has only made clearer what was already clear in Kant’s time and well known to the authors of the Italian Constitution: freedom of research and culture is not only a freedom for scholars: it is a freedom for everyone, because everyone can enjoy and participate in reason when its use is truly and not rhetorically public.
Bibliography
Anderson, Margo J. 1988–2015. The American Census. New Haven & London: Yale U.P.
Archibugi, Daniele. 2004. “Chi ha paura della bibliometria?” In Partecipare la scienza. Roma: Biblink. http://www.irpps.cnr.it/it/system/files/Partecipare_la_scienza.pdf.
Baccini, Alberto. 2016. “Collaborazionisti o resistenti. L’accademia ai tempi della valutazione della ricerca.” Roars. http://www.roars.it/online/collaborazionisti-o-resistenti-laccademia-ai-tempi-della-valutazione-della-ricerca/.
Biagioli, Mario. 2016. “Watch Out for Cheats in Citation Game.” Nature. http://www.nature.com/news/watch-out-for-cheats-in-citation-game-1.20246.
Bon, Michaël. 2015. “Principles of the Self-Journal of Science: Bringing Ethics and Freedom to Scientific Publishing.” http://www.sjscience.org/article?id=46.
Bonaccorsi, Andrea. 2013. “La valutazione di Bertoldo.” il Mulino, no. 2, marzo-aprile. doi:10.1402/72991.
———. 2015. La valutazione possibile. Teoria e pratica nel mondo della ricerca. Bologna: Il Mulino.
Bourdieu, Pierre. 1988. Homo Academicus. Stanford: Stanford U.P. https://monoskop.org/images/4/4f/Pierre_Bourdieu_Homo_Academicus_1988.pdf.
Brembs, Björn. 2013. “What Ranking Journals Has in Common with Astrology.” RT. A Journal on Research Policy and Evaluation 1 (1). https://riviste.unimi.it/index.php/roars/article/view/3378.
———. 2015. “What Should a Modern Scientific Infrastructure Look Like?” April 27. https://bjoern.brembs.net/2015/04/what-should-a-modern-scientific-infrastructure-look-like/.
Bucci, Enrico. 2017. Cattivi scienziati. Torino: ADD. http://www.scienzainrete.it/contenuto/articolo/roberto-satolli/chi-ha-paura-dei-cattivi-scienziati/marzo-2016-0.
Bulmer, Martin, Kevin Bales, and Kathryn Kish Sklar, eds. 1991. The Social Survey in Historical Perspective, 1880–1940. Cambridge: Cambridge U.P.
Caso, Roberto. 2017. “Una valutazione (della ricerca) dal volto umano: la missione impossibile di Andrea Bonaccorsi.” doi:10.5281/zenodo.375968
Cole, Stephen. 1992. Making Science : Between Nature and Society. Cambridge (Mass.): Harvard U.P.
Croce, Giulio Cesare. 2013. Bertoldo e Bertoldino (col Cacasenno di Adriano Banchieri). Progetto Manuzio. https://www.liberliber.it/online/autori/autori-c/giulio-cesare-croce/bertoldo-e-bertoldino.
De Bellis, Nicola. 2009. Bibliometrics and Citation Analysis. From the Science Citation Index to Cybermetrics. Scarecrow Press.
———. 2017. “Shut up and dance: l’universo morale della bibliometria tra principi universali e banalità del fare.” ESB Forum, February. http://www.riccardoridi.it/esb/fdo2016-debellis.htm.
De Martin, Juan Carlos. 2017. Università Futura. Torino: Codice.
Desrosières, Alain. 2010. La Politique des grands nombres. Histoire de la raison statistique. Paris: La Découverte (1993); English translation: The Politics of Large Numbers: A History of Statistical Reasoning. Cambridge, Mass., and London: Harvard U.P., 1998.
———. 2013. Pour Une sociologie historique de la quantification. Paris: OpenEdition Books. doi:10.4000/books.pressesmines.901.
Di Donato, Francesca. 2006. “Università, scienza e politica nel ’Conflitto delle facoltà’.” Bollettino telematico di filosofia politica. https://btfp.sp.unipi.it/dida/streit/.
Fecher, Benedikt, Sascha Friesike, Isabella Peters, and Gert G. Wagner. 2017. “Rather Than Simply Moving from ‘Paying to Read’ to ‘Paying to Publish’, It’s Time for a European Open Access Platform.” LSE Impact Blog. http://blogs.lse.ac.uk/impactofsocialsciences/2017/04/10/rather-than-simply-moving-from-paying-to-read-to-paying-to-publish-its-time-for-a-european-open-access-platform/.
Figà-Talamanca, Alessandro. 2000. “L’Impact Factor nella valutazione della ricerca e nello sviluppo dell’editoria scientifica.” In SINM 2000 : un modello di sistema informativo nazionale per aree disciplinari. https://www.roars.it/online/limpact-factor-nella-valutazione-della-ricerca-e-nello-sviluppo-delleditoria-scientifica/.
Fiori, Simonetta. 2000. “I professori che dissero ‘no’ al Duce.” La Repubblica. http://storiaxxisecolo.it/antifascismo/antifascismo5.html.
Flaherty, Colleen. 2015. “The Costs of Publish or Perish.” Inside Higher Education. https://www.insidehighered.com/news/2015/10/12/study-suggests-pressure-publish-impedes-innovation.
Giannantoni, Gabriele. 2005. Dialogo socratico e nascita della dialettica nella filosofia di Platone. Edited by Bruno Centrone. Napoli: Bibliopolis.
Giglioli, Pier Paolo. 2012. “Segni dei tempi.” Roars. http://www.roars.it/online/segni-dei-tempi/.
Gillies, Donald. 2012. “Economics and Research Assessment Systems.” Economic Thought Paper Review, 23–47.
Gläser, Jochen. 2007. “The Social Orders of Research Evaluation Systems.” In The Changing Governance of the Sciences: The Advent of Research Evaluation Systems.
Gläser, Jochen, and Grit Laudel. 2007. “The Social Construction of Bibliometric Evaluations.” In The Changing Governance of the Sciences: The Advent of Research Evaluation Systems, 101–23. http://www.laudel.info/wp-content/uploads/2015/12/2007_The-social-construction-of-bibliom-eval.pdf.
Greco, Pietro. 2010. “Il ’Sidereus Nuncius’ e l’origine della comunicazione pubblica della scienza.” Scienza & Filosofia 3. https://www.scienzaefilosofia.com/2018/03/26/il-sidereus-nuncius-e-lorigine-della-comunicazione-pubblica-della-scienza/.
Harnad, Stevan. 2003. “Back to the Oral Tradition Through Skywriting at the Speed of Thought (Ranimer La Tradition Orale Par La Ciélographie à La Vélocité de L’esprit).” In Les Défis de La Publication Sur Le Web: Hyperlectures, Cybertextes et Méta-Editions. https://halshs.archives-ouvertes.fr/sic_00000315/.
Ioannidis, John P. A. 2005. “Why Most Published Research Findings Are False.” PLoS Medicine 2. doi:10.1371/journal.pmed.0020124.
Israel, Giorgio. 2013. “Il fascismo e la scienza.” In Chi sono i nemici della scienza? Some parts translated in Israel, Giorgio. 2004. “Science and the Jewish Question in the Twentieth Century: The Case of Italy and What It Shows.” Aleph 4. http://www.jstor.org/stable/40385735?origin=JSTOR-pdf
Johns, Adrian. 2009. Piracy. Chicago: The University of Chicago Press.
Jozan, Raphaël. 2009. “La politique des grands nombres.” Revue critique d’ecologie politique. http://ecorev.org/spip.php?article786.
Kant, Immanuel. 1784. Beantwortung der Frage: Was ist Aufklärung? http://kant.korpora.org/Band8/033.html.
———. 1796. Zum ewigen Frieden. http://kant.korpora.org/Band8/341.html.
———. 2011. Sette scritti politici liberi. Edited by Maria Chiara Pievatolo. Firenze: Firenze University Press. https://btfp.sp.unipi.it/dida/kant_7.
Latour, Bruno. 1987. Science in Action. How to Follow Scientists and Engineers Through Society. Cambridge, Mass.: Harvard University Press. https://expectationandexpertise.files.wordpress.com/2012/09/b-latour.pdf (only chapter I is visible).
Lee, Frederic S., Xuan Pham, and Gyun Gu. 2012. “The UK Research Assessment Exercise and the Narrowing of UK Economics.” https://mpra.ub.uni-muenchen.de/41842/.
Longo, Giuseppe. 2015. “Le conseguenze della filosofia.” https://www.di.ens.fr/users/longo/files/Le-conseguenze-filosofia.pdf.
———. 2016. “Complessità, scienza e democrazia.” ROARS. http://www.roars.it/online/complessita-scienza-e-democrazia/.
Martin, Ben R. 2017. “What’s Happening to Our Universities?” Prometheus. doi:10.1080/08109028.2016.1222123.
Merton, Robert K. 1941. “Znaniecki’s the Social Role of the Man of Knowledge.”
———. 1942. “The Normative Structure of Science.”
———. 1945. “Paradigm for the Sociology of Knowledge.”
———. 1957. “Priorities in Scientific Discovery.”
———. 1961. “Singletons and Multiples in Science.”
———. 1973. The Sociology of Science. Theoretical and Empirical Investigations. Edited by Norman W. Storer. Chicago and London: The University of Chicago Press.
———. 2000. “On the Garfield Input to the Sociology of Science: A Retrospective Collage.” In The Web of Knowledge. A Festschrift in Honor of Eugene Garfield, edited by Blaise Cronin and Helen Barsky Atkins. Medford, NJ: Information Today.
Merton, Robert K., and Harriet Zuckerman. 1972. “Age, Aging, and Age Structure in Science.”
Mieli, Paolo. 2010. “Il riciclaggio dei docenti: da antisemiti a democratici.” Nuova Rivista Storica. http://www.nuovarivistastorica.it/?p=2014.
Mori, Massimo. 2016. “Recensione a A. Bonaccorsi, La valutazione possibile.” Rivista di Filosofia, no. 1. doi:10.1413/82726.
Musselin, Christine. 2005. Le Marché des universitaires: France, Allemagne, États-Unis. Paris: Presses de la Fondation Nationale des Sciences Politiques. http://spire.sciencespo.fr/hdl:/2441/3cr7jj61bs68cvg9962c1ckaj/resources/aust2005-2.pdf.
Oransky, Ivan, and Adam Marcus. 2016. “Two Cheers for the Retraction Boom.” The New Atlantis. http://www.thenewatlantis.com/publications/two-cheers-for-the-retraction-boom.
Oransky, Ivan. 2022. “Retractions Are Increasing, but Not Enough.” Nature. https://www.nature.com/articles/d41586-022-02071-6.
Osborne, Robin. 2013. “Why Open Access Makes No Sense.” In Debating Open Access, edited by Chris Wickham and Nigel Vincent. The British Academy. https://www.thebritishacademy.ac.uk/publications/debating-open-access/.
Peirce, Charles S. 1877. “The Fixation of Belief.” Popular Science Monthly 12 (November): 1–15. http://www.peirce.org/writings/p107.html.
———. 1878. “How to Make Our Ideas Clear.” Popular Science Monthly 12 (January): 286–302. http://www.peirce.org/writings/p119.html.
Pievani, Telmo. 2016. “I buoni e i cattivi nella scienza.” Micromega. http://ur1.ca/qsn8i.
Pievatolo, Maria Chiara. 2013a. “Web.” http://archiviomarini.sp.unipi.it/567/.
———. 2013b. “Una questione di potere: la discussione scientifica nel Protagora.” Bollettino telematico di filosofia politica, February. https://btfp.sp.unipi.it/it/2013/02/una-questione-di-potere-la-discussione-scientifica-nel-protagora/.
———. 2016. “Anonimo scientifico.” Bollettino telematico di filosofia politica. https://btfp.sp.unipi.it/it/2016/10/ex-oriente-lux/.
Platone. Theaetetus. https://data.perseus.org/citations/urn:cts:greekLit:tlg0059.tlg006.perseus-grc1:142a.
Porter, Theodore M. 1995. Trust in Numbers: The Pursuit of Objectivity in Science and Public Life. Princeton: Princeton U.P. http://www.andreasaltelli.eu/file/repository/Excerpts.pdf.
Poynder, Richard. 2016. “Open and Shut?: The Open Access Interviews: Sir Timothy Gowers, Mathematician.” http://poynder.blogspot.com/2016/04/the-open-access-interviews-sir-timothy.html.
Ravetz, Jerome R. 1971–1996. Scientific Knowledge and Its Social Problems. New Brunswick and London: Transaction Publishers. http://www.andreasaltelli.eu/file/repository/Scientific_Knowledge_and_Its_Social_Problems.pdf.
———. 1997. “Book Review: In Numbers We Trust.” Issues in Science and Technology. http://issues.org/13-2/ravetz/.
———. 2016. “How Should We Treat Science’s Growing Pains?” The Guardian. http://www.theguardian.com/science/political-science/2016/jun/08/how-should-we-treat-sciences-growing-pains.
Roars, Redazione. 2017. “Caso Macchiarini. Il giornalismo indipendente è più efficace della peer review?” Roars. http://www.roars.it/online/caso-macchiarini-il-giornalismo-indipendente-e-piu-efficace-della-peer-review/: see, in English, Leonid Schneider’s blog: https://forbetterscience.com/tag/paolo-macchiarini/.
Rossi, Paolo. 2015. La nascita della scienza moderna in Europa. Roma-Bari: Laterza.
Russo, Lucio. 2008. La cultura componibile. Napoli: Liguori.
“San Francisco Declaration on Research Assessment (DORA).” 2013. http://am.ascb.org/dora.
Savigny, Friedrich Carl von. 1814. Vom Beruf unsrer Zeit für Gesetzgebung und Rechtswissenschaft. Heidelberg. https://www.deutschestextarchiv.de/book/view/savigny_gesetzgebung_1814?p=144.
Savigny, Friedrich Carl, and Anton Friedrich Justus Thibaut. 1831. Of the Vocation of Our Age for Legislation and Jurisprudence. Transl. by Abraham Hayward. https://books.google.at/books?id=JvukCY8Rbz0C&redir_esc=y
Smaldino, Paul E., and Richard McElreath. 2012. “The Natural Selection of Bad Science.” Royal Society Open Science. doi:10.1098/rsos.160384.
Stigler, Stephen M. 1998. Statistics on the Table: The History of Statistical Concepts and Methods. Cambridge Mass.: Harvard University Press. https://web.archive.org/web/20161116115321/http://www.americanscientist.org/bookshelf/pub/statistical-gauntlet/.
Taleb, Nassim N. 2010. The Black Swan: The Impact of the Highly Improbable, Second Edition. Random House.
“The Shackles of Scientific Journals and How to Cast Them Off.” 2017. The Economist. http://www.economist.com/news/leaders/21719480-and-how-cast-them-shackles-scientific-journals.
Von Humboldt, Wilhelm. 1809–10. “Über die innere und äussere Organisation der höheren Wissenschaftlichen Anstalten in Berlin.” http://edoc.hu-berlin.de/docserv/docviews/abstract.php?id=30376. Transl. as On the Internal and External Organization of the Higher Scientific Institutions in Berlin (1810) https://germanhistorydocs.ghi-dc.org/sub_document.cfm?document_id=3642&language=english
Weber, Max. 1919. Wissenschaft Als Beruf. https://de.wikisource.org/wiki/Wissenschaft_als_Beruf. Engl. transl. by H. H. Gerth and C. Wright Mills, “Science as a Vocation,” in Max Weber: Essays in Sociology, pp. 129–156. New York: Oxford University Press, 1946. https://sociology.sas.upenn.edu/sites/default/files/Weber-Science-as-a-Vocation.pdf
Weinberger, David. 2011. Too Big to Know. New York: Basic Books. http://www.nicopitrelli.it/tag/la-stanza-intelligente.
Whitley, Richard. 2007. “The Consequences of Establishing Research Evaluation Systems for Knowledge Production in Different Countries and Scientific Fields.” In The Changing Governance of the Sciences: The Advent of Research Evaluation Systems.
Whitley, Richard, and Jochen Gläser, eds. 2007. The Changing Governance of the Sciences: The Advent of Research Evaluation Systems. Dordrecht: Springer Netherlands.
Whitley, Richard, Jochen Gläser, and Lars Engwall, eds. 2010. Reconfiguring Knowledge Production. Changing Authority Relationships in the Sciences and Their Consequences for Intellectual Innovation. Oxford: Oxford U.P.
- Bonaccorsi 2015.↩
- Knowledge of a paywalled book is likely to depend on the possibly biased open access reviews of its critics.↩
- On this appointment, see Caso 2017, sect. 5.↩
- Bonaccorsi 2015, 108–9.↩
- On Bonaccorsi’s use of Pizzorno, which goes beyond the issues analyzed here, see Mori 2016, 146–47.↩
- Bonaccorsi 2015, 109, italics added. All translations of Bonaccorsi’s book passages are mine.↩
- Merton 1945, 7–11 in Merton 1973.↩
- Bonaccorsi 2015, sect. V certainly reports a conversation, but with the architectural historian Carlo Olmo, who, as vice-president of a group of expert evaluators at ANVUR, was directly or indirectly appointed by ANVUR itself.↩
- Robin Osborne, a classicist and critic of Open Access, commented on the interpretation of publications as ‘products’ rather than parts of a process: “Publishing research is a pedagogical exercise, a way of teaching others, not a way of giving others information which they are expected to handle on the basis of what they have already been taught” (Osborne 2013).↩
- Bonaccorsi 2015 II.5, p. 50.↩
- Just one quote, taken from the last page of his book, as an example (italics are ours). With regard to the humanities and social sciences, “it is therefore necessary to take note of the existence of irreducible differences, without underestimating them. Pluralism and controversy pose a very serious problem of trust in evaluation. First, it is necessary to demonstrate to the scientific communities that one is aware of the seriousness of the problem. Then it is necessary to implement organizational solutions that can mitigate the problem (consultations, pluralistic selection of experts, rotation of experts, short-term assignments, use of a large number of referees, triangulation methods, combination of different evaluation methods)”. It is left to the reader to assign an “agency” to these constructions.↩
- Bonaccorsi 2015 IV.3, p. 89, emphasis added.↩
- Bonaccorsi 2015 IV, n15, n17.↩
- Desrosières 2010, p. 289. For example, the usual equivalence classes that divide the scholarly literature into articles, monographs, reviews, and so on, bring together potentially and actually different objects: some reviews may be richer than an article, some articles may be more complex than a monograph, and so on. Moreover, in times of media revolution, some works – for example, a work like this – may not fit into any of the typologies adopted.↩
- Supiot 2005; Supiot 2015. Gouvernance “is ‘objective’ management according to formal, potentially mechanizable, context-independent rules that formalize methods of optimization. Instead, human law is interpreted, debated first in the agora at the time of approval, then by the government or courts that apply it in their own spheres and give it contextual meaning”, in Longo 2016.↩
- An accessible summary of Desrosières’ book can be found in the review by Jozan 2009.↩
- Desrosières 1998 p. 325, but also Desrosières 2013, sect. V.3. Bonaccorsi does acknowledge theoretically (Bonaccorsi 2015, sect. IV, n17) that for Desrosières statistics are ‘riddled with implicit assumptions’, but he does not mention their specific political problematicity – perhaps to avoid being forced to take Supiot’s critique seriously.↩
- Plato, Theaetetus 172d-173c.↩
- And precisely for this reason, as Supiot (Supiot 2001, 19) notes, the position of the regulatory authorities is particularly problematical.↩
- Bonaccorsi 2015 IV, n15, n17.↩
- Bonaccorsi 2015, sect. I.2.↩
- Bucchi 2004.↩
- On the distinction between sociology of science and sociological theory of knowledge, see Merton 1941, 41-42 now in Merton 1973.↩
- Merton 1942, 268 now in Merton 1973. Against Merton, Bonaccorsi 2015, sect. III instead ventures into polymathy: ‘Evaluation must resolutely enter the epistemic debate of the human and social disciplines in order to listen to and understand the tensions and problems, and to better focus its tools’ (p. 54). As Caso 2017 sect. III remarks, Merton did not write his 1942 essay to empirically demonstrate that government and its agencies could evaluate research. On the contrary, his intention was to show how the scientific community stands up when it comes into conflict with an intrusive state such as the totalitarian state: “Incipient and actual attacks upon the integrity of science have led scientists to recognize their dependence on particular types of social structure” (Merton 1942, 267); “a frontal assault on the autonomy of science was required to convert this sanguine isolationism into realistic participation in the revolutionary conflict of cultures. The joining of the issue has led to a clarification and reaffirmation of the ethos of modern science” (Merton 1942, 268). Scientists respond by reaffirming the principles of their ethos, which has historically been vindicated against the violence of the state and refuses to be constrained by rules written by the government. ↩
- Merton (Merton 2000) did appreciate the Science Citation Index, but as a tool for the sociology of science, which, as we have seen, is for him a descriptive discipline. It is also revealing that Merton recognized Garfield’s merit from an economic rather than a scientific point of view. If sociologists of science – he remarks – had invented bibliometrics as a research tool, they would not have been able to get it funded: Garfield, on the other hand, managed to get funding by presenting it as a tool for bibliographic research in physics, chemistry and biology. Even an earlier text written by Merton with Harriet Zuckerman (Merton and Zuckerman 1972, 508 n28, now in Merton 1973) stated that citation analysis is a useful tool for sociology and the history of science and nothing else. “Merton, who celebrated the potential of citation indexes as a new and long-awaited tool of sociological analysis, played a constant role as mentor and advisor in the perfection of Garfield’s creature, though he never personally worked in the field of citation theory and analysis. He also underlined the rudimental character of citation-based scientometric indicators and insisted on possible phenomena causing the loss of significant data in citation analysis, first of all the “obliteration by incorporation,” namely the emergence of key documents so important for a research field that they end up embedded into the corpus of currently accepted knowledge, being no longer cited in references. ‘To the extent that such obliteration does occur,’ he wrote, ‘explicit citations may not adequately reflect the lineage of scientific work. As intellectual influence becomes deeper, it becomes less readily visible’.” (De Bellis 2009, 57).
- Bonaccorsi exempts the social sciences and humanities from bibliometrics because of their epistemological pluralism. Bonaccorsi 2015, sect. II.2, p. 40: “An important implication for the distinction between the hard sciences and the humanities and social sciences is the following: while for the former the achievement of agreement can be centered on the search for experimental evidence, for the latter the possibility that agreement can only be achieved ‘in a hundred or ten thousand years’ necessarily shifts the terrain of comparison. In the presence of non-convergent judgments, it is unthinkable to subordinate the search for agreement to the discovery of evidence, because no one knows if and when it will be found, and in any case the passage of time would change its nature. The ground shifts to the criteria of method, to the requirements of ‘well done research’.” It is not necessary to embrace postmodern relativism to see the naive positivism of Bonaccorsi’s distinction between the natural sciences and the humanities, which neglects the contribution – not only historical – of philosophy and theology to the cognitive construction of mathematics and physics. See, for example, Longo 2015.↩
- Bonaccorsi 2015, sect. II.5, p. 50 actually says that evaluation defends “the autonomy of science in democratic societies.”↩
- Kant 1784. As an English reference version, we will use Mary Gregor’s translation, which can also be read without a paywall here. ↩
- Bonaccorsi avoids Kant even when he encounters him in his readings. For example, he criticizes Alain Supiot for quoting the esotericist Guénon (Bonaccorsi 2015, sect. I n39) in a footnote to the prologue of Supiot 2005 (p. XI n12, English translation) in support of the not so esoteric idea that we should not consider numbers independently of the qualitative character of what we are counting. However, the keen reader of footnotes misses Supiot’s reference to An Answer to the Question: What Is Enlightenment?, in the immediately following paragraph (pp. XII-XIII, English translation) of the main text. Is even the demand to be treated as free beings deplorably ‘anti-modern’ and ‘anti-scientific’?
Sapere aude! ‘Have courage to use your own reason!’ Kant’s famous precept reminds us of the act of faith on which the Enlightenment rests: faith in the human being as rational being. We believe in the Enlightenment if we believe that the human being is capable of thinking freely. Such an act of faith should not prevent us from examining the conditions under which the human being may become a rational being, but it should prevent us assimilating the human being to an animal or a machine, or professing to explain him or her away through external determinants. Whenever the discipline of the human sciences attempts to imitate the natural sciences, reducing people to objects that can be programmed and explained away, it becomes a mere relic of Western dogma, a pitiful reminder of the decomposition of scientific thought as it busies itself with eliminating the very questions it should be addressing.
- Supiot 2005, sect. V.1.2 describes regulatory authorities as agencies with technocratic rather than democratic legitimacy. Although they are supposed to be independent of the state and private operators, their independence is questionable, both because the state appoints their members and because they are the target of intense private lobbying – which is difficult to control, because they are not politically accountable as they are not elected. For an accessible version, see Supiot 2001, 1.↩
- Suffice it to mention Lysenko in Stalinist Russia, Deutsche Physik in Nazi Germany, and the effects, even in the long term (Mieli 2010), of the Italian racial laws (Israel 2013, 2004).↩
- Kant 1796.↩
- If the criteria for evaluating research are the same as those endorsed by the scientific community, why is it necessary to impose them on researchers? Instead of wielding a sword borrowed from the state and pretending that its bureaucratic power is somehow scientific, it would have been better – since decisions in this area are not so urgent – to heed Jerome Ravetz’s warning (Ravetz 1971–1996, 15, emphasis added): “When we consider the issues in which science is employed, as with risks and environmental policy, we find a situation that is very different from that of the craftsman-scientists of old, who chose their problems and then investigated them under the guidance of the criteria of value and adequacy established by a communal consensus of their peers and mentors. In policy-relevant research, that haven is no more; now we have, typically, ‘facts are uncertain, values in dispute, stakes high, and decisions urgent.’ The fields of science that are employed tend to be low in prestige and in strength, and are frequently what I call ‘immature’ or ‘ineffective.’ Now, here is the problem: if quality control in traditional research depends on such a special community, sheltered from the harsher realities of the world of affairs, and led by persons of ability and commitment, then how is there to be any effective quality control where everything is partisan, contested, and conflicted? This problem is quite serious, for if quality control of science fails and is seen to fail in these policy debates, then by default it will be brute political power that decides.” What Savigny said (Savigny 1814, 134; translated in Savigny and Thibaut 1982, 180) about premature codification applies here too: “When the Jews at Mount Sinai were tired of waiting for the law of God, they framed in their impatience a golden calf, and the genuine tables of the law were broken to pieces thereupon.”↩
- The sword metaphor is not exaggerated: Bonaccorsi himself (Bonaccorsi 2013, 258) used a much more violent comparison, accusing his critics of treating evaluation like Bertoldo’s tree: ‘everyone solemnly declares that they want it, but then they demand to choose the tree down to the last detail, and they never choose it’. Bertoldo, however, hesitated in his choice because the tree he was looking for was the one on which he was to be hanged (Croce 2013, 84–85).↩
- Empirical studies of the effects of the UK’s RAE (now REF) on economics show that the construction of intellectual despotism through research evaluation is not only possible, but actually occurs. See for example Lee, Pham and Gu 2012 and Gillies 2012. For a broader perspective, see also Martin 2017.↩
- See the reports of the first subcommittee of the Constituent Assembly (18 October 1948, 22 October 1948).↩
- Bonaccorsi 2015, sect. II.1, italics added. The possibility of which Bonaccorsi speaks is merely technical, not moral. His response to Supiot is flawed by the fallacy of ignoratio elenchi. According to Supiot (Supiot 2015, p. 285-286 Engl. transl.), it is dangerous to let calculation dominate the legal sphere because “it eliminates any thought for people of flesh and blood. In order to avoid this, a sense of measure must be preserved in every practice of quantification. Law can help maintain or restore this measure, by making it obligatory to observe the adversarial principle in the way numbers are treated and interpreted, whenever the results are to have normative force. However, restoring this sense of measure cannot be achieved without challenging politically the power which the plutocratic ruling classes have won in most countries today. Their motivation is anything but mystical, and their unbridled greed and destructive power make Marx’s critique of capitalism from 150 years ago once again acutely topical.” Bonaccorsi (Bonaccorsi 2015, sect. II.4, p. 47, n16) responds by recalling Balinski and Laraki’s work on the feasibility of aggregating even qualitative judgements into a quantitative format. Supiot is concerned with the moral permissibility of reducing people to numbers, objects rather than subjects of administration; Bonaccorsi only discusses the technical possibility of doing so, missing the point entirely: “The irony of Supiot is intentional: the book discusses at length methods of aggregating votes in political elections, but also in wine competitions, sports competitions or school grades. A commonplace of anti-quantitative polemics is that some things (art, science, rights) cannot be treated as if they were bottles of wine.” Bonaccorsi’s systematic confusion between theoretical and moral judgement is also noted in Mori 2016, 144-45.
Ruth Chang notes that comparability is sufficient to produce an order of values, but in relation to moral, not theoretical, options: “Descartes already taught us that, while one can suspend judgement (read: renounce evaluation) for theoretical objects, it is impossible not to make a decision (i.e. to assign value) in the case of a practical alternative” (Mori 2016, 145).↩
- Bonaccorsi 2015, sect. I.2, p. 15.↩
- Bonaccorsi is aware of the diversity of his critics’ positions, but tries to reduce them to one more convenient straw man: see e.g. Bonaccorsi 2015, sect. I.2 p. 18 on the use of quotations.↩
- Bourdieu 1988, XIII. In general, no critical theory can be fully and consistently relativist without being subject to Plato’s critique of Protagoras in the Theaetetus, 161c.↩
- See the quotation from Merton 1942 above.↩
- In Book I of the Republic, Socrates does not need to deny, in sociological terms, that people can be dominated by the will to prevail in order to refute Thrasymachus: it is enough for him to make the sophist realize that a scientific demonstration cannot use the will to prevail as an argument.↩
- It is worth noting, however, that the first edition of J. Ravetz’s above-mentioned book dates from 1971. Even then, Ravetz feared that the transition from a science limited to a small, idealistic community of craftsmen to an industrial, economically motivated science would undermine its ethical systems of self-control and expose it to corruption and the spread of mediocrity (Ravetz 1971-1996, XI). See also Ravetz 2016.↩
- Bonaccorsi 2015, sect. I.1 p. 13.↩
- For example, the use of pre-publication selection as a proxy for quality is linked to a contingent circumstance, i.e. the economic and technological constraints of print. It is no coincidence that physicists started publishing earlier – via ArXiv – as soon as technology made it possible, and started branding for quality later, via traditional and non-traditional journals.↩
- Biagioli 2016.↩
- On the distortions caused by bibliometric fetishism, there is not only a rich literature (Baccini 2016 topic 5; Baccini et al. 2019, which documented the Italian fulfilment of such an easy prophecy after the publication of the Italian version of this article), but also collective positions such as the 2013 ‘San Francisco Declaration on Research Assessment (DORA)’.↩
- Bonaccorsi 2015, sect. I.2 p. 14. The quotation is from Merton 1957, now in Merton 1973, 307. Bonaccorsi’s Italian version specifies an expression that is generic in Merton’s text: “Other members of the scientific community do what the tormented Darwin will not do for himself. Lyell and Hooker take matters in hand and arrange for that momentous session in which both papers are read at the Linnean Society. And as they put it in their letter prefacing the publication of the joint paper of “Messrs. C. Darwin and A. Wallace,” “in adopting our present course … we have explained to him (Darwin) that we are not solely considering the relative claims to priority of himself and his friend, but the interests of science generally.” Italics added.↩
- Flaherty 2015.↩
- Bonaccorsi 2015, sect. II.2, p. 42, some italics added.↩
- The interpretation proposed here is a generous one: Bonaccorsi actually applies this thesis only to the social sciences and humanities, because he believes that for the other sciences experimental evidence is sufficient (supra, note 26).↩
- Peirce 1878, italics added.↩
- “That whose characters are independent of what anybody may think them to be” (Peirce 1878).↩
- As Mori 2016, 8 also notes, Bonaccorsi’s interpretation of Peirce confuses scientific research on nature with the opinions shared by researchers or, more precisely, by research evaluators, even though they are very different objects: research has as its reference point the confrontation with nature, shared opinion instead an agreement or convention. This confusion, in turn, transforms Peirce’s philosophy into a sociological theory of knowledge, in the sense used by Merton, mentioned above.↩
- Peirce 1877.↩
- Giannantoni 2005, 66, italics added, translation mine. For a broader reflection on this Platonic passage, see Pievatolo 2013b.↩
- The method of discussion is also part of research; it must therefore be freely debatable. It is not by chance, for example, that modern science asserts itself against the medieval tradition of esotericism with a “battle in favour of universal knowledge, comprehensible to all because it can be communicated and constructed by all” (Rossi 2015, 26, translated by me). See also Greco 2010.↩
- Caso 2017, sect. 5.↩
- See, for confirmation, Cass., sez. un., 28 febbraio 2017, n. 5058.↩
- It is worth mentioning in this context the letter of rejection to ANVUR written by Pier Paolo Giglioli (Giglioli 2012).↩
- Bonaccorsi 2015, sect. IV p. 85.↩
- The metaphor in the title of this section is taken from a passage by I. Kant, On the common saying: That may be correct in theory, but it is of no use in practice, AK VIII, 277, which criticizes the presumption of believing “in a wisdom that can see farther and more clearly with its dim moles’ eyes fixed on experience than with the eyes belonging to a being that was made to stand erect and look at the heavens.” Transl. by Mary Gregor.↩
- We have already discussed in a previous footnote one argument – the ignoratio elenchi with which Bonaccorsi responds to Supiot’s criticism – contained in Bonaccorsi 2015, sect. IV.↩
- Fiori 2000.↩
- The Bonaccorsi of 2015 (Bonaccorsi 2015, sect. IV.6) writes something similar about the ‘distorting effects’ of bibliometrics in research evaluation, attempting to deny, on the basis of ‘empirical evidence’, that their effect is anything more than particular and temporary: “Critics of evaluation have listed them with punctiliousness: salami slicing, artificially increasing the number of co-authors, cross-citations and cliques, manipulation of the impact factor, honorary authorship, imposition of citations by referees of journals or even by editors, emergence of predatory journals, conformism, increased incentives for scientific fraud, disincentives for interdisciplinary research. Each of these distorting effects is documented with reference to one or a few cases, with anecdotal evidence that is then repeated within a specific literary genre in order to create an overall impression of the danger of evaluation and to raise a cry of alarm”. However, even in 2025, the literature on such issues was extensive and far from anecdotal: here, to give just a few examples, in addition to M. Biagioli and J. Ravetz, Ioannidis 2005 and Smaldino and McElreath 2012, as well as what is discussed in Pievatolo 2016; among the doomsayers there was Richard Horton himself, editor of The Lancet.↩
- Israel 2013.↩
- Israel 2013 (partially translated in Israel 2004, 242-244). See Giorgio Israel’s blog for an account, also autobiographical, of the long and bewildering academic career of the racist scientist Sabato Visco: https://gisrael.blogspot.it/2005/12/un-bel-tacer-non-fu-mai-scritto.html.↩
- The power of those who achieve positions of academic prestige is, it is true, only personal (Bonaccorsi 2015, sect. IV.1), but its effects are substantial in determining funding and, above all, careers. And it can be further enhanced if it is reinforced by state evaluation (see note 35 on the British experience).↩
- Kant 1784 Ak VIII 36.↩
- Even if (with Merton 1961, now in Merton 1973, 343-70) we were to treat scientific discovery as the fruit of institutional science rather than of a solitary genius, we would have to recognize that the removal of a brilliant researcher from a country’s university causes systemic damage to that country. The fascist regime did not prevent Enrico Fermi from making his discoveries: it forced him to make them elsewhere, to the benefit of others and to his own detriment.↩
- Although this article has conceded, for the sake of argument, that such events are ‘anecdotal’, the issue has recently even landed on the pages of The Economist (“The Shackles of Scientific Journals and How to Cast Them Off” 2017).↩
- On this way of thinking see Taleb 2010, 76 ff.↩
- Even Bonaccorsi believes in this magic (Bonaccorsi 2015, sect. IV.2, p. 88), inferring from “whatever measure one takes of the eminence of an individual scientist or of a journal or institution, the number of citations shows a strong correlation with this outcome” the possibility of using the number of citations for individual evaluation. If almost all good researchers are highly cited, then almost all highly cited researchers are good (and almost all Italians are Mafiosi). See also Brembs 2013.↩
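The logical error in the preceding note is the base-rate fallacy: P(highly cited | good) can be close to 1 while P(good | highly cited) remains low, because good researchers are a small minority of the whole population. A minimal sketch makes the asymmetry concrete; all proportions below are invented purely for illustration and are not data from this article or from Bonaccorsi:

```python
# Illustrative only: every number here is an assumption chosen for the example.
# The point: "almost all good researchers are highly cited" does NOT imply
# "almost all highly cited researchers are good" (base-rate fallacy).

p_good = 0.10               # assumed share of genuinely excellent researchers
p_cited_given_good = 0.95   # assumed share of excellent researchers who are highly cited
p_cited_given_rest = 0.20   # assumed share of the others who are nonetheless highly cited

# Bayes' theorem: P(good | cited) = P(cited | good) * P(good) / P(cited)
p_cited = p_cited_given_good * p_good + p_cited_given_rest * (1 - p_good)
p_good_given_cited = p_cited_given_good * p_good / p_cited

print(f"P(cited | good)  = {p_cited_given_good:.2f}")
print(f"P(good  | cited) = {p_good_given_cited:.2f}")
```

With these invented figures, 95% of good researchers are highly cited, yet only about a third of highly cited researchers are good: exactly the gap that makes citation counts unreliable for individual evaluation.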
- De Bellis 2017, trans. mine.↩
- Bonaccorsi 2015, sect. IV.3 justifies evaluation based on top journals and bibliometrics by claiming that this is what the scientific community wants, despite the now numerous and organised dissenters, such as the signatories of the cited 2013 “San Francisco Declaration on Research Assessment (DORA)” or of the COARA coalition. This is like justifying discrimination against Italians as mafia members by claiming that Italians – all of them – accept and legitimize the Mafia.↩
- Bonaccorsi 2015, sect. I.2, p. 14.↩
- Less charitably, however, this could also appear to be a use of citation as a ‘weapon of persuasion, a tool for asserting one’s position in the scientific field and for increasing one’s symbolic capital’, as mentioned in Bonaccorsi 2015, sect. I.2, p. 18.↩
- Bonaccorsi 2015, sect. I.2, p. 17.↩
- Whitley and Gläser 2007 and Whitley, Gläser, and Engwall 2010.↩
- According to Whitley, 2007, pp. 9–10:
Strong RES, in contrast, institutionalise public assessments of the quality of the research conducted in individual departments and universities by scientific elites on a regular basis according to highly formalised rules and procedures. These assessments are usually ranked on a standard scale and published so that the relative standing of universities and departments can be readily ascertained. In most cases, they are organised around existing disciplines and scientific boundaries. Such peer-review-based evaluations directly affect funding decisions, often on a significant proportion of research organisations’ income, and so can have a considerable impact on the management of universities and similar organisations.
The impact of developing and implementing research evaluations on knowledge-production is likely to be especially noticeable when these are relatively ‘strong’ in this sense. Five major consequences of institutionalising such systems can be summarised in the following terms.
Firstly, by focusing attention on evaluations of the outputs of their work, researchers are likely to become more aware of the need to compete with others to gain recognition from scientific elites and coordinate their projects with those of others. This means that they will seek to contribute to the collective goals of their field as understood by current elites and so research in general should become more integrated around these goals as evaluation systems become more influential.
Secondly, as evaluators in these peer-review-based RES are forced to judge the relative merits of research outputs, they will develop and apply standard criteria of quality and intellectual significance for the field as a whole, thereby centralising judgements across individual researchers, universities and other research organisations. As they continue to do this on a regular basis, these standards and goals will become institutionalised as dominant in the field, and so the level of strategic task uncertainty, i.e. the degree of uncertainty about the intellectual importance of particular research strategies and outputs for collective goals (Whitley 2000: 123-124), should decline.
Thirdly, this centralisation and standardisation of research goals and evaluation criteria throughout scientific fields means that the diversity of intellectual goals and approaches within sciences should decline over time, especially where they challenge current orthodoxies. As evaluations become more important for both researchers and their employers, the costs of pursuing deviant strategies increase, and pressures to demonstrate how one’s work contributes to dominant disciplinary goals will grow. These are especially strong for junior researchers who need to show the merits of their research as assessed by current disciplinary priorities and standards in order to gain employment and promotion.
Fourthly, such reinforcement of disciplinary standards and objectives is likely to inhibit the development of new fields and goals that transcend current intellectual and organisational boundaries by increasing the risks of investing in research projects that do not fit within them. Increasing competition for reputations and resources based on them, resulting from strong evaluation systems heighten the risks of moving into novel areas and adopting techniques and frameworks from other fields. Intellectual innovations will therefore tend to be focused on current sciences and their concerns. Radical intellectual and organisational innovation is thus less likely in societies that have strong, institutionalised research evaluation systems because these reinforce conservative tendencies in determining intellectual quality and significance.
Finally, the standardisation, formalisation and publication of quality rankings intensify the stratification of individual researchers, research teams and employer organisations. By regularly conducting and publicising such judgements, strong evaluation systems heighten awareness of one’s relative position in the science system and encourage both individual and organisational strategies to enhance them.
This stimulates the scientific labour market and, over time, is likely to concentrate resources and the most valued skills in elite universities, as the UK RAE seems to have done. However, such effects will differ between academic systems organised in different ways as well as between different kinds of scientific fields.
- Gläser, 2007, 247: “by inviting the élite to evaluate research, and by limiting eligibility for some of the funding to the élite, science policy-makers construct it as a visible group that is separated from its community by special treatment and is given new powers within their fields. As Whitley has argued in his introductory chapter, neither effect is straightforward or deterministic.”↩
- According to Whitley, 2007, 19: “In general, then, many of the expected consequences of implementing strong RES should be considerably reduced in public science systems that combine considerable diversity of research funding agencies and foundations with high levels of university autonomy and strategic capacity based on control of their own resources. Especially for élite research organisations in societies that accord them considerable social prestige and independent access to finance, the largely intellectually conservative implications of adopting strong evaluation systems may be restricted. Conversely, where research funding is available from only one or two state agencies and private funds are in short supply, and elite universities have limited social and political support, these consequences can be expected to be quite marked.”↩
- This was the far-sighted advice of Archibugi, 2004, 47: “Since the evaluated group adapts easily (facta lex, inventa fraus the most skeptical would say), it is not unwise to change the criteria of evaluation frequently”.↩
- Bonaccorsi 2015, sect. II.5, p.50.↩
- Gläser and Laudel, 2007, to which we refer for the literature on bibliometrics on which the authors based their thesis.↩
- Gläser and Laudel, 2007, 103.↩
- Gläser and Laudel, 2007, 104–5.↩
- Gläser and Laudel, 2007, 105–8. The monopoly is now an oligopoly because it has been joined by another commercial database, Elsevier’s Scopus, which is also proprietary and has a conflict of interest since Elsevier is also the largest oligopolist in scholarly publishing.↩
- Gläser and Laudel, 2007, 108–12. Read in particular the interview with the Australian historian, p. 111.↩
- Gläser and Laudel, 2007, 112–16.↩
- Gläser and Laudel, 2007, 116–18. The literature cited by Gläser and Laudel is mostly closed access. For an open access paper illustrating these and other problems, see De Bellis 2017.↩
- Bonaccorsi 2015, sect. IV.3, p. 89. Just as a matter of historical curiosity, see who the initiators of the very recent Initiative for Open Citations are: https://i4oc.org/#founders.↩
- Bonaccorsi 2015, sect. IV.I, p. 86.↩
- In the language of Mertonian sociology of science (Merton 1942, 270-73), “particularism” is the influence of biases and subjective criteria on the evaluation of theoretical claims and decisions about scientific careers.↩
- Bonaccorsi 2015, sect. IV.1, p. 86.↩
- Cole 1992, 175. The part of the text quoted by Bonaccorsi is indicated in italics (added).↩
- Cole 1992, 172.↩
- Cole 1992, 160–61.↩
- Cole 1992, 176: “we have failed to study how other variables, such as personality attributes, have influenced the evaluation process. But, most important, it seems likely that the type of analysis we have conducted has failed to study the most significant basis for particularism: the positive and negative feelings of individuals toward other individuals and the location of scientists in networks of interpersonal social relations. If a scientist votes to grant a reward to a particular individual because that scientist likes the recipient or, conversely, votes to deny a reward to a particular individual because that scientist dislikes the applicant, this action would seem to be an example of particularism at work! But if the sum of likes and dislikes is not correlated with any of the independent variables included in our studies of the stratification process (both statuses external to science and affiliations internal to science), particularism can be rampant but not show up in any of the statistical analyses.”↩
- Cole 1992, 190, italics added.↩
- The data analyzed in the book are from the U.S. and refer to decentralized evaluation processes based on a kind of consent on the part of those being evaluated: those applying for NSF funding or to be hired in a department do so without coercion and only if they have some degree of trust in their evaluators. Cole was aware of this in 1992 and appreciated it, albeit hypothetically: “Decentralized evaluation systems would seem to reduce the consequences of particularism. If this is true, it might mean that societies that have more highly centralized evaluation systems in science would suffer more negative effects, unless other mechanisms reduced the operation of particularism at individual levels. This proposition merits further investigation” (Cole 1992, 205).↩
- Cole 1992, X.↩
- Cole 1992, 35.↩
- Cole 1992, 38.↩
- Cole 1992, 63: “The social constructivists fail to make a distinction between social and cognitive influences on the doing of science, and some go so far as to say that this distinction makes no sense (Latour, 1987). The basis of their argument is that science is a communal activity, one which cannot be meaningfully practiced in isolation from others. The fate of any scientific work is in the hands of other people, and work by one scientist can affect the career and work of others. For a scientific production to become a fact requires the recruitment of many allies inside and outside the scientific community. The outside allies, such as funding agencies, are necessary to make the scientific work possible in the first place. Once the work is produced, inside allies, including other scientists who will support the new idea, are necessary for the work to be accepted as a fact. In trying to recruit allies scientists interact with other people both directly and, through the literature, indirectly. This interaction is characterized by the same type of social processes which characterize interaction in other realms of social activity. Therefore, science is just as social as any other activity. There are few, if any, sociologists who would choose to disagree with this position. And in the case of science, given that the social elements have long been ignored owing to the positivistic bias of those who have studied and written about it, to point out that science is inherently social is an important and useful contribution. But by rejecting even the analytic distinction between social and nonsocial (cognitive) influences on the doing of science, the constructivists impede our ability to specify how sociological variables influence which aspects of science in what ways.”↩
- Cole 1992, 187.↩
- Think, for example, of the distinction, born out of the confrontation with Bruno Latour, between core science and frontier science; the former, which ends up in the textbooks, is science consolidated by the elaboration and validation of the scientific community, while the latter, which is innovative and still under discussion, is thus so uncertain and unpredictable that its funding is largely a matter of luck (Cole 1992, 83 ff.). The critics of government evaluation and its normalizing effect (Longo 2016) can still find support in some of Cole’s findings: the first victim of normalization is indeed frontier research (Flaherty 2015). See also Russo 2008, 20-22.↩
- Cole 1992, 205.↩
- Latour 1987, 39–40.↩
- Cole 1992, 45.↩
- Another example: “In Homo academicus, Bourdieu had examined the criteria for selecting professors in French institutions and concluded that they were non-comparable, idiosyncratic rules that could only be explained by academic struggles within the field. The French sociologist Christine Musselin revisits the issue and argues that Bourdieu greatly overestimates extra-scientific factors: “institutional placement explains no more than 15 percent of the variability in selection criteria” (Bonaccorsi 2015, sect. IV.1, pp. 86-87). Musselin allegedly published this percentage in Le Marché des universitaires: France, Allemagne, États-Unis (2005). It is a closed-access book of just under 300 pages that Bonaccorsi cites as a reference, without indicating where such a percentage would be revealed, to criticize an author who is quoted only once (p. 71). But we could not find it. In general, Musselin examines recruitment procedures (pp. 10-11), using publicly available data and interviews with members of selection committees for mathematics and history in 22 French, German and US departments (p. 18), in an inclusive rather than normative approach. This pre-selection makes it difficult to generalize the results to other disciplines. However, in the course of the research, the disciplinary variable does not prove to be decisive (p. 20). Musselin groups together Bourdieu’s positions and those of the Mertonian school because they consider academic labour supply only “endogenous”, i.e. internal to the scientific community, and neglect its “exogenous” aspects, i.e. its relations to the economics and politics of research: “Luttes pour le pouvoir, champs de forces, controverses scientifiques, développement de la connaissance: dans tous les cas, ce sont des processus internes à la communauté universitaire, ou engagés par elle, qui prévalent. Le développement de l’offre résulte de mécanismes endogènes.
Sur ce point, les sociologues des sciences d’inspiration mertonienne et les tenants du ‘programme fort’ se rejoignent au-delà de toutes les divergences qui les opposent” (p. 71). Even in the formation of the judgements of the recruitment commissioners, Musselin distinguishes “le jugement comme il doit être” from the object of her research, which is “le jugement comme il se fait”. Although their conclusions are discordant, Mertonians and “particularists” normatively use the same unit of measurement, namely the ideal of pure science (pp. 140-144): for her, they therefore belong to the same category, since they do not empirically study how decisions are made. According to her empirical study, the decision-making process is procedurally uniform, a kind of “bricolage cognitif” (p. 175) which, after discarding clearly irrelevant candidates, takes into account – when selecting within a short list – scientific aspects, but also other more contingent factors, from the candidate’s ability to teach to his personality and even his behavior at dinner (p. 176), with a general tendency to favor scholars more similar to the commissioners themselves (pp. 176-178). But the unpredictability of each individual outcome does not imply the unpredictability of the process (p. 145): winning or losing any particular roll of the dice is certainly random, but a large number of rolls tends to conform to the empirical law of chance. In this sense, then, the judgments are “ni scientifiques ni aléatoires” (p. 288).↩
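The dice analogy behind Musselin’s “ni scientifiques ni aléatoires” is the law of large numbers: each throw is unpredictable, yet the aggregate frequency is not. A short sketch (a pure illustration of the statistical point, not part of Musselin’s study; the seed and roll counts are arbitrary):

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

def share_of_sixes(n_rolls: int) -> float:
    """Fraction of sixes obtained in n_rolls throws of a fair die."""
    return sum(random.randint(1, 6) == 6 for _ in range(n_rolls)) / n_rolls

# A single roll is pure chance; a large number of rolls is not:
for n in (10, 1_000, 100_000):
    print(n, round(share_of_sixes(n), 3))
# As n grows, the observed frequency approaches 1/6 ≈ 0.167,
# even though no individual roll is any less random.
```

In the same way, no single recruitment outcome is predictable, while the overall pattern of decisions remains statistically regular.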
- Bonaccorsi 2015, sect. II.5.↩
- Figà-Talamanca 2000, sect. 10.↩
- Bonaccorsi 2015, sect. IV.3, p. 89.↩
- Reading Stigler’s 1998 collection of essays is certainly entertaining, especially for those who, like Bonaccorsi, claim to be followers of Merton, but its very structure does not provide the organic historical perspective we would need. The social history of censuses in the US, with all its political complexities (Anderson 1988-2015), is certainly interesting, but too specific to support such a broad thesis – not least because censuses are a much older practice than contemporary liberal democracies. The 1991 edited volume by Bulmer, Bales and Kish Sklar, which focuses on the English-speaking world, engages in analyses that are too specific to establish a general historical correlation, although it does begin by recalling that the origins of social censuses can be found in William the Conqueror’s Domesday Book (p. 5) and describes their use by all sorts of social reformers.↩
- Porter 1995.↩
- Ravetz 1997. Ravetz’s interest focuses on the last chapter of Porter’s book, Is Science Made by Communities?, which is consistent with his above-mentioned thesis. The autonomy of the scientific community, i.e. its capacity for self-correction, presupposes a strong Gemeinschaft, such as that of the Victorian geologists involved in the Great Devonian Controversy or the high-energy physicists. Such communities share an ethos, a practice of discussion, and a set of informal ties that make their publications merely notarial records: their relationship to the actual negotiation, internal to the community, is comparable to the relationship of final press releases to the diplomatic negotiations that preceded them. On the other hand, mechanical objectivity – according to rigid rules in the drafting of articles, data analysis and theoretical formulations – characterises the current scientific Gesellschaft, which is weak, contradictory and very exposed to external criticism.↩
- Porter 1995, VIII.↩
- Porter 1995, IX.↩
- Porter 1995, 6.↩
- Porter 1995, 3–4.↩
- Porter 1995, 4–5.↩
- Porter 1995, 8. Italics added.↩
- Porter, citing Homo Sovieticus by Aleksandr Zinoviev, notes that the system of the former Soviet Union and the management of any large Western corporation have in common quantification, planning, and interventionism (Porter 1995, 43).↩
- Weber 1919, 5–6 (transl. pp. 2-3). Weber’s lecture, published in 1919, was delivered in 1917. In his text, the “German” model is the Humboldtian model; the “American” model is the capitalist-bureaucratic model.↩
- Here’s how the Italian university is described in a book published a hundred years later: “A university enslaved to a centralized bureaucracy – what some ironically call a ‘vetero-communist’ one – of evaluation and control. A university reduced to a handful of indicators in order to artificially create a market of universities competing for scarce resources (students, state funding, money for research projects). A university whose professors are under pressure to publish as much as possible in ever narrower areas of knowledge, further reducing the university’s ability to address the great issues of our time, which instead require a broad perspective. A university whose students pay higher and higher fees because the university experience is increasingly seen as an investment with private returns rather than a public good. A university that struggles to gain a few places in rankings that are managed, often opaquely and with highly questionable methodologies, by commercial entities.” (De Martin 2017, 62–63).↩
- Archibugi, 2004, 45–46. This thesis, applied to medicine, would force us to hold in the highest esteem the physician who, in the absence of antibiotics, treats pneumonia with bloodletting, because “any treatment is better than no treatment at all”.↩
- Weber 1919, 7 (transl. p. 24).↩
- von Humboldt 1809–10, 230.↩
- See supra.↩
- Weber 1919, 9 (trans. p. 4).↩
- Figà-Talamanca 2000.↩
- See Caso 2017, sect. III and IV.↩
- Supra. “In the past it was much more difficult to copy and take undue credit: the number of scholars in a given scientific discipline was so small that almost all of them knew each other personally, and a researcher was valued because his ideas and discoveries were known to the entire community, since they were first presented in the academies and then discussed through active correspondence in letters. Publication was the last means of making one’s findings […]” (Bucci 2017, 21, trans. mine).↩
- And no, it is not “anecdotal”: “Macchiarini’s is not the story of a single scientist, but of his collaborators and co-authors, of the referees who commented on the articles, of the editors who published them, of the journals that rejected letters from other scientists raising doubts about Macchiarini’s practices, and of the funders who financed his research precisely on the basis of publication success. From this point of view, the Macchiarini affair is an exemplary story of a dramatic systemic failure of the ‘scientific community’, whatever the meaning of this expression may be” (Roars 2017).↩
- Pievani 2016.↩
- See Porter’s citation above.↩
- The peer review of the first modern scientific journal, the Philosophical Transactions of the Royal Society, was not anonymous: memoirs submitted to the Royal Society were carefully read by two of its members, whose report was the beginning of a public discussion: “perusal gave rise to conversation; conversation inspired experiments; experiments led to reports and correspondence; and publication then restarted the cycle. Quite simply, this was how the experimental philosophy worked” (Johns 2009, 63).↩
- See for instance Poynder 2016: “I like to do the following thought experiment. Imagine (admittedly rather implausibly) that the internet had come into existence before people started doing research in mathematics in any great volume. People would have posted their mathematical findings online, and after a while would probably have found that there was some need to organize the literature. But nobody would have thought of using the print journal, or anything like it, for that purpose.” See also Caso 2017, sect. VI. It is no coincidence that the network has been described by one of the leading advocates of open access as a medium that allows not only the sharing of writings but also conversations: (Harnad 2003).↩
- Bon 2015, italics added.↩
- See the note about its processes. ↩
- Rossi 2015, 27, transl. mine.↩
- Merton 1957, now in Merton 1973, 311.↩
- Caso 2017, sect. VI. ↩
- Oransky and Marcus 2016. For an update see Oransky 2022.↩
- See, however, also Brembs 2015.↩
- The Web is not to be confused with proprietary social media such as Facebook and Academia.edu.↩
- Fecher et al. 2017.↩
- “In science, each of us knows that what he has accomplished will be antiquated in ten, twenty, fifty years. That is the fate to which science is subjected; it is the very meaning of scientific work, to which it is devoted in a quite specific sense, as compared with other spheres of culture for which in general the same holds. Every scientific ‘fulfilment’ raises new ‘questions’; it asks to be ‘surpassed’ and outdated. Whoever wishes to serve science has to resign himself to this fact.” ( Weber 1919, 14; transl. p. 6).↩
- Weinberger 2011, 106.↩
- Weinberger 2011, 106.↩
- A development of these ideas can be found in Kant’s last published work, The Conflict of Faculties, which deals with the university and in particular with the role of critical studies as opposed to what would today be called “professionalizing” studies. See for example Francesca Di Donato 2006.↩