Sfakianakis Alexandros
Otorhinolaryngologist (ENT)
Anapafseos 5, Agios Nikolaos
Crete 72100
00302841026182
00306932607174
alsfakia@gmail.com


Sunday, May 5, 2019

Synthese

Some fallibilist knowledge: Questioning knowledge-attributions and open knowledge

Abstract

We may usefully distinguish between one's having fallible knowledge and having a fallibilist stance on some of one's knowledge. A fallibilist stance could include a concessive knowledge-attribution (CKA). But it might also include a questioning knowledge-attribution (QKA). Attending to the idea of a QKA leads to a distinction between what we may call closed knowledge that p and open knowledge that p. All of this moves us beyond Elgin's classic tale of the epistemic capacities of Holmes and of Watson, and towards a way of resolving Kripke's puzzle about dogmatism and knowing.



What we cannot learn from analogue experiments

Abstract

Analogue experiments have attracted interest for their potential to shed light on inaccessible domains. For instance, 'dumb holes' in fluids and Bose–Einstein condensates, as analogues of black holes, have been promoted as means of confirming the existence of Hawking radiation in real black holes. We compare analogue experiments with other cases of experiment and simulation in physics. We argue—contra recent claims in the philosophical literature—that analogue experiments are not capable of confirming the existence of particular phenomena in inaccessible target systems. As they must assume the physical adequacy of the modelling framework used to describe the inaccessible target system, arguments to the conclusion that analogue experiments can yield confirmation for phenomena in those target systems, such as Hawking radiation in black holes, beg the question.
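
For background not contained in the abstract itself: the prediction at issue is Hawking's result that a black hole of mass M radiates thermally at the temperature

\[
T_H = \frac{\hbar c^{3}}{8\pi G k_B M},
\]

together with Unruh's observation that the same derivation can be run for a sonic horizon in a moving fluid, with the gradient of the flow velocity where it reaches the local sound speed playing the role of the black hole's surface gravity. The paper's claim is that reading results about the acoustic analogue as confirmation of astrophysical Hawking radiation presupposes the adequacy of the very modelling framework that is in question.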



Error possibility, contextualism, and bias

Abstract

A central theoretical motivation for epistemic contextualism is that it can explain something that invariantism cannot. Specifically, contextualism claims that judgments about "knowledge" are sensitive to the salience of error possibilities, and that this is explained by the fact that making such possibilities salient shifts the evidential standard required to truthfully say that someone "knows" something. This paper presents evidence that undermines this theoretical motivation for epistemic contextualism. Specifically, it demonstrates that while error salience does sometimes affect "knowledge" judgments as contextualism predicts, it does so in ways that are consistent with invariantism and that do not require positing any additional contextualist semantics. These results advance our understanding of the pathways by which error possibilities affect "knowledge" judgments, answer a major challenge to invariantism, and suggest several methodological improvements for the study of knowledge attribution.



Explaining the behaviour of random ecological networks: the stability of the microbiome as a case of integrative pluralism

Abstract

Explaining the behaviour of ecosystems is one of the key challenges for the biological sciences. Since 2000, the new-mechanist framework has been the main model of the nature of scientific explanation in biology. The universality of the new-mechanist view has, however, been called into question by the existence of explanations that account for some biological phenomena in terms of their mathematical properties (mathematical explanations). Supporters of mathematical explanation have argued that the behaviour of ecosystems is usually explained in terms of its mathematical properties rather than in mechanistic terms. They have studied intensively the explanation of the properties of ecosystems that behave as non-random networks, but no attention has been devoted to the nature of the explanation in ecosystems that form a random network. In this paper, we fill that gap by analysing the explanation of the stability of the microbiome recently elaborated by Coyte and colleagues, in order to determine whether it fits the model of explanation proposed by the new-mechanists or the one defended by proponents of mathematical explanation. Our analysis of this case study supports three theses: (1) the explanation is not given solely in terms of mechanisms, as the new-mechanists understand that concept; (2) the mathematical properties that describe the system play an essential explanatory role, but they do not exhaust the explanation; (3) a previously unidentified appeal to the types of interaction that the entities in the network can exhibit, as well as to their abundance, is also necessary for Coyte and colleagues' account to be fully explanatory. From the combination of these three theses we argue for the necessity of an integrative pluralist view of the explanation of behaviour when that explanation appeals to the existence of a random network.
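
As background on the kind of mathematical property at stake (this is the classical random-matrix result due to May, not Coyte and colleagues' own criterion): a large random community of S species with connectance C, interaction strengths of standard deviation \sigma, and normalized self-regulation is almost surely stable only if

\[
\sigma \sqrt{S C} < 1 .
\]

Refinements of this criterion distinguish the sign structure of the interactions (competition, mutualism, exploitation) and their relative frequencies, which is the kind of further information about interaction types and abundances that, on the authors' analysis, must enter a fully explanatory account.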



Causal concepts and temporal ordering

Abstract

Though common sense says that causes must temporally precede their effects, the hugely influential interventionist account of causation makes no reference to temporal precedence. Does common sense lead us astray? In this paper, I evaluate the power of the commonsense assumption from within the interventionist approach to causal modeling. I first argue that if causes temporally precede their effects, then one need not consider the outcomes of interventions in order to infer causal relevance, and that one can instead use temporal and probabilistic information to infer exactly when X is causally relevant to Y in each of the senses captured by Woodward's interventionist treatment. Then, I consider the upshot of these findings for causal decision theory, and argue that the commonsense assumption is especially powerful when an agent seeks to determine whether so-called "dominance reasoning" is applicable.
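
The following toy sketch illustrates the general idea, not the paper's own procedure; the variable names, the linear-Gaussian setup, and the use of partial correlation as a dependence test are all assumptions made for the example. If the temporal order Z-before-X-before-Y is known, whether X makes a probabilistic difference to Y given X's temporal predecessors can be checked from observational data alone, without consulting the outcomes of interventions.

# Illustrative sketch only; hypothetical data-generating process.
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Assumed temporal order: Z occurs before X, and X occurs before Y.
Z = rng.normal(size=n)
X = 0.8 * Z + rng.normal(size=n)
Y = 0.5 * Z + 0.7 * X + rng.normal(size=n)

def partial_corr(a, b, controls):
    """Correlation of a and b after linearly regressing out the controls."""
    C = np.column_stack([np.ones(len(a))] + list(controls))
    resid_a = a - C @ np.linalg.lstsq(C, a, rcond=None)[0]
    resid_b = b - C @ np.linalg.lstsq(C, b, rcond=None)[0]
    return np.corrcoef(resid_a, resid_b)[0, 1]

# X stays probabilistically relevant to Y given X's temporal predecessors,
# which in this toy linear setting tracks the verdict that X is causally
# relevant to Y.
print("corr(X, Y | Z) =", round(float(partial_corr(X, Y, [Z])), 3))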



An ecumenical notion of entailment

Abstract

Much has been said about intuitionistic and classical logical systems since Gentzen's seminal work. Recently, Prawitz and others have been discussing how to put together Gentzen's systems for classical and intuitionistic logic in a single unified system. We call Prawitz's proposal the Ecumenical System, following the terminology introduced by Pereira and Rodriguez. In this work we present an Ecumenical sequent calculus, as opposed to the original natural deduction version, and establish some proof-theoretic properties of the system. We argue that sequent calculi are more amenable to extensive investigation with the tools of proof theory, such as cut-elimination and rule invertibility, and hence allow a fuller analysis of the notion of Ecumenical entailment. We then present some extensions of the Ecumenical sequent system and show that interesting systems arise when such calculi are restricted to specific fragments. This approach of a single system accommodating both classical and intuitionistic features sheds light not only on the logics themselves, but also on their semantic interpretations and on the proof-theoretic properties that can arise from combining logical systems.
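
For orientation, here is standard textbook material, not the rules of the Ecumenical system itself: Gentzen's classical calculus LK allows several formulas in the succedent of a sequent, while the intuitionistic calculus LJ allows at most one, as the right rule for implication makes visible.

\[
\frac{\Gamma, A \vdash B, \Delta}{\Gamma \vdash A \to B, \Delta}\;(\to\!R \text{ in LK})
\qquad
\frac{\Gamma, A \vdash B}{\Gamma \vdash A \to B}\;(\to\!R \text{ in LJ})
\]

This restriction is what blocks, for instance, a derivation of Peirce's law $((A \to B) \to A) \to A$ in LJ; an Ecumenical calculus has to accommodate both behaviours within a single system.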



Hinge epistemology and the prospects for a unified theory of knowledge

Abstract

I defend two theses here. First, I argue that at least many of the commitments that Wittgenstein identifies as "hinge commitments" are plausibly what cognitive psychology and artificial intelligence call "procedural knowledge." Procedural knowledge can be implemented in cognitive systems in a variety of ways, and these modes of implementation, I argue, predict several properties of Wittgensteinian hinge commitments, including their functional profile as well as other characteristic features. Second, I argue that thinking of hinge commitments as a kind of procedural knowledge allows a unified virtue-theoretic treatment of the generation of knowledge, the transmission of knowledge, and Wittgensteinian "hinge knowledge." This last thesis is noteworthy in that Wittgenstein and his defenders have so far failed to offer any unified epistemology of hinge commitments and the knowledge that such commitments are supposed to make possible.



Infinitesimal idealization, easy road nominalism, and fractional quantum statistics

Abstract

It has been recently debated whether there exists a so-called "easy road" to nominalism. In this essay, I attempt to fill a lacuna in the debate by making a connection with the literature on infinite and infinitesimal idealization in science through an example from mathematical physics that has been largely ignored by philosophers. Specifically, by appealing to John Norton's distinction between idealization and approximation, I argue that the phenomenon of fractional quantum statistics bears negatively on Mary Leng's proposed path to easy road nominalism, thereby partially defending Mark Colyvan's claim that there is no easy road to nominalism.
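
For background, and not as part of the paper's argument: exchanging two identical particles multiplies the wavefunction by a phase, and in two spatial dimensions that phase is not restricted to the bosonic and fermionic values.

\[
\psi(x_2, x_1) = e^{i\theta}\,\psi(x_1, x_2), \qquad
\theta = 0 \ \text{(bosons)}, \quad \theta = \pi \ \text{(fermions)}, \quad \theta \in (0, \pi) \ \text{(anyons)}.
\]

It is the idealizations required to obtain such intermediate, "fractional" statistics that the paper brings to bear on the easy-road debate.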



Combining finite and infinite elements: Why do we use infinite idealizations in engineering?

Abstract

This contribution sheds light on the role of infinite idealization in structural analysis by exploring how infinite elements and finite element methods are combined in civil engineering models. This combination, I claim, should be read in terms of a 'complementarity function' through which the representational ideal of completeness is reached in engineering model-building. Taking a cue from Weisberg's definition of multiple-model idealization, I highlight how infinite idealizations are primarily meant to contribute to the prediction of structural behavior in multiphysics approaches.
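
As a rough illustration of the technique the abstract refers to (the specific mapping below is one common textbook form, offered here only as an assumption about what such elements can look like): a mapped infinite element sends a bounded local coordinate to an unbounded physical one,

\[
r(\xi) = \frac{2a}{1 - \xi}, \qquad \xi \in [-1, 1) \ \longmapsto \ r \in [a, \infty),
\]

so that polynomials in \xi correspond to polynomials in 1/r, the kind of decay expected of a far field. Finite elements then model the near field on a bounded mesh, and the two are coupled along their shared boundary; this division of labour is the sort of combination the abstract's 'complementarity function' describes.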



Minimal approximations and Norton's dome

Abstract

In this note, I apply Norton's (Philos Sci 79(2):207–232, 2012) distinction between idealizations and approximations to argue that the epistemic and inferential advantages often taken to accrue to minimal models (Batterman in Br J Philos Sci 53:21–38, 2002) could apply equally to approximations, including "infinite" ones for which there is no consistent model. This shows that the strategy of capturing essential features through minimality extends beyond models, even though the techniques for justifying this extended strategy remain similar. As an application I consider the justification and advantages of the approximation of an inertial reference frame in Norton's dome scenario (Philos Sci 75(5):786–798, 2008), thereby answering a question raised by Laraudogoitia (Synthese 190(14):2925–2941, 2013).
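
For readers unfamiliar with the example, this is Norton's published setup, summarized here rather than added to the paper's argument: a unit point mass rests at the apex of a dome whose surface drops a height h(r) = \frac{2}{3g} r^{3/2} below the apex at arc length r from it. Newton's second law along the surface then reads

\[
\frac{d^{2}r}{dt^{2}} = \sqrt{r},
\]

which, besides the solution r(t) = 0 for all t, admits for every time T the solution

\[
r(t) =
\begin{cases}
0, & t \le T,\\[2pt]
\tfrac{1}{144}\,(t - T)^{4}, & t \ge T,
\end{cases}
\]

so the mass may spontaneously begin to move at an arbitrary moment. The note's application concerns the justification and advantages of the approximation of an inertial reference frame in this scenario.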


