CfP: Minds & Machines Special Issue on "Machine Learning: Prediction Without Explanation?"


Machine Learning: Prediction Without Explanation?

Description
Over the last few decades, Machine Learning (ML) techniques have gained central prominence in many areas of science. ML typically aims at pattern recognition and prediction, and in many cases it has become a better tool for these purposes than traditional methods. The downside, however, is that ML does not seem to provide any explanations, at least not in the same sense as theories or traditional models do.

This apparent lack of explanation is often also linked to the opacity of ML techniques, sometimes referred to as the ‘Black Box Challenge’. Methods such as heat maps or adversarial examples aim at reducing this opacity and opening the black box. At present, however, it remains an open question what exactly these methods explain, how they do so, and what the nature of the resulting explanations is.
While in some areas of science this may not create any interesting philosophical challenges, in many fields, such as medicine, climate science, or particle physics, an explanation may be desired, among other things for the sake of rendering subsequent decision and policy making transparent. Moreover, explanation and understanding are traditionally construed as central epistemic aims of science in general. Does a turn to ML techniques hence imply a radical shift in the aims of science? Does it require us to rethink science-based policy making? Or does it mean we need to rethink our concepts of explanation and understanding?

In this Special Issue, we want to address this complex of questions regarding explanation and prediction as it arises for ML applications in science and beyond.
We invite papers focusing on, but not restricted to, the following topics:

•    (How) can ML results be used for the sake of explaining scientific observations?
•    If so, what is the nature of these explanations?
•    Will future science favor prediction over explanation?
•    If so, what does this mean for science-based decision and policy making?
•    What is explained about ML by methods such as saliency maps and adversarial examples?
•    Does ML introduce a shift from classical notions of scientific explanation, such as causal-mechanistic, covering-law, or unification-based ones, towards a purely statistical one?
•    (Why) should we trust ML applications, given their opacity?
•    (Why) should we care about the apparent loss of explanatory power?

The Special Issue is guest edited by members of the project “The impact of computer simulations and machine learning on the epistemic status of LHC Data”, part of the DFG/FWF-funded interdisciplinary research unit “The Epistemology of the Large Hadron Collider”.

For more information, please visit https://www.lhc-epistemology.uni-wuppertal.de

Timetable
Deadline for paper submissions: 28 February 2021
Deadline for paper reviewing: 19 April 2021
Deadline for submission of revised papers: 03 May 2021
Deadline for reviewing revised papers: 07 June 2021
Papers will be published in 2021.

Submission Details
To submit a paper for this special issue, authors should go to the journal’s Editorial Manager at https://www.editorialmanager.com/mind/default.aspx. The author (or a corresponding author in the case of co-authored papers) must register in Editorial Manager.
The author must then select the special article type “Machine Learning: Prediction without Explanation?” from the selection provided in the submission process. This is needed in order to assign the submissions to the Guest Editors.
Submissions will then be assessed according to the following procedure: 
New Submission => Journal Editorial Office => Guest Editor(s) => Reviewers => Reviewers’ Recommendations => Guest Editor(s)’ Recommendation => Editor-in-Chief’s Final Decision => Author Notification of the Decision.
The process will be repeated in the case of requests for revisions.

Guest Editors
•    Dr. Florian J. Boge, postdoctoral researcher, Interdisciplinary Centre for Science and Technology Studies (IZWT), Wuppertal University
•    Paul Grünke, doctoral student, research group “Philosophy of Engineering, Technology Assessment, and Science”, Institute for Technology Assessment and Systems Analysis (ITAS), Karlsruhe Institute of Technology (KIT)
•    Prof. Dr. Dr. Rafaela Hillerbrand, head of the research group “Philosophy of Engineering, Technology Assessment, and Science”, Institute for Technology Assessment and Systems Analysis (ITAS), Karlsruhe Institute of Technology (KIT)

For any further information, please contact:
-    Dr. Florian J. Boge
-    Paul Grünke