Causality and neural networks

Bibliography to use

Other sources

“The extreme case of model interpretability is when we are trying to establish a mechanistic model, that is, a model that actually captures the phenomena behind the data. Good examples include trying to guess whether two molecules (e.g. drugs, proteins, nucleic acids, etc.) interact in a particular cellular environment or hypothesizing how a particular marketing strategy is having an actual effect on sales. Nothing really beats old-style Bayesian methods informed by expert opinion in this realm; they are our best (if imperfect) way we have to represent and infer causality. Vicarious has some nice recent work illustrating why this more principled approach generalizes better than deep learning in videogame tasks”.
http://hyperparameter.space/blog/when-not-to-use-deep-learning/
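
The marketing-and-sales example in the quote can be made concrete with a toy causal (Bayesian) model. The sketch below is not from the cited post: the structure (Season confounds both Marketing and Sales) and all probabilities are invented here, purely to show how an expert-specified model separates the observational quantity P(Sales | Marketing) from the causal one P(Sales | do(Marketing)) via back-door adjustment.

<code python>
# Toy expert-specified causal model for the quote's marketing/sales example.
# Structure and probabilities are invented: Season -> Marketing, Season -> Sales,
# Marketing -> Sales (Season is a confounder).
P_season    = {0: 0.6, 1: 0.4}                        # P(Season = s)
P_marketing = {0: 0.2, 1: 0.7}                        # P(Marketing = 1 | Season = s)
P_sales     = {(0, 0): 0.1, (0, 1): 0.4,              # P(Sales = 1 | Season = s, Marketing = m)
               (1, 0): 0.5, (1, 1): 0.8}

def p_sales_given_marketing(m):
    """Observational P(Sales=1 | Marketing=m): conditioning also shifts the Season distribution."""
    num = den = 0.0
    for s in (0, 1):
        p_m = P_marketing[s] if m == 1 else 1 - P_marketing[s]
        num += P_season[s] * p_m * P_sales[(s, m)]
        den += P_season[s] * p_m
    return num / den

def p_sales_do_marketing(m):
    """Interventional P(Sales=1 | do(Marketing=m)): Season keeps its prior (back-door adjustment)."""
    return sum(P_season[s] * P_sales[(s, m)] for s in (0, 1))

print("observational  P(Sales=1 | Marketing=1):    ", round(p_sales_given_marketing(1), 3))
print("interventional P(Sales=1 | do(Marketing=1)):", round(p_sales_do_marketing(1), 3))
</code>

With these invented numbers, conditioning on Marketing=1 gives P(Sales=1) = 0.68, while intervening gives 0.56: the gap is exactly the confounding that a purely predictive model would mistake for a causal effect.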

“Discovering causal models from observational and interventional data is an important first step preceding what-if analysis or counterfactual reasoning. As has been shown before, the direction of pairwise causal relations can, under certain conditions, be inferred from observational data via standard gradient-boosted classifiers (GBC) using carefully engineered statistical features. In this paper we apply deep convolutional neural networks (CNNs) to this problem by plotting attribute pairs as 2-D scatter plots that are fed to the CNN as images. We evaluate our approach on the 'Cause-Effect Pairs' NIPS 2013 Data Challenge. We observe that a weighted ensemble of CNN with the earlier GBC approach yields significant improvement. Further, we observe that when less training data is available, our approach performs better than the GBC based approach suggesting that CNN models pre-trained to determine the direction of pairwise causal direction could have wider applicability in causal discovery and enabling what-if or counterfactual analysis”.
Deep Convolutional Neural Networks for Pairwise Causality
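
A minimal sketch of the idea described in the abstract: each attribute pair is rasterized into a small image and a CNN is trained to classify the causal direction. This is not the authors' implementation; the 32×32 2-D histogram (standing in for their scatter-plot images), the synthetic cause-effect generator, and the tiny Keras network are illustrative assumptions.

<code python>
# Illustrative sketch (not the paper's code): turn each (x, y) attribute pair into
# a 2-D histogram "image" and train a small CNN to predict the causal direction.
# Bin count, architecture and the synthetic data generator are assumptions.
import numpy as np
import tensorflow as tf

def pair_to_image(x, y, bins=32):
    """Rasterize a variable pair as a normalized 2-D histogram (scatter-plot proxy)."""
    h, _, _ = np.histogram2d(x, y, bins=bins, range=[[-3, 3], [-3, 3]])
    return (h / (h.max() + 1e-8))[..., np.newaxis]    # scale to [0, 1], add channel axis

def synthetic_pair(rng, n=500):
    """One synthetic cause-effect pair: x causes y through a random nonlinearity."""
    x = rng.normal(size=n)
    y = np.tanh(rng.uniform(0.5, 2.0) * x) + 0.3 * rng.normal(size=n)
    return x, y

rng = np.random.default_rng(0)
images, labels = [], []
for _ in range(400):
    x, y = synthetic_pair(rng)
    if rng.random() < 0.5:
        images.append(pair_to_image(x, y)); labels.append(1)   # label 1: X -> Y
    else:
        images.append(pair_to_image(y, x)); labels.append(0)   # label 0: axes swapped, Y -> X
X_img = np.stack(images).astype("float32")
y_lab = np.array(labels, dtype="float32")

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # predicted P(direction is X -> Y)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_img, y_lab, epochs=5, batch_size=32, validation_split=0.2, verbose=0)
</code>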

“What's the relation between hierarchical models, neural networks, graphical models, bayesian networks?”

The Epistemology of Data Use: Sabina Leonelli ALS, Dec. 1, 2017
ABSTRACT: This talk examines the epistemology of data by addressing the challenges raised by ‘big data science’, and particularly the dissemination and re-use of large datasets via intricate and nested infrastructures such as digital databases. Empirically, my analysis is grounded on the in-depth qualitative study of “data journeys”, that is, ways in which datasets are circulated and used for a variety of purposes across several different contexts. Conceptually, the talk brings my previous work on the relational nature of data to bear on existing philosophy of inductive reasoning and the triangulation of multiple lines of evidence (most prominently by John Norton, Alison Wylie and William Wimsatt), with the aim of outlining conditions under which big data can be used to reliably inform inferential reasoning. I conclude by highlighting five ways in which data science that fails to operate under such conditions could significantly damage scientific methods and the credibility of research outputs.
https://www.youtube.com/watch?v=qc1aQep4DE8
