We can learn a great deal about how the human mind works by studying the workings of minds radically different from our own. That is the central idea of the philosopher of science and historian [[https://en.wikipedia.org/wiki/Peter_Godfrey-Smith|Peter Godfrey-Smith]], who in his book [[https://www.amazon.es/mentes-or%C3%ADgenes-profundos-consciencia-Pensamiento/dp/8430619062/|Otras mentes. El pulpo, el mar y los orígenes profundos de la consciencia]] traces the evolution of the distributed mind of the cephalopods and speculates about the common origin of minds so different from those of hominids.
  
[[https://www.revistadelauniversidad.mx/articles/f245334c-b871-490b-9fba-68cf6cc40279/otras-mentes-de-peter-godfrey-smith|Review of the book in the //Revista de la Universidad de México//]]
===== Background: 1943, the McCulloch-Pitts neuron model =====
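As a quick reference, here is a minimal sketch of the 1943 McCulloch-Pitts threshold unit: binary inputs, fixed excitatory or inhibitory weights, and a hard threshold. The function name and the example gates are illustrative, not taken from these episode notes.

<code python>
def mcp_neuron(inputs, weights, threshold):
    """Fire (return 1) iff the weighted sum of binary inputs reaches the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# An AND gate: both excitatory inputs must be active to reach threshold 2.
assert mcp_neuron([1, 1], [1, 1], threshold=2) == 1
assert mcp_neuron([1, 0], [1, 1], threshold=2) == 0

# An OR gate: a single active input is enough with threshold 1.
assert mcp_neuron([0, 1], [1, 1], threshold=1) == 1
</code>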
  
The European Union is regulating a [[https://ora.ox.ac.uk/catalog/uuid:593169ee-0457-4051-9337-e007064cf67c/download_file?file_format=pdf&safe_filename=euregs.pdf&type_of_work=Journal+article|Right to Explanation]] with respect to automated decisions.
  
{{:podcast:episodios:peter-norvig.png?360 |}}
Computer scientist Peter Norvig addresses this problem in [[https://www.youtube.com/watch?v=_VPxEcT_Adc&t=1111s|this interview conducted by Lex Fridman]] on his Artificial Intelligence podcast (minutes 18 to 25), although Norvig shifts the problem from "explainability" to trust in the accuracy of the results produced by 'black box' systems.

//... Secondly, we could somehow learn that there is this rule that you can remove one grain of sand, and you can do that a bunch of times, but you can't do it an infinite number of times; on the other hand, when you're doing induction on the integers it's fine to do it an infinite number of times. We have to somehow learn when these strategies are applicable, rather than having the strategies be completely neutral and available everywhere, anytime. With neural networks, anytime you learn from data and form a representation in an automated way, it's not very explainable, or not introspectable to us humans: how does this neural network see the world, why does it succeed so brilliantly in so many cases and fail so miserably in surprising ways? So what do you think the future is: can simply more data, better data, more organized data solve that problem, or are there elements of symbolic systems that need to be brought in, which are a little more explainable?//

//So I prefer to talk about trust, validation and verification rather than just about explainability, and I think explanations are one tool that you use towards those goals. It is an important issue: we don't want to use these systems unless we trust them, and we want to understand where they work and where they don't work, and an explanation can be part of that. So I apply for a loan and I get denied; I want some explanation of why. In Europe we have the GDPR, which says you're required to be able to get that, but on the other hand an explanation alone is not enough. We're used to dealing with people and with organizations and corporations, and they can give you an explanation, but you have no guarantee that that explanation relates to reality. The bank can tell me, "well, you didn't get the loan because you didn't have enough collateral", and that may be true, or it may be true that they just didn't like my religion or something else. I can't tell from the explanation, and that's true whether the decision was made by a computer or by a person. So I want more: I do want to have the explanations, and I want to be able to have a conversation, to go back and forth and say, "you gave this explanation, but what about this? What would have happened if this had happened, and what would I need to change that?" So I think a conversation is a better way to think about it than just an explanation as a single output, and I think we need testing of various kinds. In order to know whether the decision was really based on my collateral, or on my religion or skin color or whatever, I can't tell if I'm only looking at my case; but if I look across all the cases, then I can detect the pattern. So you want to have that kind of capability; you want to have this kind of adversarial testing.//
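
Following Norvig's point that a single explanation is not enough and that patterns only show up when you look across many cases, the sketch below illustrates that kind of audit. The black-box scorer ''score_loan'', the applicant fields and the group labels are hypothetical stand-ins, not anything from the interview: the idea is simply to compare approval rates per group and to check whether flipping a single protected attribute changes an individual decision.

<code python>
import random

def score_loan(applicant):
    # Hypothetical black box; in practice this would be the deployed model.
    return 1 if applicant["collateral"] > 50_000 else 0

def counterfactual_test(model, applicant, attribute, alt_value):
    """Flip one protected attribute and check whether the decision changes."""
    modified = dict(applicant, **{attribute: alt_value})
    return model(applicant) != model(modified)

def group_approval_rates(model, applicants, attribute):
    """Approval rate per group; large gaps are a signal worth investigating."""
    counts = {}
    for a in applicants:
        group = counts.setdefault(a[attribute], [0, 0])
        group[0] += model(a)
        group[1] += 1
    return {g: approved / total for g, (approved, total) in counts.items()}

# Probe the model across many synthetic cases instead of asking for one explanation.
applicants = [
    {"collateral": random.randint(0, 100_000), "religion": random.choice("AB")}
    for _ in range(1_000)
]
print(group_approval_rates(score_loan, applicants, "religion"))
print(counterfactual_test(score_loan, applicants[0], "religion", "B"))
</code>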
  
  