Keynote: Intelligence vs. Self-organization in a Hybrid Society, Cristiano Castelfranchi

From natural and artificial to hybrid social intelligence: Towards socio-cognitive technical systems

The current explosion and widespread adoption of social network services is deeply impacting how human societies function. Though the impact of these new technologies in the long run is difficult to assess, a major problem stems from the way such technologies are designed. In the absence of a rigorous understanding of how societies work, evolve and change, social network services risk unintentionally causing deep and structural social change with unforeseen negative consequences, and missing opportunities for positive social innovation. Although social network technologies are nowadays already fused with human sociality, the future emerging societies are at risk of becoming an unpredictable mutant.

Consider the problem of privacy. Social network technologies are inevitably changing the way the private and public spheres are conceived by the new generation of digital natives. Social network technologies are inadvertently promoting new social norms and unintentionally changing human self-conception. As an unintended side-effect, a constitutive conception of personhood and autonomy might be eroded.

There is thus the need for a new generation of tools for human societies. These new tools should be conceived from the start on the basis of the core principles characterizing human societies and human cognitive development, should be designed with a view to socially desirable outcomes, should be aware of the subtleties that are intrinsic to human sociality, and should be able to anticipate and monitor the inevitable new spontaneous social order.

Indeed, as is well known, one peculiar feature of human societies is that they are based on a level of cooperation that is not achieved by any other biological species and was for a long time left unexplained. During the last decade, however, there has been an enormous rise in the scientific study of human cooperation, and nowadays there is a consolidated body of theoretical and empirical results that explain how cooperation in human societies is indeed possible. Such a conceptual toolbox has been the product of a merging of different disciplines: from biology to economics, from sociology to cognitive science. This interdisciplinary approach to natural social intelligence has identified a number of mechanisms that support human societies (like reputation, punishment, trust, norms and social and legal institutions, etc.) and has developed new formal and conceptual frameworks to approach these problems.

At the same time as the explosion of cooperation studies in the social sciences, computer science has given birth to artificial social intelligence: from early distributed artificial intelligence, in which a massive number of autonomous intelligent computational entities interact in order to achieve collective objectives, to the domain of Multi-Agent Systems, in which software applications have been designed from scratch as societies of software agents. Still, this artificial social intelligence has been conceived mainly as a closed artificial society mirroring human ones but with no real interaction.

A new generation of tools for human societies is however possible. By promoting a new interdisciplinary alliance between the cognitive sciences, social sciences and computer science, new paradigms to design a new form of hybrid - partly natural and partly artificial - social intelligence can be developed. These future systems will support human-like social features like cooperation, trust, norms, etc. They will be anchored in the complexities of human cognitive systems. As a consequence, these systems, partly made of autonomous and intelligent entities and partly made of humans, will be able to embody crucial principles of human sociality and offer new ecological niches. In order to build such systems, there is the need to promote interdisciplinary research between computer science, engineering, cognitive sciences, philosophy, economics and sociology.

This is the era of Socio-Cognitive Technical Systems.

A Working Document of the European Network for Social Intelligence, June 2013, www.sintelnet.eu
Authors: Cristiano Castelfranchi & Luca Tummolini (Institute of Cognitive Sciences and Technologies, Italian National Research Council)
[cristiano.castelfranchi ; luca.tummolini]@istc.cnr.it

http://www.sintelnet.eu/wiki/garbage/docs/sourcebook/positionpapers/SCTS-Castelfranchi&Tummolini2.pdf

Slides

Quick notes

Socio-technical systems require new skills, new conventions, a new view of almost everything. The physical and the virtual are intermixed. They require an augmented body and an augmented mind, because we live in an augmented reality, inhabiting two worlds at the same time.

This organisation cannot be planned; it is a spontaneous order, it emerges.

  • Not only bounded rationality (Simon)
  • but COMPLEXITY
  • and COMPUTATIONAL INTELLIGENCES
  • because of the intrinsic blindness typical of organized institutions

We need a new Simon to explain rationality at the collective level.

1. General perspective

The COGNITIVE MEDIATORS of social phenomena: richer cognitive models for “artificial intelligences”.

COGNITIVIZING: cooperation, conflict, power, social values, commitment, norms, rights, social order, trust.

Pareto, Garfinkel: the social sciences defined as opposed to psychology. We need to go back.

We need MIND READING because agents' behaviours are due to the mental mechanisms creating and controlling them.

A theory of the brain that avoids the mind does not allow us to understand artificial intelligences.

Social interactions are artifacts not only for coordination but also to predict and prescribe the mental states of participants. THE CENTRAL DEVICE IS MIND PRESUPPOSING AND MODIFICATION.

  • We need MIND MODIFICATION models: goal adoption and goal induction, my mind and the other's mind
  • social coordination works “as if” they have a mind
  • MIND is a social artifact; our social minds are social institutions
  • ASCRIBED and ENDOWED minds are crucial coordination artifacts because they create the common ground, shared knowledge
  • COMMUNICATION is also for shaping minds

BUT MIND IS NOT ENOUGH

  • The social actors do NOT understand, negotiate and plan
  • Identify the MENTAL MEDIATOR: unavoidable alienation, the Leviathan, Demo-crazy
  • We need to understand how we build something we do not yet understand.

MIND NOT ENOUGH - SELF ORGANIZATION

  • emergence & immergence
  • cognitive emergence, dependence networks, interference in the world
  • spontaneous social order: Friedrich Hayek: emergence must be functional (Hayek: Knowledge. Market. Planning)
  • Adam Smith's invisible hand: the teleological reading, as if the invisible hand pursued social order, is ideological and too optimistic and must be rejected; but social order is emergent, as Smith said
  • How is it possible that we pursue something that is not an intention of ours?

2. Theory of function

A theory of emerging functions among cognitive agents is NEEDED.

In a hybrid world we can reduce the human affective handicap by providing more reliable data.

Social functions require an extra-cognitive kind of emergence: the effectiveness of a social function is independent of the agents' understanding of the effect of this function on their own behaviour.

Two finalistic systems (a sketch contrasting them follows below):

  • goal-oriented
  • goal-governed

Functional, yes; teleological, no.
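
A minimal Python sketch of this distinction (my own illustration, not from the talk; the thermostat scenario and class names are invented for the example): a goal-oriented system merely behaves by a rule that was selected because it produces an outcome, while a goal-governed system explicitly represents the goal and acts on the mismatch between its beliefs and that goal.

<code python>
# Illustrative sketch only: the scenario and names are assumptions for this example.

class GoalOrientedAgent:
    """Behaves by a fixed rule selected because it keeps the room warm;
    the 'goal' exists only in the eye of the observer, not inside the agent."""
    def act(self, temperature: float) -> str:
        return "heat" if temperature < 20 else "idle"

class GoalGovernedAgent:
    """Explicitly represents a goal state and acts on the mismatch between
    its belief about the world and that goal."""
    def __init__(self, goal_temperature: float):
        self.goal = goal_temperature              # explicit internal goal

    def act(self, believed_temperature: float) -> str:
        if believed_temperature < self.goal:      # belief vs. goal comparison
            return "heat"
        return "idle"

if __name__ == "__main__":
    print(GoalOrientedAgent().act(18))     # "heat", but no goal is represented
    print(GoalGovernedAgent(22).act(18))   # "heat", driven by an explicit goal
</code>

Only the second kind of system can adopt, revise or negotiate its goals, which is why the cognitive level matters for the functions discussed next.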

HOW ARE KAKO-FUNCTIONS POSSIBLE?

  • they cannot be explained in behaviouristic or reinforcement scenarios
  • the notion of function as SELECTING and REPRODUCING its own causes
  • we need COMPLEX REINFORCEMENT LEARNING FORMS operating on GOALS and BELIEFS, that is, on the cognitive representations (see the sketch after this list)
  • example of a kako-function: dirty and clean screens
  • institutional-level vicious circles: prisons reproduce delinquency
  • a FUNCTION is something SELF-REPRODUCING AND SELF-PRODUCED, emergent
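
A minimal sketch of how such a self-reproducing function can emerge (my own numerical illustration with invented coefficients, not the authors' model), using the prisons example above: a practice reinforces the very beliefs that trigger it, so it persists even though it serves nobody's goals.

<code python>
# Toy model of a self-reproducing kako-function (all numbers are invented):
# agents punish more when they believe the threat is high; harsher punishment
# raises recidivism; observed recidivism feeds back into the belief that more
# punishment is needed. The practice reproduces its own causes without anyone
# intending or understanding this at the individual level.
import random

random.seed(1)
perceived_threat = 0.5     # shared belief: "more punishment is needed"
recidivism = 0.5           # actual effect of the punitive practice

for step in range(50):
    punishment = perceived_threat                  # goal-governed individual choice
    recidivism = 0.3 + 0.6 * punishment + random.uniform(-0.05, 0.05)  # unintended effect
    # "reinforcement" acting on beliefs, not on raw behaviour:
    perceived_threat = 0.8 * perceived_threat + 0.2 * recidivism

print(f"perceived_threat={perceived_threat:.2f}, recidivism={recidivism:.2f}")
# Both values drift upward toward a fixed point near 0.75: the function is
# self-produced and self-reproducing, emergent at the collective level.
</code>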

3. Blind sociality

Obeying norms blindly makes norms work, because the issuer sees the norm as a tool for a problem. We trust that the norm serves the social good; think of Socrates taking the poison. But part of the norm still has to be at least partially understood.

We blindly reify and objectify power. We dress the king with our eyes.

The “mistakes”, like the idea of god, work very well socially. Their working does not depend on existence.

Social Control

  • KEEP CONTROL: delegate to the AI HOW to achieve a goal, but do not let it choose WHICH goal to achieve
  • OPEN DELEGATION: transparently make all goals known
  • AVOID UNAWARE COOPERATION: goal adoption is better than goal delegation

We need adjustable autonomy (a minimal sketch follows the list below).

  • MONITOR PEOPLE to understand why they need to violate norms: there is a possible danger in the formalization and enforcement of rules
  • violations sometimes produce better functionality
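
A minimal sketch of these control principles (an illustration under my own assumptions; the class names, the 0.5 threshold and the example goal are invented, not an existing API): the human decides WHICH goal to pursue, the artificial agent only decides HOW, it declares all its sub-goals openly, and its autonomy level can be adjusted so that low autonomy requires human approval for each step.

<code python>
# Illustrative sketch only: names, threshold and goals are assumptions.
from dataclasses import dataclass, field

@dataclass
class Plan:
    goal: str                                    # WHICH goal: chosen by the human
    steps: list = field(default_factory=list)    # HOW: chosen by the agent, declared openly

class AssistantAgent:
    def __init__(self, autonomy: float):
        self.autonomy = autonomy    # 0.0 = ask about everything, 1.0 = act freely

    def propose_plan(self, goal: str) -> Plan:
        # Open delegation: every sub-goal is made known before acting.
        return Plan(goal, [f"gather data for {goal}",
                           f"execute {goal}",
                           f"report on {goal}"])

    def execute(self, plan: Plan, approve) -> None:
        for step in plan.steps:
            # Adjustable autonomy: below the threshold every step needs approval,
            # so unaware cooperation is avoided and control stays with the human.
            if self.autonomy < 0.5 and not approve(step):
                print(f"skipped (human veto): {step}")
                continue
            print(f"done: {step}")

if __name__ == "__main__":
    agent = AssistantAgent(autonomy=0.3)
    plan = agent.propose_plan("organise the meeting")     # goal set by the human
    agent.execute(plan, approve=lambda step: "execute" not in step)
</code>

Raising the autonomy value above the threshold would let the agent act without asking, which is exactly the kind of delegation the notes warn against unless all goals remain open.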

Concluding remarks

  • we are engineering a new society
  • reconcile emergence and self-organization with intelligence and people's participation
  • self organization = out of mind
    • society works thanks to our PARTIAL INTELLIGENCE, not knowing what's going on at the social level
    • will the invisible hand become a computational invisible intelligence orchestrating societies? PRESERVE SELF-ORGANIZATION
    • reconcile emergence and cognition
    • in hybrid societies, information is needed so that people can know the complete picture of norms and all their effects
    • alienation
    • worry: net-demagogy
    • Mark Twain: if voting could change the social order, they would not let us do it

    The book Computational Intelligent Data Analysis for Sustainable Development shows how predicting without understanding is possible in this area as well.

Science will be computational or will not be

  • AI (Artificial Intelligence) was the first attempt
  • not just models but EXPERIMENTAL PLATFORMS, VR

The Goal-Oriented Agents Lab (GOAL) is an interdisciplinary group that carries out research on finalistic behavior in intelligent agents. Key areas of activity are Cognitive Systems, Social Cognition, Action Control, Decision Making, and Emotions. Since the 70s, members of the group have developed a novel approach to cognition, known as goal theory. www.istc.cnr.it/group/goal

Q&A

  • Formalization is not necessarily bad; what is bad is to create a social model in which the violation of the norm is not contemplated
  • Big data is like gravity: since Newton we have known how it works in practice, it is a law, but it is not a theory because we do not know what it really is. With big data we obtain spectacular prediction results by mining large amounts of data, but we do not understand the social mechanisms involved
  • social simulations
    • with insects we can predict social complexity
    • we can explain without cognitive agents; that's true
    • technology for collective intelligence
    • VDI is just a preliminary step
    • do we always need an emotional mind in simulations? Castelfranchi thinks not
  • there are no perfect technical solutions to political problems, because the cause is that there are CONFLICTING PERSONAL INTERESTS