
                      Keynote Lectures

                      Explanations on the Web: a Provenance-based Approach
                      Luc Moreau, King's College London, United Kingdom

                      Available Soon
                      Stefan Decker, RWTH Aachen University, Germany

                      Hybrid Intelligence: AI Systems that Collaborate with People Instead of Replacing Them
                      Frank van Harmelen, The Hybrid Intelligence Center & Vrije Universiteit Amsterdam, Netherlands

                       

                      Explanations on the Web: a Provenance-based Approach

                      Luc Moreau
                      King's College London
                      United Kingdom
                       

                      Brief Bio

                      Luc Moreau is a Professor of Computer Science and Head of the department of Informatics, at King's College London.

                      He has conducted research in various areas of Computer Science, including programming languages, distributed algorithms, distributed systems, and the Web. Luc is renowned for his work on provenance. He was co-chair of the W3C Provenance Working Group, which produced four W3C Recommendations and nine W3C Notes specifying PROV, a conceptual data model for provenance on the Web, and its serializations in various Web languages. Previously, he initiated the successful Provenance Challenge series, which saw the involvement of over 20 institutions investigating provenance interoperability in three successive challenges, and which resulted in the specification of the community Open Provenance Model (OPM). Before that, he led the development of provenance technology in the FP6 Provenance project and the Provenance Aware Service Oriented Architecture (PASOA) project.

                      He is currently the principal investigator of three projects: PA4C2: Provenance Analytics for Command and Control; PLEAD: Provenance-driven and Legally-grounded Explanations for Automated Decisions (https://plead-project.org/); and THUMP: Trust in Human-Machine Partnerships (https://thump-project.ai).


                      Abstract
                      AI-based automated decisions are increasingly used as part of new services being deployed to the general public over the Web. This approach to building services presents significant potential benefits, such as increased speed of execution, increased accuracy, lower cost, and the ability to adapt to a wide variety of situations. However, equally significant concerns have been raised and are now well documented, including concerns about privacy, fairness, bias and ethics. On the consumer side, more often than not, the users of those services are provided with no or inadequate explanations for decisions that may impact their lives.

                      Meanwhile, a decade of research on provenance, a standardisation of provenance at the World Wide Web Consortium (PROV), and applications, toolkits and services adopting provenance have led to the recognition that provenance is a critical facet of good data governance for businesses, governments and organisations in general. Provenance, which is defined as a record that describes the people, institutions, entities, and activities involved in producing, influencing, or delivering a piece of data or a thing, is now regarded as an essential function of data-intensive applications, to provide a trusted account of what they performed.
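                      As a hypothetical illustration of this definition, a minimal provenance record in the W3C PROV-N notation (one of the PROV serializations) might look as follows; the names (ex:report, ex:compiling, ex:alice) are invented for the example:

```
document
  prefix ex <http://example.org/>

  entity(ex:report)
  activity(ex:compiling, 2021-01-01T10:00:00, 2021-01-01T11:00:00)
  agent(ex:alice, [prov:type='prov:Person'])

  wasGeneratedBy(ex:report, ex:compiling, 2021-01-01T11:00:00)
  wasAssociatedWith(ex:compiling, ex:alice)
  wasAttributedTo(ex:report, ex:alice)
endDocument
```

                      The record names an entity (the report), the activity that generated it, and the agent responsible, linked by the PROV generation, association, and attribution relations; it is exactly this kind of trusted account of what happened that a provenance-based explanation builds on.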

                      In this talk, I will show that such a provenance record can provide a solid foundation for generating explanations about decisions. The talk will overview the notion of provenance, will outline key steps in constructing explanations, and will report on our experience in three projects: PA4C2: Provenance Analytics for Command and Control; PLEAD: Provenance-driven and Legally-grounded Explanations for Automated Decisions (https://plead-project.org/); and THUMP: Trust in Human-Machine Partnerships (https://thump-project.ai/).



                       

                       

                      Keynote Lecture

                      Stefan Decker
                      RWTH Aachen University
                      Germany
                       

                      Brief Bio
                      Available Soon


                      Abstract
                      Available Soon



                       

                       

                      Hybrid Intelligence: AI Systems that Collaborate with People Instead of Replacing Them

                      Frank van Harmelen
                      The Hybrid Intelligence Center & Vrije Universiteit Amsterdam
                      Netherlands
                       

                      Brief Bio
                      Frank van Harmelen has a PhD in Artificial Intelligence from Edinburgh University, and has been professor of AI at the Vrije Universiteit since 2001, where he leads the research group on Knowledge Representation. He was one of the designers of the knowledge representation language OWL, which is now in use by companies such as Google, the BBC, the New York Times, Amazon, Uber, Airbnb, Elsevier, Springer Nature, XMP, and Renault, among others. He co-edited the standard reference work in his field (The Handbook of Knowledge Representation), and received the Semantic Web 10-year impact award for his work on the open source software Sesame (over 200,000 downloads). He is a Fellow of the European Association for Artificial Intelligence, member of the Dutch Royal Academy of Sciences (KNAW), of The Royal Holland Society of Sciences and Humanities (KHMW) and of the Academia Europaea, and is adjunct professor at Wuhan University and Wuhan University of Science and Technology in China.


                      Abstract
                      Much of current AI research is implicitly aimed at building systems that replace humans: self-driving cars to replace Uber drivers, translation software to replace interpreters, image analysis software to replace radiologists. But it is becoming increasingly clear that machine intelligence will be rather different from human intelligence. It is therefore more interesting to build AI systems that collaborate in hybrid teams of people and machines, combining their complementary skills. This will require that we start asking a whole set of new research questions. How to equip AI systems with a "theory of mind" to make them collaborative? How to make AI systems adaptive to changes in the team and the environment? How to instill moral values into these systems? And, of course, how to make them explainable? We will outline a research agenda for hybrid intelligence and present some early results from research worldwide into hybrid intelligence.


