The European Commission "The European Commission’s guidelines on ethics in artificial intelligence" (AI), published in April 2019, recognised the importance of a 'human-centric' approach to AI that is respectful of European values. Dedicated training schemes to prepare for the integration of ‘human-centric’ AI into European innovation and industry are now needed. AIs should be able to collaborate with (rather than replace) humans. Safety critical applications of AI technology are ‘human-in-the-loop’ scenarios, where AI and humans work together, as manufacturing processes, IoT systems, and critical infrastructures. The concept of Collaborative Intelligence is essential in these scenarios. The CISC EID will nurture and train 14 world class-leading Collaborative Intelligence Scientists for safety critical situations and provide a blue-print for postgraduate training in this area. The development of Collaborative Intelligence systems requires an interdisciplinary skill set blending expertise across AI, Human Factors, Neuroergonomics and System Safety Engineering. This inter-disciplinary skill-set is not catered for in traditional training courses at any level. The CISC training programme will develop Collaborative Intelligence Scientists with the expertise and skill set necessary to carry-out the major tasks required to develop a Collaborative Intelligence system: (1) Modelling the dynamics of system behaviours for the production processes, IoT systems, and critical infrastructures (System Safety Engineering); (2) Designing and implementing processes capable of monitoring
interactions between automated systems and the humans destined to use them (Human Factors/Neuroergonomics); (3) Using data analytics and AI to create novel human-in-the-loop automation paradigms to support decision making and/or anticipate critical scenarios; and, (4) Managing the Legal and Ethical implications in the use of physiologyrecording wearable sensors and human performance data in AI algorithms.