Tutorial Program

Tutorial 1: The Journey from Similar to Plausible Situations: Human and Mathematical Aspects.

Instructor:
Prof. Peeter Lorents, Estonian Business School, Estonia

Time: Friday, May 14 13:00 - 16:30 (GMT)

Abstract: People have relied on similarity for millennia in reaching decisions, long before they could process large amounts of data, use statistical methods, and so on. In order to guess what might happen in the future, or what happened in the past, we try to recall similar situations and similar developments. The conclusions that people reach in this way seem plausible enough for them to make informed decisions. While acknowledging this fact, it is probably not too much to ask what similarity is, how to evaluate it, and how to relate it to plausibility. To do this, we first need an overview of the relevant concepts from the field of algebraic systems and logic. Second, we need to know about the remarkable connection between the (algebraic) similarity of the structure of things and the (logical) veracity of the statements that characterize them. But, which is particularly nice, it turns out that the procedures for assessing similarity and comparing plausibility are not complicated at all. It must be acknowledged, though, that this simplicity rests on hundreds of pages of mathematics and, to be quite precise, reduces to two additions of natural numbers and one division (a minimal illustrative sketch of this arithmetic appears after the topic list below). Our task in the next few hours is to explain what the concepts of similarity and plausibility could be, and the methods by which the things described above can be assessed. It turns out that this does not require any difficult-to-understand definitions or complicated formulas. True, some things still need to be formulated as clearly as possible and convincingly substantiated. However, let us promise: behind it all there are humanly comprehensible explanations, arguments and calculations (necessary for assessing similarity) that are within the abilities of so-called ordinary people. To this end, we look at and explain the interplay between algebra and logic, so that the concept of an ordered set, or system, can be used to make sense of and evaluate the similarity of situations and the similarity of developments.

  1. Situations as algebraic systems, and developments as algebraic systems in which situations are elements
  2. Describing situations and developments through statements. Making statements using logic formulas
  3. Two types of similarity: structural similarity and descriptive similarity
  4. An algebraic bridge between the two types of similarity
  5. Numerical assessment of similarity
  6. Linking the assessment of similarity of developments to the plausibility of situations
  7. Using the link between credibility and similarity assessments to support human decision-making
  8. Problems worth investigating
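
As a quick illustration of the promised arithmetic, the sketch below (Python) computes a ratio-style similarity score from counts of matched elements: two additions of natural numbers followed by one division. It is illustrative only; the situations, the matching rule, and the exact measure are assumptions here and are developed properly in the tutorial.

    # Illustrative only: a ratio-style similarity score built from "two additions
    # of natural numbers and one division". The data and the matching rule are
    # hypothetical; the tutorial defines its own measure and equalization of elements.

    def similarity(a: set, b: set) -> float:
        """Matched elements counted on each side, summed, divided by the combined size."""
        matched_in_a = sum(1 for x in a if x in b)    # first count
        matched_in_b = sum(1 for y in b if y in a)    # second count
        return (matched_in_a + matched_in_b) / (len(a) + len(b))   # two additions, one division

    # Two situations described by the statements that hold in them (hypothetical data).
    situation_1 = {"road is icy", "visibility is low", "traffic is heavy"}
    situation_2 = {"road is icy", "visibility is low", "traffic is light"}
    print(similarity(situation_1, situation_2))       # 4 / 6 ≈ 0.67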

Expected results of the lectures

  1. Ability to see situations and developments as structured collections or systems
  2. Ability to treat structured sets as algebraic systems
  3. Ability to treat statements characterizing structured sets as formulas of the theory of a suitable algebraic system
  4. Getting acquainted with the main types of structural similarity of systems: homomorphism and isomorphism
  5. Relationship between homomorphism and positive formulas
  6. Numerical tools for assessing structural similarity and their implementation
  7. The equalization necessary for assessing the similarity of descriptions composed of claims
  8. Equalization tables and numerical evaluation of descriptive similarity
  9. Relating numerical estimates of similarity to the plausibility of past and future developments
  10. Examples explored with participants

Peeter Lorents

Instructor's biography: Mathematician Peeter Lorents works as a professor at the Estonian Business School and as an associate professor at the IT College of Tallinn University of Technology. His research began at the Institute of Cybernetics of the Estonian Academy of Sciences and was related to hierarchies of recursive function classes (the results of which were recognized as worthy of a PhD by the Faculty of Mathematics of St. Petersburg University). Over the years, his interests shifted to the algebraic and logical aspects of systems intelligence, especially the treatment of the nature of knowledge (data and information) based on the fundamental binary notation-denotation relation between entities, as well as the structural and descriptive aspects of similarity and their application to practical support of decision-making. In addition to research and teaching, Peeter Lorents has held high state positions (Prime Minister's advisor, MEP, Chairman of the Parliamentary Defense Committee). He formed and chaired a project team to establish the NATO Cooperative Cyber Defence Centre of Excellence and later headed the research and development branch of the centre. He also contributed to NATO's cyber defense for many years as a member of the relevant groups in the NATO Science and Technology Organization.


Tutorial 2: Getting Support Right: User and Use System Testing Using a Work-Centered Approach

Instructors:
Ann Bisantz, Ph.D., Professor of Industrial and Systems Engineering, University at Buffalo, Buffalo, NY
Emilie Roth, Ph.D., Owner and Principal Scientist, Roth Cognitive Engineering, Stanford, CA

Time: Friday, May 14 17:00 - 20:30 (GMT)

Abstract: A critical component of any system design process is verification that the designed system meets operational objectives. Decision-support and similar aiding systems are designed based on goals of how they will improve human performance. In the cognitive engineering tradition, these goals are regarded as hypotheses that need to be tested. Robust user testing is required to uncover whether or not hypothesized benefits are realized, to identify unsupported aspects of performance, and finally to reveal unanticipated side-effects of introducing the new technology that need to be addressed. This need is particularly important in evaluating systems which support high-consequence, challenging work in environments characterized by both risk and uncertainty. This tutorial will introduce attendees to a robust method of system evaluation, Work-Centered Evaluation, that has been developed and refined through the design and evaluation of aiding systems in domains including military command and control, and health care. The methodology includes evaluations of the underlying model of work support as well as the surface features of the interface, through in-depth, scenario-based testing with system experts. The tutorial will provide detailed descriptions, as well as examples, of how to deploy Work-Centered Evaluation as part of the iterative systems design process.

Ann Bisantz

Instructor's biography: Ann M. Bisantz, PhD is a Professor of Industrial and Systems Engineering at the University at Buffalo, State University of NY. She has over twenty years of experience in research and applications in areas of cognitive engineering and interface design, particularly in domains of health care and military command-and-control. Her focus includes methods of cognitive engineering; cognitive work analysis; trust in automation; decision-making modeling and support; and displaying uncertain information. Most recently, she has collaborated with the National Center for Human Factors in Healthcare on the design of displays intended to support shared communication and awareness among Emergency Medicine Clinicians. She is a past recipient of an NSF Career Award and a SUNY Chancellor’s Award for Research and Creativity; and is a Fellow of the Human Factors and Ergonomics Society. Dr. Bisantz is the past chair of the Industrial and Systems Engineering department and since 2018 has served as Dean of Undergraduate Education for the University at Buffalo.

Emilie M. Roth

Instructor's biography: Emilie M. Roth, Ph.D. is owner and principal scientist of Roth Cognitive Engineering. She is a cognitive psychologist by training, and has over 30 years of experience in cognitive analysis and design in a variety of domains including nuclear power plant operations, railroad operations, military command and control, and healthcare. She has supported design of first-of-a-kind systems including next-generation nuclear power plant control rooms; and work-centered support systems for airlift planning and monitoring for USTRANSCOM and the Air Mobility Command. This has included development and execution of work-centered evaluations of prototype cognitive support systems across multiple domains. She is an associate editor of the Journal of Cognitive Engineering and Decision Making; a fellow of the Human Factors and Ergonomics Society; and currently serves as a member of the Board on Human-Systems Integration at the National Academies.


Tutorial 3: Introduction to Mission-Centric Cyber Security Situation Management

Instructor:
Dr. Gabriel Jakobson, Chief Scientist, CyberGem Consulting, USA

Time: Saturday, May 15 13:00 - 16:30 (GMT)

Abstract: Until recently, IT-centricity was the prevailing paradigm in building cyber security systems. According to this paradigm, cyber security measures were organized around three major goals: confidentiality, integrity, and availability of IT assets. The success of cyber security measures was determined by the level of protection achieved for the IT network infrastructure components (routers, servers, etc.) and IT software assets (operating systems, data resources, application programs, etc.). Although the IT-centric approach is still widely used, its weakness has become obvious with the deployment of large IT infrastructures, where it is technically and economically unjustifiable to seek a high level of protection for all IT components. Despite massive efforts to secure cyberspace, situation control of complex dynamic systems, be they military or exploratory missions, manufacturing or transportation business processes, or operations of other complex systems, inevitably experiences numerous security incidents. At the same time, it is hard to argue against the position that the ultimate goal of cyber security operations is the protection of ongoing and planned missions. Consequently, the answer to the question “How well did we succeed in protecting our cyber assets?” really depends on how well we succeeded in reaching the operational goals set for the missions, even if the IT assets that serve the missions might be under cyber attack. We will call this the principle of mission-centric cyber security situation management. According to this principle, the scope and depth of the cyber assets to be protected is a dynamic function of the successful completion of the mission and business processes that rely on the corresponding IT assets and services. This tutorial gives an introduction to mission-centric cyber security situation management. We will show that the quality of mission protection depends on how well we are able to model the dependencies between the cyber situations happening on IT assets and the physical situations happening at the level of real-time managed missions. Correct and complete understanding of those dependencies, and the timely ability to modify and reconfigure them, allows mitigation of the impact of cyber attacks on current and future missions. The tutorial presents conceptual frameworks, models and algorithms for assessing the impact of cyber attacks on cyber assets, services, and missions. We will describe different models of cyber attacks and their impact, both on real-time cyber security situations and on plausible future situations. The tutorial outlines the architecture and the design of the major components of a mission-centric cyber security situation management system, and gives an overview of practical applications of the described methods.
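
To make the idea of dependency-based impact assessment concrete, here is a minimal, hedged sketch in Python: cyber assets support services, services support missions, and the impact of a compromised asset is propagated along the dependency edges. The graph, the names, and the propagation rule are hypothetical assumptions for illustration; the tutorial presents its own models and algorithms.

    # Hypothetical sketch of mission-centric impact propagation over a dependency graph.
    from collections import defaultdict

    # "dependent relies on supporter" edges, stored as supporter -> set of dependents.
    dependents = defaultdict(set)
    for supporter, dependent in [
        ("router-1", "msg-service"),          # cyber asset -> service
        ("server-A", "msg-service"),
        ("server-A", "planning-service"),
        ("msg-service", "mission-alpha"),     # service -> mission
        ("planning-service", "mission-beta"),
    ]:
        dependents[supporter].add(dependent)

    def impacted(compromised):
        """Return every service and mission reachable from the compromised assets."""
        seen, stack = set(), list(compromised)
        while stack:
            node = stack.pop()
            for dep in dependents[node]:
                if dep not in seen:
                    seen.add(dep)
                    stack.append(dep)
        return seen

    print(sorted(impacted({"server-A"})))
    # ['mission-alpha', 'mission-beta', 'msg-service', 'planning-service']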

Gabriel Jakobson

Instructor's biography: Dr. Gabriel Jakobson is VP and Chief Scientist of CyberGem Consulting, a consulting company specializing in situation management technologies for defense, cyber security, and enterprise management applications. During his more than 20-year tenure at Verizon (formerly GTE), he held positions of increasing responsibility, leading advanced database, expert systems, artificial intelligence, and telecommunication network management programs. Prior to that, he was a Senior Researcher at the Institute of Cybernetics, Tallinn, Estonia, conducting research on knowledge-based systems. Dr. Jakobson has authored a monograph and over 140 technical publications and has been awarded five US patents on situation management and real-time event correlation. He received a PhD degree in Computer Science from the Institute of Cybernetics, Estonia. Dr. Jakobson holds an honorary degree of Doctor Honoris Causa from Tallinn University of Technology, Estonia, and from 2006 to 2012 was an IEEE Distinguished Lecturer. He has given lectures and tutorials in more than 20 countries worldwide. Dr. Jakobson is chair of the Technical Committee on Cognitive Situation Management of the IEEE Systems, Man, and Cybernetics Society.


Tutorial 4: Interdependence and Vulnerability in Systems: Applying Theory to Define Situations for Autonomous Systems

Instructor:
William F. Lawless, Professor of Mathematics, Sciences and Technology and Professor of Social Sciences, Paine College, USA

Time: Saturday, May 15 17:00 - 20:30 (GMT)

Abstract: Interdependence is an umbrella term for the phenomena that transmit all social effects in the form of interference (e.g., the synergism that produces emergence; the dysergism that produces divorce, splits and conflict; and the asynergism that destabilizes opponent defenses; together, these phenomena combine into forces that drive local change, organizational restructuring, or even political and possibly social evolution). In contrast to our theory of interdependence, social interdependence theory has a long and hopeful history, culminating in the homilies of peace and harmony that replace Darwin’s “survival of the fittest” with new ageism’s “survival of the friendliest.” Unfortunately, the results of this theory cannot be generalized. The primary weakness of social interdependence theory, developed largely with aggregations that sum the choices of individuals, is its limited ability to predict outcomes in natural social settings and, more relevant here, its inability to establish fundamental science and engineering relationships for the design, operational guidance and metrics of the rapidly approaching age of autonomous human-machine teams and systems.
In contrast, by relying on the interdependent effects found in human-team studies (e.g., the best science research teams are highly interdependent), by relying on field studies, and by adopting state-dependent effects in theory (quantum-like), including Schrödinger’s and Lewin’s separately derived concept of the “whole being greater than the sum of its parts,” which fits nicely in modern Systems Engineering, our revised theory of interdependence has guided us to make several predictions along with these supporting findings in the field that:

  • redundant team members in teams and organizations impede performance and increase the likelihood of corruption and accidents;
  • boundaries mathematically distinguish Shannon’s information theory for factorable entities (H(A,B) ≥ H(A), H(B)) from Von Neumann’s non-factorable entities, known as subadditivity (S(A,B) ≤ S(A) + S(B)), accounting for Schrödinger’s and Lewin’s speculations (see the numerical sketch after this list);
  • an intelligent organism can act independently as an individual, or as the member of a group, but not both simultaneously, accounting for the failure of complementarity theory in social science and game theory, forcing traditional game theory to adopt implicit preferences and reducing its ability to generalize;
  • the value of intelligence in the form of a nation’s higher education for all of its citizens increases the nation’s ability to innovate as indicated by the number of productive patents it produces;
  • facing uncertainty, humans weigh the choice of a path forward by engaging in debates, supporting AI scientists’ claim that machines must be able to express their intentions and actions in a causal language humans understand (viz., using artificial intelligence, or AI);
  • most situations are either well-defined, defined beforehand, or defined for competitors, but what steps should be taken when all paths forward are uncertain?
  • over-reliance on convergence processes, especially in computational systems, leads to poorer decisions, misleading conclusions, or possibly more accidents; and, with the discovery of an entirely new field of research, that:
  • a sense of vulnerability motivates teams and organizations to pursue avoidance behaviors (e.g., mergers or spin-offs), to engage in exploitative behavior (e.g., direct attacks among competitors), or to create vulnerability in opponents (e.g., with the use of deception).
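
The entropy contrast in the second bullet can be checked numerically. The sketch below (Python with NumPy, a hedged illustration that is not part of the tutorial materials) shows that for two independent fair coins the Shannon joint entropy is never below either marginal entropy, whereas for a maximally entangled Bell pair the von Neumann joint entropy drops below the marginal entropies, even though subadditivity still holds.

    import numpy as np

    def shannon_entropy(p):
        """Shannon entropy H (bits) of a discrete probability distribution."""
        p = p[p > 0]
        return float(-np.sum(p * np.log2(p)))

    def von_neumann_entropy(rho):
        """Von Neumann entropy S (bits) of a density matrix."""
        evals = np.linalg.eigvalsh(rho)
        evals = evals[evals > 1e-12]
        return float(-np.sum(evals * np.log2(evals)))

    # Factorable (classical) case: two independent fair coins A and B.
    pA = np.array([0.5, 0.5])
    pAB = np.outer(pA, pA).ravel()                     # joint distribution of (A, B)
    print(shannon_entropy(pAB), shannon_entropy(pA))   # 2.0 and 1.0: H(A,B) >= H(A), H(B)

    # Non-factorable (quantum) case: a maximally entangled Bell pair.
    bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)               # (|00> + |11>) / sqrt(2)
    rho_AB = np.outer(bell, bell)                                     # joint density matrix
    rho_A = np.trace(rho_AB.reshape(2, 2, 2, 2), axis1=1, axis2=3)    # partial trace over B
    print(von_neumann_entropy(rho_AB), von_neumann_entropy(rho_A))    # 0.0 and 1.0: S(A,B) < S(A),
                                                                      # yet S(A,B) <= S(A) + S(B) holds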

The latter discovery, the sense of vulnerability in the “self” or promoted in the “other,” appears to be key to the survival of teams and systems in nature by promoting resilience, leanness, and adaptiveness.
As part of our ongoing program of research, we propose to advance the theory of interdependence by providing a mathematical model of vulnerability in a team or system, how it is identified or created, how it is exploited, and how to avoid the vulnerability arising with a false sense of security derived from relying on convergence processes alone, both socially and computationally.
For this CogSIMA tutorial, we will cover the definition and uniqueness of interdependence, why it is difficult to handle in the laboratory, details for the supporting research noted above, mathematical models of autonomous human-machine teams, the new field of structural vulnerability, and some of the known problems that remain to be solved.

William Lawless

Instructor's biography: William F. Lawless was a mechanical engineer in charge of nuclear waste management in 1983 when he blew the whistle on the Department of Energy’s (DOE) mismanagement of its military radioactive wastes. For his PhD topic on group dynamics, he theorized about the causes of tragic mistakes made by large organizations with world-class scientists and engineers. After his PhD in 1992, DOE invited him to join its citizen advisory board (CAB) at DOE’s Savannah River Site (SRS), Aiken, SC. As a founding member of DOE's SRS CAB, he coauthored numerous recommendations on environmental remediation from radioactive wastes (e.g., the regulated closure in 1997 of the first two high-level radioactive waste tanks in the USA, possibly the world). He was the SRS CAB co-technical advisor on incineration, 2000-03, and technical advisor in 2009. He was a member of the European Trustnet hazardous decisions group. He is a senior member of IEEE. His research today is on the metrics for, and entropy generation by, autonomous human-machine teams (A-HMT). He is the lead editor of five books (Springer 2016; 2017; CRC 2018; Elsevier 2019; 2020). He was the lead organizer of a 6-article special issue on “human-machine teams and explainable AI” by AI Magazine (2019). He was a co-editor for the Naval Research & Development Enterprise (NRDE) Applied Artificial Intelligence Summit, October 2018, San Diego. He serves on the Office of Naval Research's two Advisory Boards for the Science of Artificial Intelligence and Command Decision Making (2018-present). He has authored or co-authored over 80 articles and book chapters, over 150 peer-reviewed proceedings and received almost $2 million in research grants. He has co-organized twelve AAAI symposia at Stanford (2020: AI welcomes Systems Engineering: Towards the science of interdependence for autonomous human-machine teams; https://aaai.org/Symposia/Spring/sss20symposia.php#ss03).

News

We are pleased to announce that Dr. Mica Endsley will present a keynote address on Situation Awareness & Automation: The Boeing 737-Max8.

Registration is now open.
Registration fees and info are available here.
Author registration deadline is March 31. Please register here.

New Conference Dates: May 14-22, 2021.

Mar 26: Prof. Peeter Lorents will instruct a tutorial on The Journey from Similar to Plausible Situations: Human and Mathematical Aspects.
Feb 22: Dr. Gabriel Jakobson (CyberGem Consulting, USA) will instruct a tutorial on Introduction to Mission-Centric Cyber Security Situation Management.
Jan 7: Prof. Katia Sycara (Carnegie Mellon University, USA) will present a keynote address. 
Nov 19: Dr. William D. Casebeer (Riverside Research’s Open Innovation Center, USA) will present a keynote address on Human-Machine Teaming: Evolution or Revolution, and the Ethical Dimensions of Cyborgs.
Nov 16: Deadline Extension: Paper submissions are due Jan 11, 2021.
Nov 16: Prof. Susan Stepney (University of York, UK) will present a keynote address on Computation as a dynamical system.
Nov 02: Prof. Ann Bisantz (University at Buffalo, USA) and Dr. Emilie Roth (Roth Cognitive Engineering, USA) will instruct a tutorial on Getting Support Right: User and Use System Testing using a Work-centered Approach.
Oct 21: Prof. William Lawless (Paine College, USA) will instruct a tutorial on Interdependence and vulnerability in systems: Applying theory to define situations for autonomous systems. 
Sep 24: The IEEE CogSIMA official website is online! 

Sponsors and Patrons

IEEE · IEEE Systems, Man, and Cybernetics Society (SMC) · ISIF · Tallinn University of Technology (TalTech)


Related Conferences & Organizations