Early-Career Spotlight Invited Talks

Session 1 — Aug 24th at 14:10 EDT @ Stadium
Markus Brill
Technische Universität Berlin
From Computational Social Choice to Digital Democracy
Digital Democracy (aka e-democracy or interactive democracy) aims to enhance democratic decision-making processes by utilizing digital technology. A common goal of these approaches is to make collective decision-making more engaging, inclusive, and responsive to participants’ opinions. For example, online decision-making platforms often provide much more flexibility and interaction possibilities than traditional democratic systems. The successful design of digital democracy systems undoubtedly presents a multidisciplinary research challenge. I argue that tools and techniques from computational social choice should be employed to aid the design of online decision-making platforms and other digital democracy systems.
Clément Carbonnel
Frontiers of Tractability in Constraint Satisfaction Problems
Jana Doppa
Washington State University, Pullman, WA
Adaptive Experimental Design for Optimizing Combinatorial Structures
Scientists and engineers in diverse domains need to perform expensive experiments to optimize combinatorial spaces, where each candidate input is a discrete structure (e.g., sequence, tree, graph) or a hybrid structure (a mixture of discrete and continuous design variables). For example, in hardware design optimization over the locations of processing cores and the communication links for data transfer between cores, evaluating each design involves performing a computationally expensive simulation. These experiments are often performed in a heuristic manner by humans, without any formal reasoning. In this paper, we first describe the key challenges in solving these problems in the framework of Bayesian optimization (BO) and our progress over the last five years in addressing these challenges. We also discuss exciting sustainability applications in domains including electronic design automation, nanoporous materials, biological sequence design, and electric transportation systems.
Angelika Kimmig
KU Leuven
Reasoning and Learning in Rich Uncertain Domains
Reasoning, learning, and decision making in complex, uncertain domains is central to many applications of artificial intelligence and data science, e.g., in robotics, natural language processing, social networks, bioinformatics, smart sensor networks, and automatic knowledge acquisition and integration from structured and unstructured sources such as databases and webpages. An AI system supporting a human expert in such a domain not only needs to be able to combine a diverse range of inputs, but also needs to explicitly handle uncertainty and to support inspection and explanation, that is, insight into why the system gives certain answers and which assumptions it made to arrive at its conclusions. My research vision is to develop a unified framework for such tasks, with, on the one hand, a language with rigorously defined semantics and, on the other hand, a system that frees users from the need to understand the details of the underlying algorithms and machinery and instead lets them focus on the task of interest. I build especially upon work around the probabilistic logic programming language and system ProbLog; cf. https://dtai.cs.kuleuven.be/problog.
Session 2 — Aug 25th at 14:30 EDT @ Stadium
Guni Sharon
Texas A&M University, College Station, TX
Alleviating Road Traffic Congestion with Artificial Intelligence
This paper reviews current AI solutions for alleviating road traffic congestion. Three specific AI technologies are discussed: (1) intersection management protocols for coordinating vehicles through a road intersection in a safe and efficient manner, (2) road pricing protocols that induce optimized traffic flow, and (3) partially or fully autonomous driving that can stabilize traffic flow and mitigate adverse traffic shock waves. The paper briefly presents the challenges associated with each of these applications, along with an overview of state-of-the-art solutions. Finally, real-world implementation gaps and challenges are discussed.
Yizhou Sun
University of California
Leman Akoglu
Carnegie Mellon University
Anomaly mining is an important problem that finds numerous applications in various real-world domains such as environmental monitoring, cybersecurity, finance, healthcare, and medicine, to name a few. In this article, I focus on two areas: (1) point-cloud and (2) graph-based anomaly mining. I aim to present a broad view of each area, and discuss classes of main research problems, recent trends, and future directions. I conclude with key take-aways and overarching open problems. Disclaimer: I try to provide an overview of past and recent trends in both areas within four pages. Undoubtedly, this is my personal view of the trends, which could be organized differently. For brevity, I omit all technical details and refer to the corresponding papers. Again, due to the space limit, it is not possible to include all (or even the most relevant) references, only a few representative examples.
Yair Zick
University of Massachusetts, Amherst, MA
Towards Fair and Transparent Algorithmic Systems
My research in the past few years has focused on fostering trust in algorithmic systems. I often analyze scenarios where a variety of desirable trust-oriented goals must be simultaneously satisfied; for example, ensuring that an allocation mechanism is both fair and efficient, or that a model explanation framework is both effective and differentially private. This interdisciplinary approach requires tools from a variety of computer science disciplines, such as game theory, economics, machine learning, and differential privacy.
Session 3 — Aug 25th at 23:00 EDT @ Stadium
Naoto Yokoya
University of Tokyo
Computational Imaging and Vision from Space
Shivaram Kalyanakrishnan
Indian Institute of Technology Bombay
Intelligent and Learning Agents: Four Investigations
My research is driven by my curiosity about the nature of intelligence. Of the several aspects that characterise the behaviour of intelligent agents, I primarily study sequential decision making, learning, and exploration. My interests also extend to broader questions on the effects of AI on life and society. In this paper, I present four distinct investigations drawn from my recent work, which range from theoretical to applied, and which involve both analysis and design. I also share my outlook as an early-career researcher.
Yu-Feng Li
Nanjing University
Safe Weakly Supervised Learning
Weakly supervised learning (WSL) refers to learning from a large amount of weakly supervised data. This includes i) incomplete supervision (e.g., semi-supervised learning); ii) inexact supervision (e.g., multi-instance learning); and iii) inaccurate supervision (e.g., label-noise learning). Unlike supervised learning, which typically achieves performance improvements with more labeled data, WSL may sometimes even degrade in performance with more weakly supervised data. It is thus desirable to study safe WSL, which can robustly improve performance with weakly supervised data. In this article, we share our understanding of the problem, from in-distribution data to out-of-distribution data, and discuss possible ways to alleviate it from the perspectives of worst-case analysis, ensemble learning, and bi-level optimization. We also share some open problems to inspire future research.
Nir Lipovetzky
School of Computing and Information Systems, University of Melbourne, Australia
Width-Based Algorithms for Common Problems in Control, Planning and Reinforcement Learning
Width-based algorithms search for solutions through a general definition of state novelty. These algorithms have been shown to yield state-of-the-art performance in classical planning, and have been successfully applied to model-based and model-free settings where the dynamics of the problem are given through simulation engines. The performance of width-based algorithms is understood theoretically through the notion of planning width, which provides polynomial guarantees on their runtime and memory consumption. To facilitate synergies across research communities, this paper summarizes the area of width-based planning and surveys current and future research directions.
Session 4 — Aug 26th at 23:00 EDT @ Stadium
Qi Liu
University of Science and Technology of China
Towards a New Generation of Cognitive Diagnosis
Cognitive diagnosis is a type of assessment for automatically measuring individuals’ proficiency profiles from their observed behaviors, e.g., quantifying the mastery level of examinees on specific knowledge concepts/skills. As one of the fundamental research tasks in domains like intelligent education, a number of Cognitive Diagnosis Models (CDMs) have been developed in the past decades. Though these solutions are usually well designed based on psychometric theories, they still suffer from the limited ability of handcrafted diagnosis functions, especially when dealing with heterogeneous data. In this paper, I will share my personal understanding of cognitive diagnosis and review our recent developments of CDMs, mostly from a machine learning perspective. Meanwhile, I will show the wide application of cognitive diagnosis in recommender systems, adaptive learning, and computerized adaptive testing.
Zhiyuan Liu
Tsinghua University
Knowledgeable Natural Language Processing
As human language is closely related to human intelligence, Alan Turing in 1950 raised the question “can machines think?” and formally proposed the “Turing Test”, based on language conversation [Turing, 1950]. After the Dartmouth Conference in 1956, natural language processing (NLP) gradually became key to passing the Turing Test and achieving artificial intelligence. In the past decades, breakthroughs have continuously been made in the NLP field. From early syntactic and statistical methods to the latest deep neural methods, NLP techniques have also been constantly evolving. Taking a deep look into this history, we can find a line running through the whole NLP research spectrum: a closed loop of knowledge, comprising knowledge representation, knowledge acquisition, and knowledge application for language understanding. Hence, from the perspective of knowledge, we introduce a new framework to revisit existing efforts in NLP, namely “knowledgeable natural language processing”. We will first introduce various kinds of knowledge for language understanding, then show the overall framework of knowledgeable machine learning for NLP, and finally discuss the new trends in knowledge use after the emergence of large-scale pre-trained language models.