In today's datacenters, storage and network resources are shared to lower costs and achieve better utilization, but how does one meet tail latency performance goals in these shared environments? Workloads exhibit different behaviors (e.g., burstiness, load) and can have different latency requirements. Furthermore, storage and networks each exhibit unique performance characteristics that make it hard to provide latency guarantees. For example, in SSD storage devices, writes are slower than reads, while in networks, packets traverse a series of queues and congest with different workloads at each queue.

In this talk, I will describe my thesis work on meeting tail latency goals when sharing storage and networks. Our work approaches the problem through scheduling policies, admission control, and workload placement. In particular, we implement new policies and mechanisms for controlling congestion among multiple workloads of varying burstiness. Our system is unique in that it incorporates mathematically grounded techniques such as Deterministic Network Calculus (DNC) and Stochastic Network Calculus (SNC) to guarantee tail latency; in fact, we are the first to implement SNC in a computer system.
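For context, a standard DNC result (a textbook bound, not specific to our system) shows how such worst-case guarantees are expressed: if a workload's arrivals conform to a token-bucket arrival curve and the shared server offers a rate-latency service curve, the delay is bounded by the maximum horizontal deviation between the two curves.

\[
\alpha(t) = \sigma + \rho t, \qquad \beta(t) = R\,(t - T)^{+}, \qquad
d_{\max} \;\le\; h(\alpha, \beta) \;=\; T + \frac{\sigma}{R} \quad (\text{provided } \rho \le R).
\]

SNC, roughly speaking, replaces this worst-case statement with a probabilistic one of the form P[delay > d] <= epsilon, which is what makes percentile (tail) guarantees possible under statistical multiplexing.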

Thesis Committee:
Mor Harchol-Balter (Chair)
Gregory R. Ganger
David G. Andersen
Michael A. Kozuch (Intel Labs)
Arif Merchant (Google)

Though only a small part of the body, the human hand is complex and remarkably versatile. Much work has gone into understanding the hand: its physical capabilities, how humans develop manipulation skills over the lifespan, how people translate task requirements into grasping strategies, and so on. Despite this, human manipulation is still not well understood. For example, how many grasps or manipulation actions do people use in daily life? How often, and under what circumstances, do people use both hands simultaneously instead of one? A better understanding of how humans grasp can improve our ability to control robotic hands, which still lag far behind human hands in dexterity.

In our work, we have used a variety of methods to observe how humans grasp and manipulate objects in natural, everyday settings: photos taken throughout a normal day; high-framerate video in a specific setting (a convenience store); and cameras and motion-capture systems in a controlled experiment in which participants transported a bowl from one location to another. In these studies we found that a single grasp pose can serve a variety of actions, observed the grasping process in fine detail, and found that minimizing body rotation plays a large role in whether one hand or two is used in transport tasks.

We propose applying the main findings of these studies to improving the success and naturalness of grasping performed by robotic hands and virtual characters. In particular, we propose using the detailed grasping behavior captured in the high-framerate video to create a virtual hand controller capable of levering objects up into the hand. We also propose using the results of the bowl-transport experiment to create a character whose transporting behavior looks natural and believable.

This work thus presents the results and insights from investigations of human manipulation and lays out ways in which those insights can be used to improve the capabilities of artificial manipulators.

Thesis Committee:
Nancy Pollard (Chair)
Jessica Hodgins
Roberta Klatzky
Carol O'Sullivan (Trinity College, Dublin)

Increasingly, decisions and actions affecting people's lives are determined by automated systems processing personal data. Excitement about these systems has been accompanied by serious concerns about their opacity and the threats they pose to privacy, fairness, and other values. Given these concerns, it is important to make real-world automated decision-making systems accountable for privacy and fairness by enabling them to detect and explain violations of these values. System maintainers can then use such accounts to repair systems and avoid future violations with minimal impact on utility goals.

In this thesis, I aim to develop theories and tools for analyzing information use that enable practical accountability mechanisms and ensure data-driven systems respect meaningful privacy and fairness properties. In particular, I focus on two forms of information use: (i) explicit use, the direct causal influence of information, and (ii) proxy use, the indirect use of information through associations. In prior work, I have developed methods for detecting and explaining explicit information use. In this proposal, I will address issues due to covariate shifts in causal testing of machine-learned systems. Further, I will focus on a new formalization of proxy use, and tools for its detection and repair. Finally, I will explore theories of use privacy and proxy non-discrimination built on top of this formalization of proxy use.
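As a rough illustration of the two ingredients behind proxy use, association with a protected attribute and causal influence on the decision, the sketch below estimates both for a candidate feature on synthetic data. The data, model, and thresholds are hypothetical; this is not the formalization developed in the thesis.

# Illustrative sketch only: the data, model, and thresholds are hypothetical,
# not the thesis's formalization of proxy use.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: protected attribute s, a correlated feature z, an unrelated feature x.
n = 10_000
s = rng.integers(0, 2, size=n)                   # protected attribute
z = np.where(rng.random(n) < 0.9, s, 1 - s)      # strongly associated with s
x = rng.integers(0, 2, size=n)                   # independent of s

def model(x, z):
    # A decision rule that uses z, and hence s, indirectly through association.
    return ((x + z) >= 1).astype(int)

def mutual_information(a, b):
    """Empirical mutual information (in bits) between two binary arrays."""
    mi = 0.0
    for va in (0, 1):
        for vb in (0, 1):
            p_ab = np.mean((a == va) & (b == vb))
            p_a, p_b = np.mean(a == va), np.mean(b == vb)
            if p_ab > 0:
                mi += p_ab * np.log2(p_ab / (p_a * p_b))
    return mi

def causal_influence(feature_values, other_values):
    """How often does randomly re-drawing the feature flip the model's output?"""
    y = model(other_values, feature_values)
    y_intervened = model(other_values, rng.permutation(feature_values))
    return np.mean(y != y_intervened)

assoc = mutual_information(z, s)   # association of z with the protected attribute
infl = causal_influence(z, x)      # causal influence of z on the decision
print(f"association(z, s) = {assoc:.3f} bits, influence(z) = {infl:.3f}")
# If both quantities exceed chosen thresholds, z behaves like a proxy for s.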

Thesis Committee:
Anupam Datta (Chair)
Jaime Carbonell
Matt Fredrikson
Sriram Rajamani (Microsoft Research)
Jeannette Wing (Microsoft Research)

In distributed machine learning, data is dispatched to multiple machines for processing. Motivated by the fact that similar data points often belong to the same or similar classes, and, more generally, that highly accurate classification rules tend to be "locally simple but globally complex" (Vapnik and Bottou, 1993), we propose data-dependent dispatching that takes advantage of such structure. We present an in-depth analysis of this model, providing new algorithms with provable worst-case guarantees, analysis proving that existing scalable heuristics perform well under natural non-worst-case conditions, and techniques for extending a dispatching rule from a small sample to the entire distribution. We overcome novel technical challenges to satisfy important conditions for accurate distributed learning, including fault tolerance and balancedness. We empirically compare our approach with baselines based on random partitioning, balanced partition trees, and locality-sensitive hashing, showing that we achieve significantly higher accuracy on both synthetic and real-world image and advertising datasets. We also demonstrate that our technique scales strongly with the available computing power.
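A minimal sketch of the dispatching idea, with made-up data and plain k-means standing in for the partitioning algorithms analyzed in this work: learn a dispatch rule from a small sample, send each point to the machine owning its nearest center, train a simple local model per machine, and route test queries the same way.

# Toy sketch of data-dependent dispatching; the dispatch rule here is plain
# k-means on a small sample, standing in for the algorithms in the paper.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# "Locally simple but globally complex" toy data: two well-separated regions,
# each with its own simple linear labeling rule.
A = rng.normal((0, 0), 1.0, size=(1000, 2))
B = rng.normal((6, 6), 1.0, size=(1000, 2))
X = np.vstack([A, B])
y = np.concatenate([(A[:, 0] > 0).astype(int), (B[:, 1] > 6).astype(int)])

num_machines = 2
# 1. Learn a dispatch rule from a small sample only, then extend it to all data.
sample = rng.choice(len(X), size=200, replace=False)
dispatcher = KMeans(n_clusters=num_machines, n_init=10, random_state=0).fit(X[sample])

# 2. Send each point to the machine owning its nearest center and train
#    a simple local model on each machine's shard.
shard = dispatcher.predict(X)
local_models = {m: LogisticRegression().fit(X[shard == m], y[shard == m])
                for m in range(num_machines)}

# 3. Route test queries through the same dispatch rule.
X_test = np.array([[0.5, -1.0], [6.0, 7.5]])
preds = [local_models[m].predict(x[None, :])[0]
         for m, x in zip(dispatcher.predict(X_test), X_test)]
print(preds)   # both points satisfy their region's local rule, so likely [1, 1]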

This is joint work with Mu Li, Krishna Pillutla, Colin White, Nina Balcan, and Alex Smola.

Presented in Partial Fulfillment of the CSD Speaking Skills Requirement.

This talk is also part of the AI Lunch Seminar Series.

At the core of Machine Learning (ML) analytics is often an expert-suggested model, whose parameters are refined by iteratively processing a training dataset until convergence. The completion time (i.e., convergence time) and the quality of the learned model depend not only on the rate at which refinements are generated but also on the quality of each refinement. While data-parallel ML applications often employ a loose consistency model when updating shared model parameters to maximize throughput, the accumulated error may seriously degrade the quality of refinements and thus delay completion, a problem that usually gets worse with scale. Although more immediate propagation of updates reduces the accumulated error, this strategy is limited by physical network bandwidth.

In this talk, I will present Bosen, a system that minimizes this inconsistency error by fully utilizing the inter-machine network bandwidth under a given budget and by prioritizing the updates that matter most for convergence. Experiments on various ML applications showed 2-3X improvements in convergence time compared to the previous state-of-the-art synchronization mechanism.
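A toy sketch of the communication-management idea (illustrative only, not Bosen's implementation): given a per-iteration budget of k parameter updates, send the largest-magnitude accumulated deltas first and keep the rest locally until bandwidth allows.

# Toy sketch of bandwidth-budgeted update prioritization; parameter names and
# the magnitude-based priority rule are illustrative, not Bosen's actual code.
import numpy as np

rng = np.random.default_rng(0)

num_params = 1000
budget = 50                     # how many parameter updates fit in this iteration's budget
pending = np.zeros(num_params)  # deltas accumulated locally but not yet propagated

def communicate(pending, budget):
    """Send the largest-magnitude pending deltas first, within the budget."""
    order = np.argsort(-np.abs(pending))
    chosen = order[:budget]
    sent = pending[chosen].copy()
    pending[chosen] = 0.0        # these deltas have now been propagated
    return chosen, sent

for step in range(3):
    # Each worker computes gradient-style updates; most are tiny, a few are large.
    pending += rng.laplace(scale=0.01, size=num_params)
    idx, sent = communicate(pending, budget)
    print(f"step {step}: sent {len(idx)} updates, "
          f"largest |delta| left behind = {np.abs(pending).max():.4f}")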

Presented in Partial Fulfillment of the CSD Speaking Skills Requirement.

Motivated by these observations, we formalize axioms that any node classification algorithm should obey and propose NetConf, which satisfies these axioms and handles arbitrary network effects (homophily/heterophily) at scale. Our contributions are: (1) Axioms: We state axioms that any node classification algorithm should satisfy; (2) Theory: NetConf is grounded in a Bayesian framework to model uncertainties, has a closed-form solution, and comes with precise convergence guarantees; (3) Practice: Our method is easy to implement and scales linearly with the number of edges in the graph. In experiments on real-world data, we consistently match or outperform belief propagation (BP) while requiring less processing time.
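For intuition only, the sketch below runs a generic linearized belief propagation with a compatibility matrix that encodes homophily (or, with flipped signs, heterophily); it illustrates the flavor of propagating class beliefs over a graph, not NetConf's actual update or closed-form solution.

# Generic linearized belief propagation sketch for intuition only; the update
# and compatibility matrix here are illustrative, not NetConf's exact method.
import numpy as np

# Small graph: a path 0-1-2-3-4, with nodes 0 and 4 labeled.
A = np.zeros((5, 5))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4)]:
    A[i, j] = A[j, i] = 1.0

# Centered prior beliefs over 2 classes for the labeled nodes.
E = np.zeros((5, 2))
E[0] = [+0.5, -0.5]   # node 0 leans class 0
E[4] = [-0.5, +0.5]   # node 4 leans class 1

# Centered compatibility matrix: positive diagonal = homophily
# ("linked nodes tend to share a class"); flip the signs for heterophily.
H = 0.1 * np.array([[+1.0, -1.0],
                    [-1.0, +1.0]])

# Iterate the linear update B = E + A @ B @ H toward its fixed point.
B = E.copy()
for _ in range(100):
    B = E + A @ B @ H

print(B.argmax(axis=1))  # labeled nodes anchor their neighbors; the middle node stays near a tie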

Presented in Partial Fulfillment of the CSD Speaking Skills Requirement.

Neural networks are machine learning models represented by continuous programs with many parameters. Those parameters are generally optimized via gradient descent and related methods. However, such methods only tune the parameters of a given program; choosing the program itself remains an open problem. Programs are generally chosen by expert judgment, for computational convenience, or by brute-force search.

I present "nonparametric neural networks", a framework for jointly optimizing program parameters and structure. Under this framework, we alternate between random local changes to the program and gradient-based parameter tuning and thus search over both components jointly. The search is enabled by defining a connected, continuous space over all program-parameter pairs, by penalizing programs according to their size, and by several optimization tricks.

Presented in Partial Fulfillment of the CSD Speaking Skills Requirement.

The theory of approximation algorithms has seen great progress since the 1990s, and the optimal approximation ratio has been determined for many fundamental combinatorial optimization problems. Despite this success on individual problems, we still lack a unified understanding of entire classes of problems. One of the most notable exceptions is an important subclass of CSPs called Max CSP, where a simple semidefinite-programming-based algorithm is provably optimal for every problem in the class under the Unique Games Conjecture. Such a complete understanding is not known for other basic classes such as coloring, covering, and graph cut problems.

This thesis tries to expand the frontiers of approximation algorithms, both in the range of optimization problems considered and in the mathematical tools used for algorithms and hardness. We show tight approximabilities for various fundamental problems in combinatorial optimization beyond Max CSP. The thesis consists of the following five parts:

1. CSPs: We introduce three variants of Max CSP, called Hard CSP, Balance CSP, and Symmetric CSP. Our results show that current hardness theories for Max CSP can be extended to its generalizations (Hard CSP, Balance CSP) to prove much stronger hardness, or can be significantly simplified for a special case (Symmetric CSP).

2. Applied CSPs: Numerous new optimization problems have emerged over the last decade, as computer science has interacted more actively with other fields. We study three problems, Unique Coverage, Graph Pricing, and decoding LDPC codes, motivated by networks, economics, and error-correcting codes respectively. Extending tools for Max CSP, we show nearly optimal hardness results or integrality gaps for these problems.

3. Coloring: We study the complexity of hypergraph coloring problems when instances are promised to have structure much stronger than admitting a proper 2-coloring, and prove both algorithmic and hardness results. For both algorithms and hardness, we give unified frameworks that can be used for various promises.

4. H-Transversal: We study H-Transversal, where, given a graph G and a fixed "pattern" graph H, the goal is to remove the minimum number of vertices from G so that the remaining graph does not contain H as a subgraph. We show an almost complete characterization of the approximability of H-Transversal depending on properties of H. One of our algorithms reveals a new connection between path transversal and graph partitioning.

5. Cut problems: We also study various cut problems on graphs, where the goal is to remove the minimum number of vertices or edges to cut desired paths or cycles. We present a general tool called "length-control dictatorship tests" for proving strong hardness results under the Unique Games Conjecture, which allows us to prove improved hardness results for multicut, bicut, double cut, interdiction, and firefighter problems.

Thesis Committee:
Venkatesan Guruswami (Chair)
Anupam Gupta
Ryan O'Donnell
R. Ravi
Ola Svensson (EPFL)

Concurrent programming presents a challenge to students and experts alike because of the complexity of multithreaded interactions and the difficulty of reproducing and reasoning about bugs. Stateless model checking is a concurrency testing approach that forces a program to interleave its threads in many different ways, checking for bugs each time. This technique is powerful, in principle capable of finding any nondeterministic bug in finite time, but it suffers from exponential explosion of the state space as program size increases. Checking an exponential number of thread interleavings is not a practical or predictable way for programmers to find concurrency bugs before their project deadlines.
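As a minimal sketch of the general technique (not Landslide itself), the example below enumerates every interleaving of two threads' atomic steps, re-runs the program from scratch under each schedule, and checks the result; the classic lost-update race appears in several of the six interleavings.

# Minimal stateless-model-checking sketch: exhaustively enumerate thread
# interleavings and re-run the program under each one. Not Landslide itself.
from itertools import permutations

def run(schedule):
    """Re-execute the program from scratch under one thread schedule."""
    shared = {"count": 0}
    local = {}
    # Each thread increments the counter non-atomically: load, then store.
    steps = {
        t: [lambda t=t: local.__setitem__(t, shared["count"]),      # load
            lambda t=t: shared.__setitem__("count", local[t] + 1)]  # store
        for t in ("A", "B")
    }
    for tid in schedule:
        steps[tid].pop(0)()   # run the next pending step of that thread
    return shared["count"]

# All distinct interleavings of thread A's 2 steps with thread B's 2 steps.
for schedule in sorted(set(permutations("AABB"))):
    result = run(schedule)
    status = "BUG (lost update)" if result != 2 else "ok"
    print("".join(schedule), "->", result, status)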

In this thesis, I propose to make stateless model checking more practical for human use through several new techniques. I have built Landslide, a stateless model checker specializing in student projects for undergraduate operating systems classes. Landslide includes a novel algorithm for automatically managing multiple state spaces according to their estimated completion times, which, as I will show, quickly finds bugs when they exist and quickly verifies correctness otherwise. I will evaluate Landslide's suitability for inexpert use by presenting results from many semesters of providing it to students in 15-410, CMU's Operating System Design and Implementation class. Finally, I will explore broader impact by extending Landslide to test some real-world programs and by making it available to students at other universities.

Thesis Committee:
Garth Gibson (Chair)
David Eckhardt
Brandon Lucia
Haryadi Gunawi (University of Chicago)

Duolingo has become the #1 education app on both Google Play and the Apple App Store. Since its public launch five years ago, the company has created three independent apps on multiple platforms and attracted over 170 million users worldwide.

Much of this success can be attributed to a data-centric and incremental approach to improving the app: everything is A/B tested. While user retention and daily active users are fairly straightforward to measure, a challenge that has presented itself over the years is measuring how well people actually learn a language on our app.
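As a generic illustration of the kind of comparison such experiments involve (the numbers below are hypothetical, and this is not Duolingo's data or methodology), here is a two-proportion z-test on next-day retention for a control group versus a variant.

# Generic two-proportion z-test on made-up retention numbers; the figures are
# hypothetical and this is not Duolingo's data or methodology.
from math import sqrt, erf

def two_proportion_z_test(success_a, n_a, success_b, n_b):
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
    return p_a, p_b, z, p_value

# Hypothetical next-day retention: control vs. a new lesson format.
p_a, p_b, z, p = two_proportion_z_test(success_a=4_100, n_a=10_000,
                                        success_b=4_260, n_b=10_000)
print(f"control {p_a:.1%}, variant {p_b:.1%}, z = {z:.2f}, p = {p:.4f}")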

Presented in Partial Fulfillment of the CSD Speaking Skills Requirement.
