Theory Seminar

COMPSCI 891M-01: Theory Seminar

Meeting Time: Tue., 4-5 pm, CS 140

Organizer: Barna Saha

The theory seminar is a weekly meeting in which topics of interest in the theory of computation, broadly construed, are presented. Sometimes this is new research by visitors or by local researchers; sometimes it is work in progress; and sometimes it is recent or classic material by others that some of us present in order to learn and share.

The goal of all talks in this seminar is to encourage understanding and participation. We would like as many attendees as possible to get a sense of the relevant ideas being discussed, including their context and significance.

Please email me if you would like to give a talk, if you would like to suggest, invite, or volunteer someone else, or if there is a paper or topic that you would like to see covered.

This is a one-credit seminar which may be taken repeatedly for credit.

Schedule, Spring 2016:

 

Tue., Jan. 19  Brief organizational meeting. Please come and help us get organized. Whether or not you can come, please email me with times at which you can or would like to speak.
Tue., Jan. 26 Jon Machta  Spin Glasses: a frustrating problem for statistical physics Slides
Thur., Jan. 28, 1-2 pm  Arya Mazumdar (part of MLFL)  Neural Auto-associative memory via sparse recovery
Thur., Feb. 4, noon-1 pm  Edo Liberty (part of MLFL)  Online Data Mining: PCA and K-Means
Mon., Feb. 8, 11-noon Howard Karloff  On the Optimal BMI
Tue, Feb. 9, 1-2 pm Howard Karloff Variable Selection is Hard
Fri., Feb. 19, 11-noon  Arturs Backurs  Edit Distance Cannot Be Computed in Strongly Subquadratic Time (unless SETH is false)
Tue., Feb. 23 Sainyam Galhotra  Maximizing social influence in nearly optimal time
Wed., Mar. 2  Vasilis Syrgkanis  Learning in Strategic Environments: Theory & Data
Tue., Mar. 8 Andrew McGregor  The Latest on Linear Sketches for Large Graphs
Tue., Mar. 15 Spring Break
Tue., Mar. 22 Eva Mascovisi  New Directions in Cryptography, the seminal paper by Diffie and Hellman
Tue., Apr. 5  Pratistha Bhattarai  b-coloring, its hardness and some algorithms on special restricted graphs.
Tue., Apr. 19  Dan Saunders  Memcomputing Machines
Tue., Apr. 26  Ankit Singh Rawat  New Coding Techniques for Distributed Storage Systems
Tue., May 3rd My Phan & Sophie Koffler

Talk Details:

Jan. 26, 4-5 pm. Spin Glasses: a frustrating problem for statistical physics, Jon Machta, Physics, UMass Amherst

I will describe recent work on spin glasses.  These are models of random magnetic materials with competing interactions.  Finding the ground state of a spin glass is an NP-hard combinatorial optimization problem.  After introducing the Gibbs distribution and several other statistical physics concepts, I will discuss the physics of spin glasses and some open theoretical questions.  I will then describe our numerical simulations of spin glasses.  I will first discuss the population annealing algorithm, an effective sequential Monte Carlo method for sampling the Gibbs distribution, and then present results that shed light on the open theoretical questions and also help explain in physical terms why spin glasses (and perhaps other NP-hard  problems) are computationally difficult.
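To make the objects above concrete, here is a minimal Python sketch (my own illustration, not code from the talk) of an Ising-style spin glass: random couplings, the energy function, and a single-spin-flip Metropolis sampler of the Gibbs distribution. The population annealing algorithm discussed in the talk is a more elaborate sequential Monte Carlo scheme built on top of updates like these.

import numpy as np

rng = np.random.default_rng(0)
n = 16                                    # spins on a ring, for simplicity
J = rng.choice([-1.0, 1.0], size=n)       # random coupling between spin i and spin i+1
spins = rng.choice([-1, 1], size=n)

def energy(s):
    # Spin-glass energy of a 1-D chain with periodic boundary: -sum_i J_i s_i s_{i+1}
    return -np.sum(J * s * np.roll(s, -1))

def metropolis_sweep(s, beta):
    # One sweep of single-spin-flip Metropolis updates at inverse temperature beta,
    # which samples the Gibbs distribution proportional to exp(-beta * energy).
    for i in rng.permutation(n):
        flipped = s.copy()
        flipped[i] *= -1
        dE = energy(flipped) - energy(s)
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            s = flipped
    return s

beta = 2.0
for _ in range(100):
    spins = metropolis_sweep(spins, beta)
print("energy after 100 sweeps:", energy(spins))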

Jan. 28, 1-2 pm. Neural Auto-associative memory via sparse recovery, Arya Mazumdar, UMass Amherst

An associative memory is a structure learned from a dataset M of vectors (signals) in such a way that, given a noisy version of one of the vectors as input, the nearest valid vector from M (the nearest neighbor) is provided as output, preferably via a fast iterative algorithm. Traditionally, neural networks are used to model the above structure. In this talk we propose a model of associative memory based on sparse recovery of signals. Our basic premise is simple. Given a dataset, we learn a set of linear constraints that every vector in the dataset must satisfy. Provided these linear constraints possess some special properties, it is possible to cast the task of finding the nearest neighbor as a sparse recovery problem. Assuming generic random models for the dataset, we show that it is possible to store an exponential number of n-length vectors in a neural network of size O(n). Furthermore, given a noisy version of one of the stored vectors corrupted in a linear number of coordinates, the vector can be correctly recalled using a neurally feasible algorithm.

Instead of assuming the above subspace model for the dataset, we might assume that the data is a sparse linear combination of vectors from a dictionary (sparse coding). This very relevant model poses a significant challenge in designing associative memories and is one of the main problems we will describe. (This is joint work with Ankit Singh Rawat (CMU) and was presented in part at NIPS’15.)
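As a rough illustration of the premise in the first paragraph (a toy sketch under my own assumptions, not the authors' construction): if every stored vector m satisfies Am = 0 for a learned constraint matrix A, then a query y = m + e with a sparse corruption e satisfies Ay = Ae, so recalling m amounts to recovering the sparse e, done here with a basic orthogonal matching pursuit.

import numpy as np

rng = np.random.default_rng(1)
n, r, k = 60, 10, 3                         # ambient dimension, subspace dimension, #corruptions

# The dataset lives in an r-dimensional subspace; A spans its orthogonal complement.
basis = rng.standard_normal((n, r))
A = np.linalg.svd(basis, full_matrices=True)[0][:, r:].T    # (n-r) x n, so A @ basis = 0

m = basis @ rng.standard_normal(r)          # one stored vector
e = np.zeros(n)
e[rng.choice(n, k, replace=False)] = 5.0 * rng.standard_normal(k)
y = m + e                                   # noisy query: m corrupted in k coordinates

def omp(Phi, b, sparsity):
    # Orthogonal matching pursuit: greedily pick the columns of Phi that explain b.
    residual, support = b.copy(), []
    col_norms = np.linalg.norm(Phi, axis=0)
    for _ in range(sparsity):
        scores = np.abs(Phi.T @ residual) / col_norms
        support.append(int(np.argmax(scores)))
        coef, *_ = np.linalg.lstsq(Phi[:, support], b, rcond=None)
        residual = b - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x

e_hat = omp(A, A @ y, k)                    # A @ y equals A @ e, a sparse-recovery instance
print("recall error:", np.linalg.norm((y - e_hat) - m))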

Feb. 4, noon-1 pm. Online Data Mining: PCA and K-Means, Edo Liberty, Yahoo! Research

Algorithms for data mining, unsupervised machine learning and scientific computing were traditionally designed to minimize running time in the batch setting (random access to memory). In recent years, a significant amount of research has been devoted to producing scalable algorithms for the same problems. A scalable solution assumes some limitation on data access and/or the compute model. Some well-known models include MapReduce, message passing, local computation, pass-efficient, streaming, and others. In this talk we argue for the need to consider the online model in data mining tasks. In an online setting, the algorithm receives data points one by one and must make some decision immediately (without examining the rest of the input). The quality of the algorithm’s decisions is compared to the best possible in hindsight. Note that no stochasticity assumption is made about the input. While practitioners are well aware of the need for such algorithms, this setting has mostly been overlooked by the academic community. Here, we will review new results on online k-means clustering and online Principal Component Analysis (PCA).
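For reference, here is a generic online k-means sketch in Python (a standard sequential running-mean update, not the specific algorithm from the talk): each arriving point is assigned to its nearest center immediately, without seeing the rest of the stream, and that center moves toward the point.

import numpy as np

rng = np.random.default_rng(2)

class OnlineKMeans:
    def __init__(self, k, dim):
        self.centers = rng.standard_normal((k, dim))
        self.counts = np.zeros(k)

    def update(self, x):
        # Decide immediately for the arriving point and return its cluster.
        j = int(np.argmin(np.linalg.norm(self.centers - x, axis=1)))
        self.counts[j] += 1
        self.centers[j] += (x - self.centers[j]) / self.counts[j]   # running mean of the cluster
        return j

okm = OnlineKMeans(k=3, dim=2)
stream = rng.standard_normal((1000, 2)) + rng.choice([-4.0, 0.0, 4.0], size=(1000, 1))
for point in stream:
    okm.update(point)
print(okm.centers)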

Bio

Edo Liberty is a research director at Yahoo Labs and leads its Scalable Machine Learning group. He received his BSc in Computer Science and Physics from Tel Aviv University and his PhD in Computer Science from Yale. After a postdoctoral position in the Applied Math department at Yale, he co-founded a New York based startup. Since 2009 he has been with Yahoo Labs. His research focuses on the theory and practice of large-scale data mining.

Feb. 8, 11 am-noon. On the Optimal BMI, Howard Karloff, Goldman Sachs

I will talk about the “optimal” body-mass index (BMI = mass/height^2) as well as alternative formulas for BMI. By treating BMI as a continuous variable and estimating its interaction effects with other demographic and health variables, we compute for each individual a “personalized optimal BMI.” The averages of these personalized optimal BMIs across our study are 25.7 for men and 26.3 for women (both of which are in the “overweight” category).

To answer the question of whether changing the exponent of 2 on height in the BMI formula would give better predictions, we show that the “best” exponent is less than 2 for both men and women. Interestingly, we cannot exclude the possibility that the optimal exponent is 1, which would yield the simple formula mass/height.

This is joint work with Dean Foster and Ken Shirley.
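For concreteness, a tiny sketch of the formulas in question (illustrative only, with made-up numbers): the standard BMI uses exponent 2 on height, and the talk asks whether a different exponent predicts better.

def generalized_bmi(mass_kg, height_m, p=2.0):
    # mass / height**p; p = 2 gives the usual BMI, p = 1 the simpler mass/height alternative.
    return mass_kg / height_m ** p

print(generalized_bmi(80, 1.75))           # standard BMI, about 26.1
print(generalized_bmi(80, 1.75, p=1.0))    # the mass/height alternative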

Feb. 9, 1-2 pm. Variable Selection is Hard, Howard Karloff, Goldman Sachs

Consider the task of a machine-learning system faced with voluminous data on m individuals. There may be p = 10^6 features describing each individual. How can the algorithm find a small set of features that “best” describes the individuals? People usually seek small feature sets both because models with small feature sets are understandable and because simple models usually generalize better.

We study the simple case of linear regression, in which a user has an m x p matrix B and a vector y, and seeks a p-vector x *with as few nonzeroes as possible* such that Bx is approximately equal to y; we call this problem SPARSE REGRESSION. There are numerous algorithms in the statistical literature for SPARSE REGRESSION, such as Forward Selection, Backward Elimination, LASSO, and Ridge Regression.
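To fix ideas, here is a short sketch of Forward Selection, one of the heuristics named above (my own toy implementation and data, not code from the talk): at each step, greedily add the column whose inclusion most reduces the least-squares residual.

import numpy as np

rng = np.random.default_rng(3)
m, p, k = 100, 50, 3
B = rng.standard_normal((m, p))
x_true = np.zeros(p)
x_true[rng.choice(p, k, replace=False)] = rng.standard_normal(k)
y = B @ x_true + 0.01 * rng.standard_normal(m)

def forward_selection(B, y, k):
    support = []
    for _ in range(k):
        best_j, best_rss = None, np.inf
        for j in range(B.shape[1]):
            if j in support:
                continue
            cols = support + [j]
            coef, *_ = np.linalg.lstsq(B[:, cols], y, rcond=None)
            rss = np.linalg.norm(y - B[:, cols] @ coef) ** 2
            if rss < best_rss:
                best_j, best_rss = j, rss
        support.append(best_j)
    coef, *_ = np.linalg.lstsq(B[:, support], y, rcond=None)
    x = np.zeros(B.shape[1])
    x[support] = coef
    return x

x_hat = forward_selection(B, y, k)
print("selected columns:", sorted(np.flatnonzero(x_hat)))
print("true support:    ", sorted(np.flatnonzero(x_true)))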

We give a general hardness proof that (subject to a complexity assumption) no polynomial-time algorithm can give good performance (in the worst case) for SPARSE REGRESSION, even if it is allowed to include more variables than necessary, and even if it need only find an x such that Bx is relatively far from y.

Howard Karloff’s Bio:

After receiving a PhD from UC Berkeley, Howard Karloff taught at the University of Chicago and Georgia Tech before leaving Georgia Tech to join AT&T Labs–Research in 1999. He left AT&T Labs in 2013 to join Yahoo Labs in New York, where he stayed until February 2015. Now he does data science for Goldman Sachs in New York.

A fellow of the ACM, he has served on the program committees of numerous conferences and chaired the 1998 SODA program committee. He is the author of numerous journal and conference articles and the Birkhauser book “Linear Programming.” His interests include data science, machine learning, algorithms, and optimization.

Feb. 19, Time: 11-noon, CS 140. Edit Distance Cannot Be Computed in Strongly Subquadratic Time (unless SETH is false), Arturs Backurs, MIT

The edit distance (a.k.a. the Levenshtein distance) between two strings is defined as the minimum number of insertions, deletions or substitutions of symbols needed to transform one string into another. The problem of  computing the edit distance between two strings is a classical computational task, with a well-known algorithm based on dynamic programming. Unfortunately, all known algorithms for this problem run in nearly quadratic time.
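For reference, the well-known dynamic program mentioned above, which runs in quadratic time and linear space (a standard textbook implementation):

def edit_distance(a: str, b: str) -> int:
    # prev[j] holds the edit distance between the current prefix of a and b[:j].
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i] + [0] * len(b)
        for j, cb in enumerate(b, start=1):
            curr[j] = min(prev[j] + 1,               # delete ca
                          curr[j - 1] + 1,           # insert cb
                          prev[j - 1] + (ca != cb))  # substitute, or match for free
        prev = curr
    return prev[-1]

print(edit_distance("kitten", "sitting"))   # 3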

In this paper we provide evidence that the near-quadratic running time bounds known for the problem of computing edit distance might be tight. Specifically, we show that, if the edit distance can be computed in time $O(n^{2-\delta})$ for some constant $\delta>0$, then the satisfiability of conjunctive normal form formulas with $N$ variables and $M$ clauses can be solved in time $M^{O(1)} 2^{(1-\epsilon)N}$ for a constant $\epsilon>0$. The latter result would violate the Strong Exponential Time Hypothesis, which postulates that such algorithms do not exist.

Joint work with Piotr Indyk.

Bio. Arturs is a fourth year graduate student at MIT. His research topics include fine-grained complexity, metric embeddings and sparse recovery.

Mar. 2, Time: 4-5 pm, CS 151. Learning in Strategic Environments: Theory and Data, Vasilis Syrgkanis, Microsoft Research, New York

Abstract: The strategic interaction of multiple parties with different objectives is at the heart of modern large scale computer systems and electronic markets. Participants face such complex decisions in these settings that the classic economic equilibrium is not a good predictor of their behavior. The analysis and design of these systems has to go beyond equilibrium assumptions. Evidence from online auction marketplaces suggests that participants rather use algorithmic learning. In the first part of the talk, I will describe a theoretical framework for the analysis and design of efficient market mechanisms, with robust guarantees that hold under learning behavior, incomplete information and in complex environments with many mechanisms running at the same time. In the second part of the talk, I will describe a method for analyzing datasets from such marketplaces and inferring private parameters of participants under the assumption that their observed behavior is the outcome of a learning algorithm. I will give an example application on datasets from Microsoft’s sponsored search auction system.
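As a toy illustration of the learning behavior the abstract refers to (a generic multiplicative-weights, i.e. no-regret, learner; not the framework or data from the talk): a bidder with value 1.0 repeatedly picks a bid from a grid in a first-price auction against a uniformly random competing bid, and reweights each bid by its counterfactual utility.

import numpy as np

rng = np.random.default_rng(5)
bids = np.linspace(0.0, 1.0, 11)            # the learner's action set
weights = np.ones_like(bids)
eta, value, T = 0.1, 1.0, 5000

for _ in range(T):
    probs = weights / weights.sum()
    my_bid = rng.choice(bids, p=probs)       # the bid actually placed this round
    competitor = rng.uniform(0.0, 1.0)
    # Full-information feedback: the utility every bid would have earned this round.
    utilities = np.where(bids > competitor, value - bids, 0.0)
    weights *= np.exp(eta * utilities)       # multiplicative-weights update
    weights /= weights.max()                 # rescale to avoid overflow

print("bid distribution after learning:", np.round(weights / weights.sum(), 3))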

Bio: Vasilis Syrgkanis is a postdoctoral researcher at Microsoft Research NYC, where he is a member of the algorithmic economics and machine learning groups. He received his Ph.D. in Computer Science from Cornell University in 2014, under the supervision of Prof. Eva Tardos. His research addresses problems at the intersection of theoretical computer science, machine learning and economics. His work received best paper awards at the 2015 ACM Conference on Economics and Computation (EC’15) and at the 2015 Annual Conference on Neural Information Processing Systems (NIPS’15). He was the recipient of the Simons Fellowship for graduate students in theoretical computer science 2012-2014.

Mar. 8, Time: 4-5 pm. The Latest on Linear Sketches for Large Graphs, Andrew McGregor, UMass Amherst, Computer Science

In this talk, we survey recent work on using random linear projections, a.k.a. sketches, to solve graph problems. Sketches are useful in a variety of computational models, including the dynamic graph stream model where the input is defined by a stream of edge insertions and deletions that need to be processed in small space. A large number of problems have now been considered in this model, including edge and vertex connectivity, sparsification, densest subgraph, correlation clustering, vertex cover, and matching.
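A tiny Python illustration of why linear sketches fit dynamic graph streams (my own toy example, not from the talk): the sketch of the edge-indicator vector is a linear map, so an edge insertion or deletion just adds or subtracts one column of the sketch matrix, and a deletion exactly cancels an earlier insertion.

import numpy as np
from itertools import combinations

rng = np.random.default_rng(4)
n = 6
edges = list(combinations(range(n), 2))       # possible edges, indexing the edge vector
d = 8                                         # sketch dimension, much smaller than len(edges)
S = rng.choice([-1.0, 1.0], size=(d, len(edges)))   # random +-1 sketch matrix

def process(sketch, edge, delta):
    # delta = +1 for an insertion, -1 for a deletion; linearity means one column per update.
    sketch += delta * S[:, edges.index(edge)]

sketch = np.zeros(d)
process(sketch, (0, 1), +1)
process(sketch, (2, 3), +1)
process(sketch, (0, 1), -1)                   # the deletion cancels the earlier insertion
print(np.allclose(sketch, S[:, edges.index((2, 3))]))   # True: only edge (2, 3) remains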

Apr. 26, Time: 4-5 pm. New Coding Techniques for Distributed Storage Systems, Ankit Singh Rawat, CMU, Computer Science

Abstract: Distributed storage systems (a.k.a. cloud storage networks) are becoming increasingly important, given the need to put away vast amounts of data that are being generated, analyzed and accessed across multiple disciplines today. Besides serving as backbone systems for large institutions such as CERN, Google and Microsoft, distributed storage systems have been instrumental in the emergence and rapid growth of modern cloud computing framework.

In this talk, I’ll present coding theoretic solutions to key issues related to designing distributed storage systems, especially focusing on 1) providing efficient mechanisms for repairing server failures, 2) enabling parallel accesses to data, and 3) securing information against eavesdropping attacks.
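As a deliberately simple point of comparison (single-parity XOR coding, not the codes from the talk, which target more refined repair and access goals): k data blocks plus one parity block already let the system rebuild any single failed block from the surviving ones.

from functools import reduce

def xor_blocks(blocks):
    # Byte-wise XOR of equally sized blocks.
    return bytes(reduce(lambda a, b: a ^ b, chunk) for chunk in zip(*blocks))

data = [b"blockA__", b"blockB__", b"blockC__"]     # k = 3 data blocks on 3 servers
parity = xor_blocks(data)                          # stored on a fourth server

# If the server holding data[1] fails, repair its block from the survivors.
repaired = xor_blocks([data[0], data[2], parity])
print(repaired == data[1])                         # True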

Bio: Ankit Singh Rawat received the B.Tech. degree from the Indian Institute of Technology (IIT) Kanpur, India, in 2010, and the M.S. and Ph.D. degrees from The University of Texas at Austin in 2012 and 2015, respectively. Since September 2015, he has been a postdoctoral fellow at Carnegie Mellon University, working with Venkatesan Guruswami. His research interests include coding theory, information theory, and statistical machine learning. Ankit is a recipient of the Microelectronics and Computer Development Fellowship from UT Austin. He has held summer internships at Bell Labs in Murray Hill, NJ, and Docomo Innovations Inc. in Palo Alto, CA.