User profiles for G. Valiant
Gregory Valiant, Assistant Professor of Computer Science, Stanford University. Verified email at stanford.edu. Cited by 5769.
An automatic inequality prover and instance optimal identity testing
We consider the problem of verifying the identity of a distribution: Given the description of a
distribution over a discrete finite or countably infinite support, $p=(p_1,p_2,\ldots)$, how …
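As a concrete (and deliberately naive) baseline for this identity-testing setting, one can compare the claimed distribution against the empirical counts with a Pearson chi-squared statistic. This sketch is not the paper's instance-optimal tester; the function and variable names are illustrative.

```python
from collections import Counter

def chi_sq_identity_stat(p, samples):
    """Pearson chi-squared statistic between a claimed distribution p
    (a dict mapping symbol -> probability) and the empirical counts of
    the samples. Small values are consistent with the samples having
    been drawn from p; large values suggest a different source."""
    n = len(samples)
    counts = Counter(samples)
    stat = 0.0
    for sym, prob in p.items():
        expected = n * prob
        if expected > 0:
            stat += (counts.get(sym, 0) - expected) ** 2 / expected
    return stat
```

On perfectly balanced samples from a uniform claimed distribution the statistic is zero; skewed samples drive it up quickly.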
What can transformers learn in-context? a case study of simple function classes
In-context learning is the ability of a model to condition on a prompt sequence consisting of
in-context examples (input-output pairs corresponding to some task) along with a new query …
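The prompt structure described above can be made concrete for the linear-function case. This is a minimal sketch of the data format only (names are illustrative, not from the paper); it also checks that, once the prompt contains at least `dim` examples, the query output is recoverable from the prompt alone, e.g. by least squares.

```python
import numpy as np

def make_linear_prompt(n_examples, dim, rng):
    """Build an in-context prompt for a random linear function f(x) = w.x:
    a sequence of (input, output) example pairs plus a held-out query input."""
    w = rng.standard_normal(dim)
    xs = rng.standard_normal((n_examples + 1, dim))
    ys = xs @ w
    examples = list(zip(xs[:-1], ys[:-1]))   # in-context (x, f(x)) pairs
    return examples, xs[-1], ys[-1]          # ..., query input, query target

rng = np.random.default_rng(0)
examples, qx, qy = make_linear_prompt(8, 4, rng)

# With n_examples >= dim the examples determine w, so the query target
# is predictable purely from the prompt -- here via least squares:
X = np.array([x for x, _ in examples])
y = np.array([t for _, t in examples])
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
```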
Groucho running
TA McMahon, G Valiant… - Journal of applied …, 1987 - journals.physiology.org
… ω₀/g) is introduced to distinguish between hard and soft running modes. Here, ω₀ is the
natural frequency of a mass-spring system representing the body, g is … ω₀/g approaches zero. …
The power of linear estimators
For a broad class of practically relevant distribution properties, which includes entropy and
support size, nearly all of the proposed estimators have an especially simple form. Given a …
Estimating the unseen: improved estimators for entropy and other properties
We show that a class of statistical properties of distributions, which includes such practically
relevant properties as entropy, the number of distinct elements, and distance metrics …
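For contrast with the estimators this line of work develops, the naive plug-in entropy estimator is easy to state: compute the entropy of the empirical distribution. It is badly biased downward when the sample size is small relative to the support size, which is exactly the regime the "unseen" estimators target. A minimal sketch:

```python
import math
from collections import Counter

def plugin_entropy(samples):
    """Plug-in entropy estimate (in nats): the entropy of the empirical
    distribution of the samples. Biased low when many support elements
    are unobserved."""
    n = len(samples)
    return -sum((c / n) * math.log(c / n) for c in Counter(samples).values())
```

For example, a balanced sample over two symbols gives the exact answer log 2; the bias only bites when the support is large compared to the sample.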
Settling the polynomial learnability of mixtures of gaussians
… G projected onto n² sufficiently distinct directions (directions that differ by at least ϵ₂ ≫ ϵ₁)
one can accurately recover the multi-dimensional parameters of G. … distributions f(x), g(x) on n …
Estimating the unseen: an n/log(n)-sample estimator for entropy and support size, shown optimal via new CLTs
We introduce a new approach to characterizing the unobserved portion of a distribution, which
provides sublinear--sample estimators achieving arbitrarily small additive constant error …
provides sublinear--sample estimators achieving arbitrarily small additive constant error …
Making ai forget you: Data deletion in machine learning
Intense recent discussions have focused on how to provide individuals with control over
when their data can and cannot be used---the EU’s Right To Be Forgotten regulation is an …
Learning from untrusted data
The vast majority of theoretical results in machine learning and statistics assume that the
training data is a reliable reflection of the phenomena to be learned. Similarly, most learning …
Efficiently learning mixtures of two Gaussians
Given data drawn from a mixture of multivariate Gaussians, a basic problem is to accurately
estimate the mixture parameters. We provide a polynomial-time algorithm for this problem for …
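The standard heuristic for this parameter-estimation problem is EM, which makes a useful point of contrast: the paper's contribution is a provably polynomial-time estimator, whereas EM carries no such guarantee. A one-dimensional EM sketch (initialization by data quartiles is a simple heuristic, not the paper's method):

```python
import math
import random

def em_two_gaussians(data, iters=60):
    """EM for a 1-D mixture of two Gaussians: alternate computing
    posterior responsibilities (E-step) and re-fitting the weight,
    means, and standard deviations (M-step)."""
    data = sorted(data)
    n = len(data)
    mu1, mu2 = data[n // 4], data[3 * n // 4]     # quartile initialization
    s1 = s2 = (data[-1] - data[0]) / 4 or 1.0
    w = 0.5
    for _ in range(iters):
        # E-step: responsibility of component 1 for each point
        r = []
        for x in data:
            p1 = w * math.exp(-((x - mu1) ** 2) / (2 * s1 ** 2)) / s1
            p2 = (1 - w) * math.exp(-((x - mu2) ** 2) / (2 * s2 ** 2)) / s2
            r.append(p1 / (p1 + p2))
        # M-step: re-estimate weight, means, and standard deviations
        n1 = sum(r)
        w = n1 / n
        mu1 = sum(ri * x for ri, x in zip(r, data)) / n1
        mu2 = sum((1 - ri) * x for ri, x in zip(r, data)) / (n - n1)
        s1 = math.sqrt(sum(ri * (x - mu1) ** 2 for ri, x in zip(r, data)) / n1) or 1e-9
        s2 = math.sqrt(sum((1 - ri) * (x - mu2) ** 2 for ri, x in zip(r, data)) / (n - n1)) or 1e-9
    return (w, mu1, s1), (1 - w, mu2, s2)

# well-separated synthetic mixture: N(0, 1) and N(10, 1)
random.seed(0)
data = [random.gauss(0, 1) for _ in range(400)] + [random.gauss(10, 1) for _ in range(400)]
comp1, comp2 = em_two_gaussians(data)
```

On well-separated components like these, EM recovers the means accurately; the hard cases motivating the paper are mixtures whose components overlap heavily.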