Statistics Seminar Series

Department of Mathematical Sciences
and
Center for Applied Mathematics and Statistics

New Jersey Institute of Technology

Fall 2014

Please note the time and location of each seminar. If you have any questions about a particular seminar, please contact the person hosting the speaker.

Date | Time and Location | Speaker and Title | Host

Thursday, October 9, 2014
4:00 PM, CULM 611
Joseph Romano, PhD, Department of Statistics, Stanford University
Permutation Tests 101 (abstract/PDF)
Host: Wenge Guo

Thursday, October 16, 2014
4:00 PM, CULM 611
Xiaoyu Jia, PhD, Boehringer Ingelheim Pharmaceutical
Two-Stage Likelihood Continual Reassessment Method for Phase I Clinical Trials (abstract)
Host: Antai Wang

Wednesday, October 22, 2014
3:00 PM, CULM LH1
Min Qian, PhD, Department of Biostatistics, Columbia University
Constructing Dynamic Treatment Regimes Using Q-learning with L1-Regularization (abstract)
Host: Antai Wang

Thursday, November 13, 2014
4:00 PM, CULM LH1
Aiyi Liu, PhD, NICHD, National Institutes of Health
Group Testing for Rare Diseases in the Presence of Misclassification (abstract)
Host: Antai Wang

Thursday, November 20, 2014
4:00 PM, CULM LH1
Xin (James) Li, PhD, Department of Biostatistics, Bioinformatics and Biomathematics, Georgetown University
Topic TBA (abstract)
Host: Antai Wang

Thursday, November 27, 2014
4:00 PM, CULM 611
Jinfeng Xu, PhD, Department of Biostatistics, NYU
Topic TBA (abstract)
Host: Antai Wang

Date TBA
Time and location TBA
Hammou Elbarmi, PhD, Department of Statistics and CIS, Baruch College, The City University of New York
Topic TBA (abstract)
Host: Ji Meng Loh

ABSTRACTS

Permutation Tests 101

Given independent samples from $P$ and $Q,$ two-sample permutation tests allow one to construct exact level tests when the null hypothesis is $P = Q.$ On the other hand, when comparing or testing particular parameters $\theta$ of $P$ and $Q,$ such as their means or medians, permutation tests need not be level $\alpha$, or even approximately level $\alpha$ in large samples. Under very weak assumptions for comparing estimators, we provide a general test procedure whereby the asymptotic validity of the permutation test holds while retaining the {\it exact} rejection probability $\alpha$ in finite samples when the underlying distributions are identical. The ideas are broadly applicable and generalized to the Wilcoxon test, and to the $k$-sample problem of comparing general parameters, whereby a permutation test is constructed which is exact level $\alpha$ under the hypothesis of identical distributions, but has asymptotic rejection probability $\alpha$ under the more general null hypothesis of equality of parameters. A quite general theory is possible based on a coupling construction, as well as a key contiguity argument for the multinomial and multivariate hypergeometric distributions. Time permitting, the results will be extended to multivariate settings and multiple testing.


Joseph Romano, PhD, Department of Statistics, Stanford University
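As a point of reference for this abstract, here is a minimal numpy sketch of the classical two-sample permutation test it builds on (the basic difference-of-means version, not the speaker's studentized construction):

```python
import numpy as np

def permutation_test(x, y, stat=lambda a, b: np.mean(a) - np.mean(b),
                     n_perm=10_000, seed=0):
    """Two-sample permutation p-value for H0: P = Q.

    Recomputes the statistic over random relabelings of the pooled
    sample; the add-one correction keeps the test valid in level.
    """
    rng = np.random.default_rng(seed)
    observed = abs(stat(x, y))
    pooled = np.concatenate([x, y])
    n, hits = len(x), 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if abs(stat(pooled[:n], pooled[n:])) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)
```

Under $P = Q$ every relabeling of the pooled sample is equally likely, which is what makes the rejection probability exact; the talk concerns what happens when only the parameters, not the full distributions, are equal under the null.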

Two-Stage Likelihood Continual Reassessment Method for Phase I Clinical Trials

The likelihood continual reassessment method is an adaptive, model-based design used to estimate the maximum tolerated dose (MTD) in phase I clinical trials. The method is generally implemented in a two-stage approach, whereby model-based dose escalation is activated after an initial sequence of patients has been treated. We establish a theoretical framework for building a two-stage continual reassessment method based on the coherence principle. To facilitate the implementation of such designs, we also propose a systematic approach to calibrating the design parameters based on this theoretical framework. We compare these approaches to the traditional trial-and-error approach using a real trial example. The systematic calibration approach simplifies the model calibration process for the two-stage likelihood continual reassessment method while remaining competitive with the time-consuming trial-and-error process. This is joint work with Shing Lee and Ken Cheung at Columbia University.


Xiaoyu Jia, PhD, Senior Biostatistician at Boehringer Ingelheim Pharmaceutical
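For readers unfamiliar with the method, a minimal sketch of one model-update step of the likelihood CRM under the common one-parameter power model (the skeleton and target below are illustrative placeholders, and this is not the speaker's calibrated two-stage design):

```python
import numpy as np

def crm_next_dose(dose_idx, tox, skeleton, target=0.25):
    """Likelihood CRM step with the power model p_d(a) = skeleton[d] ** exp(a),
    fit here by a crude grid-search MLE.

    dose_idx : 0-based dose index for each patient treated so far
    tox      : 0/1 toxicity outcome for each patient
    Returns the dose whose estimated toxicity rate is closest to target.
    """
    skeleton = np.asarray(skeleton, dtype=float)
    d = np.asarray(dose_idx)
    y = np.asarray(tox)
    best_a, best_ll = 0.0, -np.inf
    for a in np.linspace(-3.0, 3.0, 601):      # grid search over the model parameter
        p = np.clip(skeleton[d] ** np.exp(a), 1e-10, 1 - 1e-10)
        ll = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
        if ll > best_ll:
            best_a, best_ll = a, ll
    p_hat = skeleton ** np.exp(best_a)
    return int(np.argmin(np.abs(p_hat - target)))
```

Note that the likelihood-based estimate is only well behaved once the data contain both a toxicity and a non-toxicity, which is precisely why the two-stage design runs a pre-specified escalation sequence before activating the model.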


Constructing Dynamic Treatment Regimes Using Q-learning with L1-Regularization

Recent research in treatment and intervention science is shifting from the traditional 'one-size-fits-all' treatment to dynamic treatment regimes, which allow greater individualization in programming over time. A dynamic treatment regime is a sequence of decision rules that specify how the dosage and/or type of treatment should be adjusted through time in response to an individual's changing needs. Constructing an optimal dynamic treatment regime is challenging because the objective function is the expectation of a weighted indicator function that is non-concave in the parameters. In addition, there are many variables in the observed sample, yet cost and interpretability considerations imply that fewer rather than more variables should be included in the developed dynamic treatment regimes. To address these challenges we consider estimation based on L1 regularized Q-learning. This approach is justified via a finite sample upper bound on the difference between the mean response due to the estimated dynamic treatment regimes and the mean response due to the optimal dynamic treatment regime.


Min Qian, PhD, Department of Biostatistics, Columbia University
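As a rough illustration of the estimation idea (a single-stage analogue, not the multi-stage procedure or the finite-sample analysis of the talk), one can fit a linear Q-function by lasso and read the decision rule off the treatment-interaction coefficients; the small coordinate-descent solver below is included only to keep the sketch self-contained:

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=300):
    """Coordinate-descent lasso for (1/2n)||y - Xb||^2 + lam * ||b||_1.
    (For simplicity the intercept is penalized too.)"""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]          # partial residual
            rho = X[:, j] @ r / n
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return b

def fit_q_rule(S, A, Y, lam=0.05):
    """Single-stage L1-regularized Q-learning with the linear model
    Q(s, a) = b0 + s@bs + a * (c0 + s@cs); rule: treat (a=1) iff c0 + s@cs > 0."""
    n, p = S.shape
    X = np.column_stack([np.ones(n), S, A, A[:, None] * S])
    b = lasso_cd(X, Y, lam)
    c0, cs = b[p + 1], b[p + 2:]
    return lambda s: int(c0 + s @ cs > 0)
```

The L1 penalty zeroes out treatment-interaction coefficients for irrelevant covariates, so the resulting rule depends on few variables, matching the interpretability goal described in the abstract.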

Group Testing for Rare Diseases in the Presence of Misclassification

Aimed at more efficient screening of a rare disease, Dorfman (1943) proposed to test for syphilis antigen by first testing pooled blood samples, followed by retesting of individuals in groups found to be infected. This strategy and its variations developed later, often referred to as group testing or pooled testing, have received substantial attention for efficient identification of an event or estimation of the probability that the event occurs.

We further investigate the optimality properties of the group testing strategy in estimating the prevalence of a disease. We show that, when disease status is measured with error, group testing with moderate group sizes provides more efficient estimation than fully observed individual data over a wide range of disease prevalences. When the number of groups is fixed, group testing also prevails over the one-subject-per-group random sampling design for moderate disease prevalence. We discuss applications to the evaluation of gene-environment interactions and propose a strata-based group testing strategy for such an evaluation. Extensions to testing for correlated rare diseases are also considered.


Aiyi Liu, PhD, Eunice Kennedy Shriver National Institute of Child Health and Human Development, National Institutes of Health
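A small sketch of the kind of estimator under discussion: with pools of size k and a pool-level test of sensitivity se and specificity sp (the specific values below, and the assumption that errors act on the pool's true status, are illustrative choices of this sketch), prevalence can be estimated by inverting the pool-positive rate:

```python
import numpy as np

def pooled_prevalence_mle(n_pos, n_pools, k, se=0.95, sp=0.98):
    """MLE of prevalence p from group tests of size k with misclassification.

    A pool tests positive with probability
        pi = se - (se + sp - 1) * (1 - p)**k,
    so inverting the observed pool-positive rate pi_hat gives p_hat.
    """
    pi_hat = n_pos / n_pools
    q = (se - pi_hat) / (se + sp - 1)   # estimate of (1 - p)**k
    q = min(max(q, 0.0), 1.0)           # clamp to a valid probability
    return 1.0 - q ** (1.0 / k)
```

With a rare disease, most pools test negative, so far fewer tests are spent per subject than under individual testing; the talk examines when this pooling also yields a more efficient prevalence estimate despite the misclassification.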