Statistics Seminar Series

Department of Mathematical Sciences
and
Center for Applied Mathematics and Statistics

New Jersey Institute of Technology


FALL 2010

 

All seminars are 4:00 - 5:00 p.m., in Cullimore Hall Room 611 (Math Conference Room) unless noted otherwise. Refreshments are usually served at 3:30 p.m., and talks start at 4:00 p.m. If you have any questions about a particular seminar, please contact the person hosting the speaker.

 

Thursday, September 09, 2010, 4:00 p.m.
Professor Gerhard Dikta, Aachen University of Applied Sciences, Campus Juelich, Ginsterweg 1, 52428 Juelich, Germany
Probability of Damage of Electronic Systems due to Indirect Lightning Flashes (abstract)
Host: Sundar Subramanian

Thursday, September 16, 2010, 4:00 p.m.
Dr. Kaifeng Lu, Forest Laboratories, Inc.
Specification of Covariance Structure in Longitudinal Data Analysis for Randomized Clinical Trials (abstract)
Host: Sunil Dhar

Thursday, September 23, 2010, 4:00 p.m., Cullimore Lecture Hall I
Ohad Amit, Ph.D., Senior Director, Oncology R&D, Statistics and Programming
Graphical Approaches to the Analysis of Safety Data from Clinical Trials (abstract)
Host: Sunil Dhar

Thursday, October 7, 2010, 4:00 p.m.
G. Frank Liu, Ph.D., Merck Research Laboratories
On Statistical Analysis of Continuous Responses in Clinical Trials with Baseline Measurements (abstract)
Host: Sunil Dhar

Thursday, October 14, 2010, 4:00 p.m.
Bruce Levin, Ph.D., Professor and Chair, Department of Biostatistics, Columbia University, Mailman School of Public Health
Subset Selection in Comparative Selection Trials (abstract)
Host: Manish Bhattacharjee

Thursday, October 28, 2010, 4:00 p.m.
Yongchao Ge, Ph.D., Mount Sinai Medical School
Making Statistical Inference on the Proportion of Positive Cells for the Flow Cytometry Data (abstract)
Host: Wenge Guo

Thursday, November 11, 2010, 4:00 p.m.
Randall H. Rieger, Ph.D., Professor of Statistics; Director, Graduate Program in Applied Statistics; Director, West Chester Statistics Institute, West Chester University, West Chester, PA
Testing for Violations of the Homogeneity Needed for Conditional Logistic Regression (abstract)
Host: Sunil Dhar

Thursday, November 18, 2010, 4:00 p.m., Cullimore Lecture Hall I
Marinela Capanu, Ph.D., Assistant Attending Biostatistician, Memorial Sloan Kettering Cancer Center
Title and abstract: TBA
Host: Sunil Dhar

ABSTRACTS

Probability of Damage of Electronic Systems due to Indirect Lightning Flashes:

Standard German household insurance covers damage to an electronic system if the damage is caused by lightning. In recent years, insurance companies have observed a large increase in claims for this type of damage. To handle this growing volume of claims properly, the GDV (the German Insurance Association) sponsored a project whose objective was to analyze the distance between a lightning strike and the location of the damage. In this talk, a model for the distribution of these distances will be discussed and applied to real data from the insurance companies.

Professor Gerhard Dikta, Aachen University of Applied Sciences, Campus Juelich, Ginsterweg 1, 52428 Juelich, Germany ~ September 09, 2010
 

Specification of Covariance Structure in Longitudinal Data Analysis for Randomized Clinical Trials:

Misspecification of the covariance structure for repeated measurements in longitudinal analysis may lead to biased estimates of the regression parameters and under- or overestimation of the corresponding standard errors in the presence of missing data. The so-called sandwich estimator can correct the variance, but it does not reduce the bias in point estimates. Removing all assumptions from the covariance structure (i.e., using an unstructured (UN) covariance) will remove such biases. However, an excessive amount of missing data may cause convergence problems for iterative algorithms, such as the default Newton-Raphson algorithm in the popular SAS PROC MIXED. This work examines, both through theory and simulations, the existence and the magnitude of these biases. We recommend the use of UN covariance as the default strategy for analyzing longitudinal data from randomized clinical trials with a moderate to large number of subjects and a small to moderate number of time points. We also present an algorithm to assist convergence when the UN covariance is used.

Dr. Kaifeng Lu, Forest Laboratories, Inc. ~ September 16, 2010
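For readers unfamiliar with the setup, a minimal SAS PROC MIXED sketch of the kind of repeated-measures model with an unstructured (UN) covariance discussed above might look as follows; the data set and variable names (trial, subject, trt, visit, baseline, chg) are hypothetical placeholders, not code from the talk.

/* Hypothetical sketch: change from baseline modeled over visits with a  */
/* treatment-by-visit interaction and an unstructured (UN) covariance    */
/* for the repeated measurements within each subject.                    */
proc mixed data=trial;
  class subject trt visit;
  model chg = baseline trt visit trt*visit / solution ddfm=kr;
  repeated visit / subject=subject type=un;
  lsmeans trt*visit / diff cl;
run;

Replacing type=un with, say, type=cs or type=ar(1) imposes exactly the kind of structural assumption whose misspecification the abstract warns about.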
 

Graphical Approaches to the Analysis of Safety Data from Clinical Trials:

Patient safety has always been a primary focus in the development of new pharmaceutical products. The predominant method for statistical evaluation and interpretation of safety data collected in a clinical trial is the tabular display of descriptive statistics. There is a great opportunity to enhance evaluation of drug safety through the use of graphical displays, which can convey multiple pieces of information concisely and more effectively than can tables. Graphs can be used in an exploratory setting to help identify emerging safety signals, or in a confirmatory setting as a tool to elucidate known safety issues. We developed several graphical displays for routine safety data collected during a clinical trial, covering a broad range of graphical techniques, and illustrate here 10 specific graphical designs, many of which display the data along with statistics derived from them. Two are simple plots, comparing distributions in the form of boxplots or cumulative plots, and four more display data and summaries over time, comparing information from two groups in terms of distribution (with boxplots), cumulative incidence, hazard, or simply means with error bars. The other four are multi-panel displays: one-dimensional and two-dimensional arrays of scatterplots, a trellis of individual profiles, and a paired dotplot displaying risk together with relative risk. The displays focus on key safety endpoints in clinical trials including the QT interval from electrocardiograms, laboratory measurements for detecting hepatotoxicity, and adverse events of special interest. We discuss in detail the statistical and graphical principles underlying the production and interpretation of the displays.

Ohad Amit, PhD, Senior Director, Oncology R&D, Statistics and Programming ~ September 23, 2010
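As one concrete illustration of the "distribution over time" displays mentioned above, boxplots of a laboratory or QT measurement by visit and treatment group can be produced with a few lines of SAS; the data set and variable names (adlb, aval, visit, trt) are hypothetical, and this sketch shows only one of the ten displays, not the authors' own code.

/* Hypothetical sketch: boxplots of an analysis value over visits, */
/* grouped by treatment arm.                                       */
proc sgplot data=adlb;
  vbox aval / category=visit group=trt;
  yaxis label="Analysis value";
run;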
 

On Statistical Analysis of Continuous Responses in Clinical Trials with Baseline Measurements:

Measurements are often collected prior to treatment randomization in clinical trials. These baseline measures may be used in analysis models to increase efficiency. When there is only one post-randomization measure, an analysis of covariance (ANCOVA) model with either the post-randomization value or the change from baseline as the dependent variable is commonly used. With several post-randomization measures, a longitudinal data analysis (LDA) model can be used with the baseline measurement included as a covariate. In a recent paper (Liu et al., 2009, Statistics in Medicine), a constrained full-likelihood approach is recommended as a better method than the traditional ANCOVA model. In this talk, we will discuss the choice of analysis endpoints as well as the pros and cons of different analysis models for continuous responses in clinical trials. Real clinical trial examples will be used to illustrate the application of these methods, along with some practical tips for implementing them with the SAS PROC MIXED procedure.

G. Frank Liu, PhD, Merck Research Laboratories ~ October 7, 2010
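For orientation, the single-post-measurement ANCOVA mentioned above can be written in a few lines of SAS; the data set and variable names (trial, trt, baseline, chg) are hypothetical placeholders, and the constrained-likelihood comparison discussed in the talk is not shown here.

/* Hypothetical sketch: ANCOVA with change from baseline as the    */
/* dependent variable and the baseline value as a covariate.       */
/* Using the post-randomization value itself as the response, with */
/* the same covariate, gives the other common ANCOVA endpoint.     */
proc mixed data=trial;
  class trt;
  model chg = baseline trt / solution;
  lsmeans trt / diff cl;
run;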
 

Subset Selection in Comparative Selection Trials:

This is joint work with Cheng-Shiun Leu and Ken Cheung. When several treatment regimens are possible candidates for a large phase III study, but too few resources are available to evaluate each relative to a standard, conducting a multi-arm randomized selection trial is a useful strategy to remove inferior treatments from further consideration. When the study has a rapidly determined endpoint, frequent interim monitoring of the trial becomes ethically and practically appealing. In this talk we present a class of sequential procedures designed to select a subset of treatments that offer clinically meaningful improvements over the control group, or to declare that no such subset exists. The proposed procedures are easy to implement, allow sequential elimination of inferior treatments and sequential recruitment of promising treatments, and have a high probability of correctly selecting better-than-control subsets while controlling the rate of false declarations.

Bruce Levin, PhD, Professor and Chair, Department of Biostatistics, Columbia University, Mailman School of Public Health ~ October 14, 2010
 

Making Statistical Inference on the Proportion of Positive Cells for the Flow Cytometry Data:

In working with flow cytometry data, one of the most frequent problems is to identify the cells that are positive in a stimulated experiment compared with a control experiment.

Deciding the cutoff value between positive and negative cells in a rigorous way is quite challenging. In this talk, my goal is modest: making statistical inference on the proportion of positive cells, i.e., hypothesis testing, point estimation, and confidence intervals. The statistical challenges and approximate solutions will be presented.

Yongchao Ge, Ph.D., Mount Sinai Medical School ~ October 28, 2010
 

Testing for Violations of the Homogeneity Needed for Conditional Logistic Regression:

Keywords: clustered binary outcomes, conditional logistic regression, heterogeneity of response.

In epidemiologic studies where the outcome is binary, the data often arise as clusters, as when siblings, friends or neighbors are used as matched controls in a case-control study. Conditional logistic regression (CLR) is typically used for such studies to estimate the odds ratio for an exposure of interest. However, CLR assumes the exposure coefficient is the same in every cluster, and CLR-based inference can be badly biased when homogeneity is violated. Existing methods for testing goodness-of-fit for CLR are not designed to detect such violations. Good alternative methods of analysis exist if one suspects there is heterogeneity across clusters. However, routine use of alternative robust approaches when there is no appreciable heterogeneity could cause loss of precision and be computationally difficult, particularly if the clusters are small. We propose a simple non-parametric test, the test of heterogeneous susceptibility (THS), to assess the assumption of homogeneity of a coefficient across clusters. The test is easy to apply and provides guidance as to the appropriate method of analysis. Simulations demonstrate that the THS has reasonable power to reveal violations of homogeneity. We illustrate by applying the THS to a study of periodontal disease.

Randall H. Rieger, PhD, Professor of Statistics; Director, Graduate Program in Applied Statistics; Director, West Chester Statistics Institute, West Chester University, West Chester, PA 19383 ~ November 11, 2010
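For context, the standard conditional logistic regression fit that the abstract describes as the typical analysis can be obtained in SAS with a STRATA statement; the data set and variable names (ccstudy, matchset, case, exposure) are hypothetical, and the proposed test of heterogeneous susceptibility (THS) itself is not reproduced here.

/* Hypothetical sketch: conditional logistic regression with matched */
/* sets (clusters) as strata, estimating the exposure odds ratio     */
/* whose homogeneity across clusters the THS is designed to check.   */
proc logistic data=ccstudy;
  strata matchset;
  model case(event='1') = exposure;
run;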
 

Title: TBA

Abstract: TBA

Marinela Capanu, Ph.D., Assistant Attending Biostatistician, Memorial Sloan Kettering Cancer Center ~ November 18, 2010