This book provides a self-contained introduction to shrinkage estimation for matrix-variate normal distribution models.
This book provides comprehensive summaries of theoretical (algebraic) and computational aspects of tensor ranks, maximal ranks, and typical ranks, over the real number field.
This book is the first modern treatment of experimental designs, providing a comprehensive introduction to the interrelationship between the theory of optimal designs and the theory of cubature formulas in numerical analysis.
The final chapter concludes with an overview of probabilistic spatial percolation methods relevant to the modeling of graphical networks and connectivity applications in sensor networks, which also incorporate stochastic geometry features.
This book presents a systematic explanation of the SIML (Separating Information Maximum Likelihood) method, a new approach to financial econometrics. Considerable interest has been given to the estimation problem of integrated volatility and covariance by using high-frequency financial data.
Lastly, the book addresses the applications of asymmetric kernel estimation and testing to various forms of nonnegative economic and financial data. Until recently, the most commonly used nonparametric methods employed symmetric kernel functions to estimate probability density functions of symmetric distributions with unbounded support.
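As a concrete flavor of asymmetric kernel estimation, the following is a minimal sketch of a gamma kernel density estimator in the style of Chen (2000) for nonnegative data; the sample, the bandwidth b, and the helper name are illustrative assumptions, not choices taken from the book.

```python
import numpy as np
from scipy.stats import gamma

def gamma_kernel_density(x, data, b):
    """Gamma-kernel density estimate at points x for nonnegative data.

    Each observation is smoothed with a gamma density whose shape
    parameter adapts to the evaluation point (shape = x/b + 1, scale = b),
    so no probability mass leaks below zero, unlike a symmetric kernel.
    """
    x = np.atleast_1d(x)
    return np.array([gamma.pdf(data, a=xi / b + 1.0, scale=b).mean() for xi in x])

# Illustration on simulated nonnegative (income-like) data
rng = np.random.default_rng(0)
sample = rng.gamma(shape=2.0, scale=1.5, size=500)
grid = np.linspace(0.0, 10.0, 101)
fhat = gamma_kernel_density(grid, sample, b=0.3)
```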
This book deals with advanced methods for adaptive phase I dose-finding clinical trials for combinations of two agents and molecularly targeted agents (MTAs) in oncology.
This book introduces readers to advanced statistical methods for analyzing survival data involving correlated endpoints. It thus offers an essential reference guide for medical statisticians and provides researchers with advanced, innovative statistical tools.
This book focuses on all-pairwise multiple comparisons of means in multi-sample models, introducing closed testing procedures based on maximum absolute values of some two-sample t-test statistics and on F-test statistics in homoscedastic multi-sample models.
This book provides comprehensive reviews of recent progress in matrix variate and tensor variate data analysis from applied points of view.
This book contains new aspects of model diagnostics in time series analysis, including variable selection problems and higher-order asymptotics of tests. The book begins by introducing a unified view of portmanteau-type tests based on a likelihood ratio test, which is useful for testing general parametric hypotheses inherent in statistical models.
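For orientation, the classical portmanteau statistic that such likelihood-ratio-based tests generalize is the Ljung-Box Q; a minimal sketch follows, where the lag truncation m and the simulated series are illustrative assumptions.

```python
import numpy as np
from scipy.stats import chi2

def ljung_box(resid, m):
    """Ljung-Box portmanteau statistic Q and its chi-square(m) p-value.

    Q = n (n + 2) * sum_{k=1}^{m} rho_k^2 / (n - k), where rho_k is the
    lag-k sample autocorrelation of the mean-corrected residuals.
    """
    x = np.asarray(resid, dtype=float)
    x = x - x.mean()
    n = len(x)
    denom = x @ x
    rho = np.array([x[:-k] @ x[k:] / denom for k in range(1, m + 1)])
    q = n * (n + 2) * np.sum(rho**2 / (n - np.arange(1, m + 1)))
    return q, chi2.sf(q, df=m)

rng = np.random.default_rng(1)
q, p = ljung_box(rng.standard_normal(200), m=10)  # white noise: large p expected
```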
This book presents the state of the art in extreme value theory, with a collection of articles related to a seminal paper on the bivariate extreme value distribution written by Professor Masaaki Sibuya in 1960, demonstrating various developments of the original idea over the last half-century.
This book focuses on multiple comparisons of proportions in multi-sample models with Bernoulli responses. First, the author explains the one-sample and two-sample methods that form the basis of multiple comparisons. Then, regularity conditions are stated in detail. Simultaneous inference for all proportions based on exact confidence limits and based on asymptotic theory is discussed. Closed testing procedures based on some one-sample statistics are introduced. For all-pairwise multiple comparisons of proportions, the author uses the arcsine square root transformation of sample means. Closed testing procedures based on maximum absolute values of some two-sample test statistics and based on chi-square test statistics are introduced. It is shown that the multi-step procedures are more powerful than single-step procedures and the Ryan-Einot-Gabriel-Welsch (REGW)-type tests. Furthermore, the author discusses multiple comparisons with a control. Under simple ordered restrictions of proportions, the author also discusses closed testing procedures based on maximum values of two-sample test statistics and based on Bartholomew's statistics. Lastly, serial gatekeeping procedures based on the above-mentioned closed testing procedures are proposed, whereas many existing serial gatekeeping procedures rely on Bonferroni inequalities.
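As a small illustration of the arcsine square root transformation mentioned above, this sketch computes all-pairwise two-sample z statistics on the transformed scale; the counts are invented for the example, and the p-values are unadjusted, whereas the book's closed testing procedures supply the multiplicity control.

```python
import numpy as np
from itertools import combinations
from scipy.stats import norm

def arcsine_pairwise_z(successes, trials):
    """All-pairwise two-sample z statistics on the arcsine-sqrt scale.

    arcsin(sqrt(p_hat)) is variance-stabilizing for Bernoulli means, with
    approximate variance 1/(4n), so pairwise differences can be referred
    to a standard normal distribution.
    """
    t = np.arcsin(np.sqrt(np.asarray(successes) / np.asarray(trials)))
    var = 1.0 / (4.0 * np.asarray(trials, dtype=float))
    out = {}
    for i, j in combinations(range(len(t)), 2):
        z = (t[i] - t[j]) / np.sqrt(var[i] + var[j])
        out[(i, j)] = (z, 2 * norm.sf(abs(z)))  # unadjusted two-sided p-value
    return out

pairs = arcsine_pairwise_z(successes=[18, 25, 33], trials=[60, 60, 60])
```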
This book provides a self-contained introduction to mixed-effects models and small area estimation techniques. In particular, it focuses on both introducing classical theory and reviewing the latest methods. First, basic issues of mixed-effects models, such as parameter estimation, random effects prediction, variable selection, and asymptotic theory, are introduced. Standard mixed-effects models used in small area estimation, known as the Fay-Herriot model and the nested error regression model, are then introduced. Both frequentist and Bayesian approaches are given to compute predictors of small area parameters of interest. For measuring uncertainty of the predictors, several methods to calculate mean squared errors and confidence intervals are discussed. Various advanced approaches using mixed-effects models are introduced, from frequentist to Bayesian approaches. This book is helpful for researchers and graduate students in fields requiring data analysis skills as well as in mathematical statistics.
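A minimal sketch of the predictor under the Fay-Herriot model follows, with the random-effect variance A treated as known to keep the example short (in practice A is estimated, e.g., by maximum likelihood, REML, or moment methods); all data are simulated placeholders.

```python
import numpy as np

def fay_herriot_blup(y, X, D, A):
    """BLUP of small-area means under the Fay-Herriot model.

    Model: y_i = x_i' beta + v_i + e_i with v_i ~ N(0, A) and e_i ~ N(0, D_i),
    where the sampling variances D_i are known. For given A, beta is the GLS
    estimator and the predictor shrinks the direct estimate y_i toward the
    synthetic part x_i' beta with weight gamma_i = A / (A + D_i).
    """
    v_inv = 1.0 / (A + D)                       # diagonal of V^{-1}
    xtv = X.T * v_inv                           # X' V^{-1}
    beta = np.linalg.solve(xtv @ X, xtv @ y)    # GLS estimate of beta
    gamma = A / (A + D)                         # shrinkage weights
    return gamma * y + (1.0 - gamma) * (X @ beta)

rng = np.random.default_rng(2)
m = 30
X = np.column_stack([np.ones(m), rng.normal(size=m)])
D = rng.uniform(0.5, 2.0, size=m)               # known sampling variances
theta = X @ np.array([1.0, 0.5]) + rng.normal(scale=1.0, size=m)
y = theta + rng.normal(scale=np.sqrt(D))
predictions = fay_herriot_blup(y, X, D, A=1.0)
```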
This treatise delves into the latest advancements in stochastic volatility models, highlighting the utilization of Markov chain Monte Carlo simulations for estimating model parameters and forecasting the volatility and quantiles of financial asset returns. The modeling of financial time series volatility constitutes a crucial aspect of finance, as it plays a vital role in predicting return distributions and managing risks. Among the various econometric models available, the stochastic volatility model has been a popular choice, particularly in comparison to other models, such as GARCH models, as it has demonstrated superior performance in previous empirical studies in terms of fit, forecasting volatility, and evaluating tail risk measures such as Value-at-Risk and Expected Shortfall. The book also explores an extension of the basic stochastic volatility model, incorporating a skewed return error distribution and a realized volatility measurement equation. The concept of realized volatility, a newly established estimator of volatility using intraday returns data, is introduced, and a comprehensive description of the resulting realized stochastic volatility model is provided. The text contains a thorough explanation of several efficient sampling algorithms for latent log volatilities, as well as an illustration of parameter estimation and volatility prediction through empirical studies utilizing various asset return data, including the yen/US dollar exchange rate, the Dow Jones Industrial Average, and the Nikkei 225 stock index. This publication is highly recommended for readers with an interest in the latest developments in stochastic volatility models and realized stochastic volatility models, particularly with regard to financial risk management.
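A minimal simulation sketch of the basic stochastic volatility model described above; the parameter values are illustrative assumptions rather than estimates from the book's empirical studies.

```python
import numpy as np

def simulate_sv(n, mu=-9.0, phi=0.97, sigma_eta=0.15, seed=0):
    """Simulate the basic stochastic volatility model.

    Log-volatility: h_t = mu + phi * (h_{t-1} - mu) + eta_t
    Return:         y_t = exp(h_t / 2) * eps_t
    with eta_t ~ N(0, sigma_eta^2) and eps_t ~ N(0, 1) independent.
    """
    rng = np.random.default_rng(seed)
    h = np.empty(n)
    # initialize from the stationary distribution of the log-volatility
    h[0] = mu + rng.normal(scale=sigma_eta / np.sqrt(1.0 - phi**2))
    for t in range(1, n):
        h[t] = mu + phi * (h[t - 1] - mu) + rng.normal(scale=sigma_eta)
    y = np.exp(h / 2.0) * rng.standard_normal(n)
    return y, h

returns, log_vol = simulate_sv(2000)  # exhibits volatility clustering
```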
This book presents a study of statistical inferences based on kernel-type estimators of distribution functions. The inferences involve matters such as quantile estimation, nonparametric tests, and mean residual life expectation, to name just a few. Convergence rates for the kernel estimators of density functions are slower than those of ordinary parametric estimators, which have root-n consistency. If an appropriate kernel function is used, the kernel estimators of distribution functions recover root-n consistency, and the inferences based on kernel distribution estimators also have root-n consistency. Further, the kernel-type estimator produces smooth estimation results. The estimators based on the empirical distribution function have a discrete distribution, and the normal approximation cannot be improved; that is, the validity of the Edgeworth expansion cannot be proved. If the support of the population density function is bounded, there is a boundary problem, namely that the estimator does not have consistency near the boundary. The book also contains a study of the mean squared errors of the estimators and the Edgeworth expansion for quantile estimators.
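A minimal sketch of a kernel-type distribution function estimator using the integrated Gaussian kernel; the bandwidth h and the sample are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def kernel_cdf(x, data, h):
    """Kernel-smoothed distribution function estimate.

    F_hat(x) = average of Phi((x - X_i) / h) over the sample. Unlike the
    step-function empirical distribution function, this estimator is smooth,
    which is what permits the Edgeworth-type refinements of the normal
    approximation discussed in the book.
    """
    x = np.atleast_1d(x)
    return np.array([norm.cdf((xi - data) / h).mean() for xi in x])

rng = np.random.default_rng(3)
sample = rng.standard_normal(400)
Fhat = kernel_cdf(np.linspace(-3.0, 3.0, 61), sample, h=0.3)
```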
This book presents the latest results related to one- and two-way models for time series data. Analysis of variance (ANOVA) is a classical statistical method for IID data proposed by R.A. Fisher to investigate factors and interactions of phenomena. In contrast, the methods developed in this book apply to time series data. Testing theory of the homogeneity of groups is presented under a wide variety of situations, including uncorrelated and correlated groups, fixed and random effects, multi- and high-dimensional settings, and parametric and nonparametric spectral densities. These methods have applications in several scientific fields. A test for the existence of interactions is also proposed. The book deals with asymptotics when the number of groups is fixed and the sample size diverges. This framework distinguishes the approach of the book from panel data and longitudinal analyses, which mostly deal with cases in which the number of groups is large. The usefulness of the theory in this book is illustrated by numerical simulation and real data analysis. This book is suitable for theoretical statisticians and economists as well as psychologists and data analysts.
This book provides a self-contained introduction to Stein/shrinkage estimation for the mean vector of a multivariate normal distribution. The book begins with a brief discussion of basic notions and results from decision theory such as admissibility, minimaxity, and (generalized) Bayes estimation. It also presents Stein's unbiased risk estimator and the James-Stein estimator in the first chapter. In the following chapters, the authors consider estimation of the mean vector of a multivariate normal distribution in the known and unknown scale case when the covariance matrix is a multiple of the identity matrix and the loss is scaled squared error. The focus is on admissibility, inadmissibility, and minimaxity of (generalized) Bayes estimators, where particular attention is paid to the class of (generalized) Bayes estimators with respect to an extended Strawderman-type prior. For almost all results of this book, the authors present a self-contained proof. The book is helpful for researchers and graduate students in various fields requiring data analysis skills as well as in mathematical statistics.
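A minimal sketch of the James-Stein estimator presented in the first chapter, together with a small Monte Carlo risk comparison; the dimension, mean vector, and replication count are illustrative assumptions.

```python
import numpy as np

def james_stein(x, sigma2=1.0):
    """James-Stein estimator of theta for X ~ N_p(theta, sigma^2 I).

    Shrinks the observation toward the origin by the data-dependent factor
    1 - (p - 2) * sigma^2 / ||x||^2; it dominates X itself under squared
    error loss whenever p >= 3.
    """
    p = x.size
    return (1.0 - (p - 2) * sigma2 / (x @ x)) * x

# Monte Carlo risk comparison against the usual estimator X
rng = np.random.default_rng(4)
p, theta = 10, np.full(10, 0.5)
X = rng.normal(loc=theta, size=(5000, p))
risk_mle = np.mean(np.sum((X - theta) ** 2, axis=1))           # about p = 10
risk_js = np.mean([np.sum((james_stein(x) - theta) ** 2) for x in X])
# risk_js < risk_mle illustrates the inadmissibility of X for p >= 3
```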
Further, it discusses Markov chain Monte Carlo and direct samplers from the A-hypergeometric distribution, as well as maximum likelihood estimation of the A-hypergeometric distribution of a two-row matrix using properties of polytopes and information geometry.
The book relates autoregressive linear mixed-effects models to linear mixed-effects models, marginal models, transition models, nonlinear mixed-effects models, growth curves, differential equations, and state space representations.
This book introduces academic researchers and professionals to the basic concepts and methods for characterizing interdependencies of multiple time series in the frequency domain.
This book presents recent non-asymptotic results for approximations in multivariate statistical analysis. It then introduces new areas of research in high-dimensional approximations for bootstrap procedures, Cornish-Fisher expansions, power-divergence statistics and approximations of statistics based on observations with random sample size.
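For concreteness, a minimal sketch of the classical fourth-order Cornish-Fisher quantile approximation; the moment values in the example are illustrative assumptions.

```python
from scipy.stats import norm

def cornish_fisher_quantile(alpha, mean, std, skew, exkurt):
    """Fourth-order Cornish-Fisher approximation to the alpha-quantile of a
    distribution with the given mean, standard deviation, skewness, and
    excess kurtosis, obtained by adjusting the normal quantile z."""
    z = norm.ppf(alpha)
    w = (z
         + (z**2 - 1.0) * skew / 6.0
         + (z**3 - 3.0 * z) * exkurt / 24.0
         - (2.0 * z**3 - 5.0 * z) * skew**2 / 36.0)
    return mean + std * w

# 5% quantile of a mildly left-skewed, heavy-tailed return distribution
q05 = cornish_fisher_quantile(0.05, mean=0.0, std=1.0, skew=-0.5, exkurt=1.0)
```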
Beginning with a brief introduction to linear programming, the book introduces the algebraic representations of conditional independence statements and their applications using linear programming methods.
The evaluation of the consistency of an information criterion under the high-dimensional asymptotic framework provides new insights; for example, Akaike's information criterion (AIC) can become consistent under the high-dimensional asymptotic framework even though it is never consistent under the large-sample asymptotic framework.
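For concreteness, a minimal sketch of AIC-based model selection in the classical large-sample setting the passage contrasts with, using Gaussian linear models fit by OLS (AIC computed up to an additive constant common across models); the design and true model are simulated assumptions.

```python
import numpy as np

def aic_gaussian_ols(y, X):
    """AIC, up to an additive constant, for a Gaussian linear model fit by
    OLS: n * log(RSS / n) + 2 * k, where k counts regression coefficients
    (the term for the common error variance is constant across models)."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return n * np.log(rss / n) + 2 * k

rng = np.random.default_rng(5)
n = 200
X = np.column_stack([np.ones(n)] + [rng.normal(size=n) for _ in range(5)])
y = X[:, :3] @ np.array([1.0, 0.8, -0.6]) + rng.normal(size=n)  # 3 true terms
# pick the nested candidate (first j+1 columns) with the smallest AIC
best_j = min(range(1, 6), key=lambda j: aic_gaussian_ols(y, X[:, : j + 1]))
```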
This book integrates the fundamentals of asymptotic theory of statistical inference for time series under nonstandard settings, e.g., infinite variance processes, not only from the point of view of efficiency but also from that of robustness and optimality by minimizing prediction error.
This is the first book to provide a comprehensive introduction to a new modeling framework known as semiparametric structural equation modeling and its techniques, together with the fundamental background needed to understand them.
This book presents a fresh approach: it provides a comprehensive review of the challenging problems caused by imbalanced data in prediction and classification, and it introduces several of the latest statistical methods for dealing with these problems.
Included are rule-based and model-based designs, such as 3 + 3 designs, accelerated titration designs, toxicity probability interval designs, the continual reassessment method and related designs, and escalation with overdose control designs.
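As an illustration of the rule-based end of this spectrum, here is one common version of the 3 + 3 decision rule; boundary conventions vary somewhat across protocols, so this sketch should not be read as a universal definition.

```python
def three_plus_three(n_treated, n_dlt):
    """One step of the classical rule-based 3 + 3 dose-escalation design.

    n_treated: patients treated at the current dose so far (3 or 6)
    n_dlt:     dose-limiting toxicities (DLTs) observed among them
    Returns the recommended action at the current dose.
    """
    if n_treated == 3:
        if n_dlt == 0:
            return "escalate to the next dose"
        if n_dlt == 1:
            return "treat 3 more patients at this dose"
        return "stop; the MTD is the previous dose"
    if n_treated == 6:
        if n_dlt <= 1:
            return "escalate to the next dose"
        return "stop; the MTD is the previous dose"
    raise ValueError("3 + 3 cohorts contain 3 or 6 patients")

assert three_plus_three(3, 1) == "treat 3 more patients at this dose"
```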
This book expounds the principle and related applications of nonlinear principal component analysis (PCA), which is a useful method for analyzing data with mixed measurement levels.
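For orientation, a sketch of the ordinary linear PCA step, computed via the singular value decomposition, which nonlinear PCA alternates with an optimal scaling of the non-numeric variables; the data here are simulated numeric placeholders.

```python
import numpy as np

def pca_svd(X, n_components):
    """Ordinary linear PCA via the SVD of the column-centered data matrix.

    Returns component scores and loadings; in nonlinear PCA this linear
    step is iterated with a rescaling (quantification) of categorical or
    ordinal variables until the fit converges.
    """
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :n_components] * s[:n_components]
    loadings = Vt[:n_components].T
    return scores, loadings

rng = np.random.default_rng(6)
X = rng.normal(size=(100, 5))
scores, loadings = pca_svd(X, n_components=2)
```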