An essential tool for statisticians and data scientists seeking to interpret the vast troves of data that increasingly power our world.

First developed in the 1990s, the False Discovery Rate (FDR) is a way of describing the rate at which null hypothesis testing produces errors. It has since become an essential tool for interpreting large datasets. In recent years, as datasets have become ever larger and the importance of 'big data' to scientific research has grown, the significance of the FDR has grown correspondingly. The False Discovery Rate provides an analysis of the FDR's value as a tool, including why it should generally be preferred to the Bonferroni correction and other methods for accounting for multiplicity. It offers a systematic overview of the FDR, its core claims, and its applications.

Readers of The False Discovery Rate will also find:

- Case studies throughout, rooted in real and simulated data sets
- Detailed discussion of topics including representation of the FDR on a Q-Q plot, consequences of non-monotonicity, and many more
- Wide-ranging analysis suited for a broad readership

The False Discovery Rate is ideal for Statistics and Data Science courses, and short courses associated with conferences. It is also useful as supplementary reading in courses in other disciplines that require the statistical interpretation of 'big data'. The book will also be of great value to statisticians and researchers looking to learn more about the FDR.
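To illustrate the comparison the blurb mentions, here is a minimal sketch of the Benjamini-Hochberg FDR procedure alongside the Bonferroni correction. The p-values are purely hypothetical, chosen to show that FDR control typically rejects more hypotheses than the stricter family-wise error control of Bonferroni.

```python
def bonferroni(pvals, alpha=0.05):
    """Reject H0_i when p_i <= alpha / m (controls the family-wise error rate)."""
    m = len(pvals)
    return [p <= alpha / m for p in pvals]

def benjamini_hochberg(pvals, alpha=0.05):
    """Reject the k smallest p-values, where k is the largest rank i
    such that p_(i) <= (i / m) * alpha (controls the false discovery rate)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * alpha:
            k = rank  # largest rank passing its threshold so far
    reject = [False] * m
    for i in order[:k]:
        reject[i] = True
    return reject

# Hypothetical p-values from ten independent tests
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.216]
print(sum(bonferroni(pvals)))          # -> 1 rejection (threshold 0.05/10 = 0.005)
print(sum(benjamini_hochberg(pvals)))  # -> 2 rejections (step-up thresholds 0.005*i)
```

At the cost of tolerating a controlled proportion of false discoveries, the FDR procedure recovers discoveries that Bonferroni's uniform threshold would miss; this trade-off is the book's central theme.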