Applied demography is a technique for studying small geographic areas, an approach that allows market segments and target populations to be examined in detail. This book provides the essential elements of applied demography in a clear and concise manner. It details the kind of information that is available, who produces it, and how that information can be used. The sources mentioned are American, but the estimation techniques have universal application. A background in elementary algebra is sufficient for this book. 'This is a handy, concise primer that summarizes various fundamentals of particular interest to the field (estimating total populations, household sizes, etc.)' -- Population Today, October 1984
The great advantage of time series regression analysis is that it can both explain the past and predict the future behavior of variables. This volume explores the regression (or structural equation) approach to the analysis of time series data. It also introduces the Box-Jenkins time series method in an attempt to partially bridge the gap between the two approaches.
Considers how "real-world" observations can be interpreted and converted into data for analysis, so as to facilitate more effective use of scaling techniques. The text introduces the most appropriate scaling strategies for different research situations.
Meta-Analysis shows concisely, yet comprehensively, how to apply statistical methods to achieve a literature review of a common research domain. It demonstrates the use of combined tests and measures of effect size to synthesize quantitatively the results of independent studies for both group differences and correlations. Strengths and weaknesses of alternative approaches, as well as of meta-analysis in general, are presented.
Provides readers with a clear and concise introduction to the why, what, and how of the comparative method.
An introduction to correspondence analysis in the Quantitative Applications in the Social Sciences series, outlining the history and logic behind the technique. It explains the analysis of large contingency tables and survey data and compares correspondence analysis with log-linear models.
Providing a non-technical introduction to probability theory, this book covers topics including: the concept of probability and its relation to relative frequency, the properties of probability, discrete and continuous random variables, and the binomial, uniform, normal, and chi-squared distributions.
Provides applied researchers in the social, educational, and behavioural sciences with comprehensive coverage of analyses for ordinal outcomes. The book builds on a review of logistic regression and extends to the details of the cumulative (proportional) odds, continuation ratio, and adjacent category models for ordinal data.
Interpreting and Using Regression sets out the actual procedures researchers employ, places them in the framework of statistical theory, and shows how good research takes account of both statistical theory and real-world demands. Achen builds a working philosophy of regression that goes well beyond the abstract, unrealistic treatment given in previous texts.
Demonstrating the application of the technique to the social sciences, the author explains how it can be used to determine the number of factors to retain in a factor analysis, to select a subset of variables to represent a larger set, and much more.
Where an assumption of unidirectionality in causal effects is unrealistic, 'recursive' models cannot be used, and complex 'nonrecursive' models are necessary. But many nonrecursive models are 'unidentified', which makes meaningful parameter estimation impossible. This book explains the concept of identification and the factors that lead to it.
Explores the fundamental assumptions underlying mediation analysis.
Effect Size for ANOVA Designs lays out the computational methods for 'd' with a variety of designs, including factorial ANOVA, ANCOVA, and repeated measures ANOVA; 'd' divides the observed effect by the standard deviation of the dependent variable.
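As a purely illustrative sketch, not taken from the book itself, the standardized effect size described above can be written for a simple two-group comparison as the difference in group means divided by a pooled standard deviation:

```latex
% Illustrative only: a two-group form of the standardized effect size d
% described above, using a pooled standard deviation in the denominator.
\[
  d = \frac{\bar{Y}_1 - \bar{Y}_2}{s_p},
  \qquad
  s_p = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}
\]
```

The book's own coverage extends this idea to factorial, covariance, and repeated measures designs.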
Provides easy-to-follow, didactic examples of several common growth modeling approaches.
This book presents a technique for analyzing the effects of variables, groups, and treatments in both experimental and observational settings. It considers not only the main effects of one variable upon another, but also the effects of groups of cases.
When using the analysis of variance (ANOVA) in an experimental design, how can the researcher determine whether to treat a factor as fixed or random? This book provides the reader with the criteria to make the distinction between fixed and random levels among factors, an important decision that directly reflects the purpose of the research. In addition to exploring the varied roles random factors can play in social research, the authors provide a discussion of the statistical analyses required with random factors and give an overview of computer-assisted analysis of random factor designs using SAS and SPSSX.
Groups, such as organizations, corporations, and governments, have formal rules for the allocation of their resources, and in democratic societies the decisions about allocation are generally made by simple majority voting. But does majority rule always improve social well-being? Could it sometimes lead to collective irrationality? In this thought-provoking book, Paul E. Johnson considers the key questions and concepts in social choice theory.
How to collect, describe, compare and analyze data.
This book explores a variety of graphical displays that are useful for visualizing multivariate data. The basic problem involves representing information that varies along several dimensions when the display medium is inherently two-dimensional. In order to address this problem, Jacoby introduces and explores the concept of a 'data space'.
Clearly reviews the properties of important contemporary measures of association and correlation. Liebetrau devotes full chapters to measures for nominal, ordinal, and continuous (interval) data, paying special attention to the sampling distributions needed to determine levels of significance and confidence intervals. Valuable discussions also focus on the relationships between various measures, the sampling properties of their estimators and the comparative advantages and disadvantages of different approaches.
Explains the most widely used methods for analyzing cross-classified data on occupational origins and destinations. Hout reviews classic definitions, models, and sources of mobility data, as well as elementary operations for analyzing mobility tables. Tabular and graphic displays illustrate the discussion throughout.
Krippendorff introduces social scientists to information theory and explains its application for structural modeling. He discusses key topics such as: how to confirm an information theory model; its use in exploratory research; and how it compares with other approaches such as network analysis, path analysis, chi square and analysis of variance. Information Theory simplifies and clarifies a complex but powerful statistical method for analysing multivariate qualitative data. It will serve both as a textbook and as a sourcebook for researchers in communication theory, information theory and systems theory.
Significance testing, a core technique in statistics for hypothesis testing, is introduced in this volume. Mohr first reviews what is meant by sampling and probability distributions and then examines normal and t-tests of significance in depth. The uses and misuses of significance testing are also explored.