Confront how all data has uncertainty, and why statistics is a powerful tool for reaching insights and solving problems. Begin by describing and summarizing data with the help of concepts such as the mean, median, variance, and standard deviation. Learn common statistical notation and graphing techniques, and get a preview of the programming language R, which will be used throughout the course.
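A minimal R sketch of these summary statistics, using a small made-up vector for illustration:

```r
# Illustrative data, invented for this sketch
x <- c(4.1, 5.6, 3.8, 7.2, 5.0, 6.3)

mean(x)    # arithmetic mean
median(x)  # middle value when sorted
var(x)     # sample variance (divides by n - 1)
sd(x)      # standard deviation, the square root of the variance
```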
Dip into R, a popular open-source programming language for statistics and data science. Consider the advantages of R over spreadsheets. Walk through installing R and its companion IDE (integrated development environment), RStudio, and see how to download specialized data packages from within RStudio. Then try out simple operations, learning how to import data, save your work, and generate different plots.
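As a first taste of those operations, here is a minimal sketch; the file name measurements.csv is hypothetical, and the quick plot assumes the first two columns are numeric:

```r
df <- read.csv("measurements.csv")  # import data (hypothetical file)
head(df)                            # peek at the first rows
plot(df[[1]], df[[2]])              # quick scatterplot of two columns
saveRDS(df, "measurements.rds")     # save your work for a later session
```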
Study sampling and probability, which are key aspects of how statistics handles the uncertainty inherent in all data. See how sampling aims for genuine randomness in the gathering of data, and how probability provides the tools for calculating the likelihood of a given event based on that data. Solve a range of problems in probability, including a case of medical diagnosis that involves the application of Bayes' theorem.
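A sketch of that Bayes' theorem calculation in R; the prevalence, sensitivity, and specificity below are illustrative assumptions, not course data:

```r
prevalence  <- 0.01   # assumed P(disease)
sensitivity <- 0.95   # assumed P(test positive | disease)
specificity <- 0.90   # assumed P(test negative | no disease)

# Total probability of a positive test
p_pos <- sensitivity * prevalence + (1 - specificity) * (1 - prevalence)

# Bayes' theorem: P(disease | positive test)
sensitivity * prevalence / p_pos   # about 0.088: most positives are false
```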
There's more than one way to be truly random! Delve deeper into probability by surveying several discrete probability distributions—those defined on discrete random variables. Examples include the Bernoulli, binomial, geometric, negative binomial, and Poisson distributions—each tailored to answer a specific type of question. Get your feet wet by analyzing several sets of data using these tools.
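A minimal sketch of R's built-in functions for these distributions; the parameter values are arbitrary illustrations:

```r
dbinom(3, size = 10, prob = 0.5)  # P(exactly 3 successes in 10 trials)
dgeom(4, prob = 0.2)              # P(4 failures before the 1st success)
dnbinom(4, size = 3, prob = 0.2)  # P(4 failures before the 3rd success)
dpois(2, lambda = 3)              # P(2 events when the mean rate is 3)
rbinom(5, size = 10, prob = 0.5)  # simulate 5 binomial draws
```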
Focus on the normal distribution, which is the most celebrated type of continuous probability distribution. Characterized by a bell-shaped curve that is symmetrical around the mean, the normal distribution shows up in a wide range of phenomena. Use R to find percentiles, probabilities, and other properties connected with this ubiquitous data pattern.
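A minimal sketch of those calculations, assuming an illustrative N(100, 15²) population:

```r
pnorm(1.96)                       # P(Z <= 1.96) for the standard normal
qnorm(0.90, mean = 100, sd = 15)  # 90th percentile of N(100, 15^2)
pnorm(120, 100, 15) - pnorm(80, 100, 15)  # P(80 < X < 120)
curve(dnorm(x), from = -4, to = 4)        # the bell curve itself
```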
When are two variables correlated? Learn how to measure covariance, which captures how two random variables vary together. Then scale covariance by the two standard deviations to obtain a dimensionless number called the correlation coefficient. Using an R data set, plot correlation values for several variables, including the physical measurements of a sample population.
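A minimal sketch using R's built-in trees data (black cherry tree measurements), standing in here for whichever data set the lecture uses:

```r
cov(trees$Girth, trees$Volume)  # covariance, in mixed units
cor(trees$Girth, trees$Volume)  # dimensionless correlation coefficient
cor(trees)                      # correlation matrix for all variables
pairs(trees)                    # scatterplot matrix of every pair
```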
Graphical data analysis was once cumbersome and time-consuming, but that has changed with programming tools such as R. Analyze the classic Iris Flower Data Set—a standard benchmark for statistical classification techniques. See if you can detect a pattern in sepal and petal dimensions for different species of irises by using scatterplots, histograms, box plots, and other graphical tools.
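A minimal sketch of a few of those graphical tools applied to the iris data:

```r
data(iris)
# Scatterplot of petal dimensions, colored by species
plot(iris$Petal.Length, iris$Petal.Width, col = iris$Species,
     xlab = "Petal length", ylab = "Petal width")
legend("topleft", legend = levels(iris$Species), col = 1:3, pch = 1)

hist(iris$Sepal.Length, main = "Sepal length")  # histogram
boxplot(Petal.Length ~ Species, data = iris)    # box plots by species
```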
It’s rarely possible to collect all the data from a population. Learn how to get a lot from a little by “bootstrapping,” a technique that estimates the variability of a statistic by resampling the same data set, with replacement, over and over. It sounds like magic, but it works! Test tools such as the Q-Q plot and the Shapiro-Wilk test, and learn how to apply the central limit theorem.
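A minimal bootstrap sketch; the "observed" sample here is simulated rather than drawn from course data:

```r
set.seed(1)                        # for reproducibility
x <- rnorm(30, mean = 10, sd = 2)  # stand-in for an observed sample

# Resample with replacement many times to approximate the
# sampling distribution of the mean
boot_means <- replicate(5000, mean(sample(x, replace = TRUE)))
quantile(boot_means, c(0.025, 0.975))  # a 95% bootstrap interval

qqnorm(x); qqline(x)  # Q-Q plot against the normal distribution
shapiro.test(x)       # Shapiro-Wilk test of normality
```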
Take your understanding of descriptive techniques to the next level, as you begin your study of statistical inference, learning how to extract information from sample data. In this lecture, focus on the point estimate—a single number that provides a sensible value for a given parameter. Consider how to obtain an unbiased estimator, and discover how to calculate the standard error for this estimate.
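A minimal sketch of a point estimate and its standard error, again on simulated data:

```r
set.seed(2)
x <- rnorm(25, mean = 50, sd = 8)  # simulated sample for illustration

xbar <- mean(x)                  # unbiased point estimate of the mean
se   <- sd(x) / sqrt(length(x))  # standard error of that estimate
c(estimate = xbar, std.error = se)
```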
Move beyond point estimates to consider the confidence interval, which provides a range of possible values. See how this tool gives an accurate estimate for a large population by sampling a relatively small subset of individuals. Then learn about the choice of confidence level, which is often specified as 95%. Investigate what happens when you adjust the confidence level up or down.
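A minimal sketch using t.test, whose output includes a confidence interval for the mean (simulated data again):

```r
set.seed(3)
x <- rnorm(40, mean = 50, sd = 8)      # illustrative sample
t.test(x)$conf.int                     # 95% interval (the default level)
t.test(x, conf.level = 0.99)$conf.int  # widen the confidence level to 99%
```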
Having learned to estimate a given population parameter from sample data, now go the other direction, starting with a hypothesized parameter for a population and determining whether we think a given sample could have come from that population. Practice this important technique, called hypothesis testing, with a single parameter, such as whether a lifestyle change reduces cholesterol. Discover the power of the p-value in gauging the significance of your result.
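A sketch of such a one-sample test; the cholesterol changes below are made-up numbers for illustration:

```r
# Hypothetical change in cholesterol after a lifestyle intervention
change <- c(-12, -5, 0, -20, -8, 3, -15, -7, -10, -2)

# H0: mean change is 0; H1: mean change is negative (cholesterol reduced)
t.test(change, mu = 0, alternative = "less")  # reports the p-value
```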
Extend the method of hypothesis testing to see whether data from two different samples could have come from the same population—for example, chickens on different feed types or an ice skater’s speed in two contrasting maneuvers. Using R, learn how to choose the right tool to differentiate between independent and dependent samples. One such tool is the matched pairs t-test.
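A minimal sketch of both cases; the chick weights ship with R, while the paired before/after times are invented:

```r
# Independent samples: chick weights on two feed types (built-in data)
cw <- droplevels(subset(chickwts, feed %in% c("soybean", "linseed")))
t.test(weight ~ feed, data = cw)      # two-sample (Welch) t-test

# Dependent samples: the same subjects measured twice (made-up pairs)
before <- c(10.2, 9.8, 11.1, 10.5, 9.9)
after  <- c(10.6, 10.1, 11.3, 10.9, 10.0)
t.test(before, after, paired = TRUE)  # matched pairs t-test
```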
Step into fully modeling the relationship between variables with the most common technique for this purpose: linear regression. Using R and data on the growth of wheat under differing amounts of rainfall, test different models against criteria for determining their validity. Cover common pitfalls when fitting a linear model to data.
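A minimal sketch of the workflow; the rainfall and yield numbers are invented stand-ins for the lecture's data:

```r
rainfall <- c(10, 15, 20, 25, 30, 35, 40)         # cm, hypothetical
yield    <- c(1.8, 2.6, 3.1, 3.9, 4.2, 4.9, 5.3)  # tons/ha, hypothetical

fit <- lm(yield ~ rainfall)          # fit the linear model
summary(fit)                         # coefficients, R-squared, p-values
plot(rainfall, yield); abline(fit)   # data with the fitted line
par(mfrow = c(2, 2)); plot(fit)      # diagnostics for the common pitfalls
```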
What do you do if your data doesn't follow linear model assumptions? Learn how to transform the data to eliminate increasing or decreasing variance (called heteroscedasticity), thereby helping satisfy the assumptions of normality, independence, and linearity. One of your test cases uses the built-in R data set (mtcars) of miles per gallon versus weight in 1973-74 model automobiles.
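A minimal sketch of diagnosing and transforming with mtcars; the log transform is one common illustrative remedy:

```r
data(mtcars)                           # 1973-74 Motor Trend road tests
fit_raw <- lm(mpg ~ wt, data = mtcars)
plot(fitted(fit_raw), resid(fit_raw))  # look for a fan-shaped spread

# A log transform of the response often stabilizes the variance
fit_log <- lm(log(mpg) ~ wt, data = mtcars)
plot(fitted(fit_log), resid(fit_log))
summary(fit_log)
```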
Multiple linear regression lets you deal with data that has multiple predictors. Begin with an R data set on diabetes in Pima Indian women that has an array of potential predictors. Evaluate these predictors for significance. Then turn to data where you fit a multiple regression model by adding explanatory variables one by one. Learn to avoid overfitting, which happens when too many explanatory variables are included.
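The Pima data ships with R's MASS package as Pima.tr, but its diabetes outcome is binary, so this sketch illustrates the same add-one-variable workflow on mtcars instead:

```r
fit1 <- lm(mpg ~ wt, data = mtcars)
fit2 <- update(fit1, . ~ . + hp)    # add a second explanatory variable
fit3 <- update(fit2, . ~ . + disp)  # and a third

summary(fit3)            # check each predictor's significance
anova(fit1, fit2, fit3)  # does each addition actually improve the fit?
```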
Delve into ANOVA, short for analysis of variance, which is used for comparing three or more group means for statistical significance. ANOVA answers three questions: Do categories have an effect? How does the effect differ across categories? Is the effect statistically significant? Learn to apply the F-test and Tukey's honest significant difference (HSD) test.
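A minimal sketch using R's built-in chickwts data:

```r
# Compare mean chick weights across all six feed types
fit <- aov(weight ~ feed, data = chickwts)
summary(fit)   # the F-test: do the group means differ at all?
TukeyHSD(fit)  # which pairs of feeds differ, and by how much
```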
You can combine features of regression and ANOVA to perform what is called analysis of covariance, or ANCOVA. And that's not all: Just as you can extend simple linear regression to multiple linear regression, you can also extend ANOVA to handle multiple response variables at once, a technique known as MANOVA, or multivariate analysis of variance. Learn when to apply each of these techniques.
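A minimal sketch of each, on built-in data chosen purely for illustration:

```r
# ANCOVA: a categorical factor plus a continuous covariate
ancova_fit <- aov(mpg ~ wt + factor(cyl), data = mtcars)
summary(ancova_fit)

# MANOVA: several response variables modeled at once
manova_fit <- manova(cbind(Sepal.Length, Petal.Length) ~ Species,
                     data = iris)
summary(manova_fit)
```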
While a creative statistical analysis can sometimes salvage a poorly designed experiment, gain an understanding of how experiments can be designed from the outset to collect far more reliable statistical data. Consider the role of randomization, replication, blocking, and other criteria, along with the use of ANOVA to analyze the results. Work several examples in R.
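A sketch of analyzing a randomized block design; the layout and yields below are simulated for illustration:

```r
set.seed(4)
# Four treatments replicated across five blocks
d <- expand.grid(treatment = factor(1:4), block = factor(1:5))
d$yield <- rnorm(20, mean = 10 + as.numeric(d$treatment), sd = 1)

# Blocking enters the ANOVA model as a second factor
summary(aov(yield ~ treatment + block, data = d))
```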
Delve into decision trees—models that predict an outcome by splitting the data through a branching sequence of simple rules. Trees for continuous outcomes are called regression trees, while those for categorical outcomes are called classification trees. Learn how and when to use each, producing inferences that are easily understood by non-statisticians.
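A minimal sketch using the rpart package, one common choice for fitting both kinds of tree (the lecture's package may differ):

```r
library(rpart)  # ships with R; install.packages("rpart") if needed

# Classification tree: predict iris species from its measurements
class_tree <- rpart(Species ~ ., data = iris, method = "class")
plot(class_tree); text(class_tree)  # draw the branching rules

# Regression tree: predict a continuous outcome instead
reg_tree <- rpart(mpg ~ wt + hp, data = mtcars, method = "anova")
```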
What can be done with data when transformations and tree algorithms don't work? One approach is polynomial regression, a form of regression analysis in which the relationship between the independent and dependent variables is modeled as an nth-degree polynomial. Step functions fit smaller, local models instead of one global model. Or, if you have binary data, there is logistic regression, in which the response variable has categorical values such as true/false or 0/1.
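A minimal sketch of all three approaches, run on mtcars purely for illustration:

```r
# Polynomial regression: mpg as a degree-2 polynomial in weight
poly_fit <- lm(mpg ~ poly(wt, 2), data = mtcars)

# Step function: bin the predictor and fit a local mean in each bin
step_fit <- lm(mpg ~ cut(wt, 4), data = mtcars)

# Logistic regression: a 0/1 response (automatic vs. manual transmission)
logit_fit <- glm(am ~ wt, data = mtcars, family = binomial)
summary(logit_fit)
```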
Spatial analysis is a set of statistical tools used to find additional order and patterns in spatial phenomena. Drawing on libraries for spatial analysis in R, use a type of graph called a semivariogram to plot the spatial autocorrelation of the measured sample points. Try your hand at data sets involving the geographic incidence of various medical conditions.
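A minimal sketch using the sp and gstat packages and their classic meuse example data, one common toolchain for this (the lecture's libraries may differ):

```r
library(sp)     # install.packages(c("sp", "gstat")) if needed
library(gstat)
data(meuse)                     # soil measurements along the river Meuse
coordinates(meuse) <- ~ x + y   # declare which columns are coordinates

v <- variogram(log(zinc) ~ 1, meuse)  # empirical semivariogram
plot(v)  # semivariance versus distance between sample points
```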
Time series analysis provides a way to model response data that is correlated with itself, from one point in time to the next, such as daily stock prices or weather history. After disentangling seasonal changes from longer-term patterns, consider methods that can model a dependency on time, collectively known as ARIMA (autoregressive integrated moving average) models.
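A minimal sketch on R's built-in AirPassengers series; the ARIMA orders chosen here are illustrative, not prescriptive:

```r
plot(decompose(AirPassengers))  # split into trend, seasonal, remainder

# A seasonal ARIMA model on the monthly series
fit <- arima(AirPassengers, order = c(1, 1, 1),
             seasonal = list(order = c(0, 1, 1), period = 12))
predict(fit, n.ahead = 12)      # forecast the next twelve months
```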
Turn to an entirely different approach for doing statistical inference: Bayesian statistics, which starts from a prior probability and updates it as additional data accumulate. Unlike the frequentist approach, the Bayesian method does not depend on an infinite number of hypothetical repetitions. Explore the flexibility of Bayesian analysis.
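A minimal sketch of a conjugate Beta-Binomial update; the prior and the data are invented for illustration:

```r
a <- 2; b <- 2                 # prior: Beta(2, 2), mildly centered on 0.5
successes <- 7; failures <- 3  # hypothetical observed data

# The posterior is Beta(a + successes, b + failures)
curve(dbeta(x, a, b), from = 0, to = 1, lty = 2)          # prior (dashed)
curve(dbeta(x, a + successes, b + failures), add = TRUE)  # posterior
qbeta(c(0.025, 0.975), a + successes, b + failures)       # 95% credible interval
```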
Close the course by learning how to write custom functions for your R programs, streamlining operations, enhancing graphics, and putting R to work in a host of other ways. Professor Williams also supplies tips on downloading and exporting data, and making use of the rich resources for R—a truly powerful tool for understanding and interpreting data in whatever way you see fit.
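A minimal sketch of a custom function plus a data export; the function name and output file are hypothetical:

```r
# A custom function: standard error of the mean, with optional NA handling
se_mean <- function(x, na.rm = FALSE) {
  if (na.rm) x <- x[!is.na(x)]
  sd(x) / sqrt(length(x))
}

se_mean(c(4, 8, 6, 5, NA), na.rm = TRUE)
write.csv(mtcars, "mtcars_export.csv")  # export data to a CSV file
```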