10 Principles of Practical Statistical Reasoning
Practical Statistical Reasoning is a term that covers the nature and objective of applied statistics/data science, principles common to all applications, and practical steps/questions for better conclusions. The following principles have helped me become more efficient with my analyses and clearer in my conclusions.
By Neil Chandarana, Machine Learning
There are 2 core aspects to fruitful application of statistics (data science):
 Domain knowledge.
 Statistical methodology.
Due to the highly specific nature of this field, it is difficult for any book or article to convey both a detailed and accurate description of the interplay between the two. In general, one can read material of two types:
 Broad info on statistical methods with conclusions that generalise but are not specific.
 Detailed statistical methods with conclusions that are useful only in a specific domain.
After 3 years working on my own data science projects and 3.5 years manipulating data on the trading floor, I have found an additional category of learnings. It is fundamentally just as useful as the above, and I take it into every project/side hustle/consulting gig…
Practical Statistical Reasoning
I made that term up because I don’t really know what to call this category. However, it covers:
 The nature and objective of applied statistics/data science.
 Principles common to all applications
 Practical steps/questions for better conclusions
If you have experience applying statistical methods, I encourage you to use that experience to illuminate and criticise the following principles. If you have never tried implementing a statistical model, have a go and then return. Don’t treat the following as a list to memorise. You’ll synthesise the information best if you can relate it to your own experience.
The following principles have helped me become more efficient with my analyses and clearer in my conclusions. I hope you can find value in them too.
1 — Data quality matters
The extent to which poor data quality can be set right by more elaborate analyses is limited. Practical checks worth completing are:
 Visual/automatic inspection of values that are logically inconsistent or in conflict with prior information about the ranges likely to arise for each variable, e.g. extreme values, incorrect variable types.
 Frequency distributions of each variable.
 Pairwise scatter plots for low-level inspection of collinearity.
 Missing observations (0, 99, None, NaN values).
 Questioning the methods of collection for bias introduced by inconsistencies, e.g. differences between observers.
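As a rough sketch, some of these checks might look like this in pandas. The dataset, column names, and thresholds below are all hypothetical, purely for illustration:

```python
import numpy as np
import pandas as pd

# Hypothetical dataset: an age column and a body-temperature column.
df = pd.DataFrame({
    "age": [34, 29, 41, -2, 35, 99],           # -2 is logically impossible; 99 may be a missing-code
    "temp_c": [36.6, 37.1, np.nan, 36.8, 41.5, 36.9],
})

# Range check against prior knowledge of plausible values.
bad_age = df[(df["age"] < 0) | (df["age"] > 110)]

# Sentinel codes (0, 99, etc.) often hide missing observations.
suspect_missing = df[df["age"].isin([0, 99])]

# Explicitly missing values.
n_missing = df.isna().sum()

print(len(bad_age), len(suspect_missing), int(n_missing["temp_c"]))  # → 1 1 1
```

For frequency distributions and pairwise scatter, `df.hist()` and `pd.plotting.scatter_matrix(df)` cover the basic visual inspection.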
2 — Criticise variation
In nearly all problems, you will be dealing with uncontrolled variation. Your attitude to this variation should differ depending on whether it is an intrinsic part of the system under study or whether it represents experimental error. In both cases we consider the distribution of the variation, but the motivation differs:
 Intrinsic variation: we are interested in the detailed form of the distribution.
 Error variation: we are interested in what would have been observed had the error been eliminated.
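A toy simulation can make the distinction concrete; the study design and numbers below are invented. Averaging repeated measurements shrinks error variation, but the intrinsic spread between individuals remains:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical study: 200 individuals, each measured 5 times.
intrinsic = rng.normal(loc=50.0, scale=10.0, size=200)   # true values differ between individuals
error = rng.normal(loc=0.0, scale=4.0, size=(200, 5))    # measurement error on each observation
measurements = intrinsic[:, None] + error

# Averaging replicates reduces error variation by ~1/sqrt(5), so the
# spread of per-individual means approaches the intrinsic spread (sd ≈ 10).
per_individual_mean = measurements.mean(axis=1)
print(round(intrinsic.std(), 1), round(per_individual_mean.std(), 1))
```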
3 — Select a sensible depth of analysis
Try to consider depth independently of the amount of data or the technologies available. Just because it is easy/cheap to collect data doesn’t mean the data are relevant. The same applies to methodologies and technologies. Well-chosen analysis depth supports clear conclusions, and clear conclusions support better decision-making.
4 — Understand data structure
Data quantity concerns the number of individuals and number of variables per individual. Data structure = data quantity + groupings of individuals. Most datasets are of the following form:
 There are a number of individuals.
 On each individual, a number of variables are observed.
 Individuals are considered independent of one another.
Given this form, answering the following questions will shorten the path to meaningful interpretation of conclusions.
 What is to be regarded as an individual?
 Are individuals grouped/associated in ways that must be factored into the analysis?
 What variables are measured on each individual?
 Are any observations missing? What can be done to replace/estimate those values?
Note: small datasets allow easy inspection of data structure, whilst large datasets may only allow structural analysis on a small proportion of the data. Factor this into your analysis and take as long as you need.
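The questions above translate directly into a few lines of pandas. Everything here is hypothetical — the "individual" is a patient, grouped by clinic, with one measured score:

```python
import numpy as np
import pandas as pd

# Hypothetical data: the individual is a patient; patients are grouped by clinic.
df = pd.DataFrame({
    "clinic": ["A", "A", "B", "B", "B"],
    "patient_id": [1, 2, 3, 4, 5],
    "score": [7.0, np.nan, 5.0, 6.0, np.nan],
})

# What is an individual, and how are individuals grouped?
n_individuals = df["patient_id"].nunique()
group_sizes = df.groupby("clinic").size()

# Which observations are missing, and one simple way to estimate them:
# impute within each group so the grouping structure is respected.
df["score_filled"] = df.groupby("clinic")["score"].transform(
    lambda s: s.fillna(s.mean())
)

print(n_individuals, group_sizes.to_dict(), df["score_filled"].tolist())
```

Group-wise imputation is only one of many options (dropping rows, model-based estimation); the point is that the choice should respect the structure you identified.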
5 — 4 phases of statistical analysis
 Initial data manipulation. Intention = carry out checks of data quality, structure and quantity, and assemble the data in a form suitable for detailed analysis.
 Preliminary analysis. Intention = clarify the form of data and suggest the direction of definitive analysis (plots, tables).
 Definitive analysis. Intention = provide the basis for conclusions.
 Presentation of conclusions. Intention = accurate, concise, lucid conclusions with domain interpretation.
…but there are caveats for these phases:
 Division of phases is useful but not rigid. Preliminary analysis may lead to clear conclusions whilst definitive analysis may reveal unexpected discrepancies that demand reconsideration of the whole basis of analysis.
 Phase 1 can be skipped when you are given a cleaned dataset.
 Phase 2 can be skipped in fields where substantial existing analyses are available.
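The four phases can be sketched as a pipeline. Every function name and the toy dataset below are hypothetical, intended only to show the separation of concerns:

```python
import numpy as np
import pandas as pd

def initial_manipulation(raw: pd.DataFrame) -> pd.DataFrame:
    # Phase 1: quality/structure/quantity checks; assemble the analysis table.
    return raw.dropna().reset_index(drop=True)

def preliminary_analysis(df: pd.DataFrame) -> dict:
    # Phase 2: plots and tables that suggest the definitive analysis.
    return {"summary": df.describe()}

def definitive_analysis(df: pd.DataFrame) -> float:
    # Phase 3: the analysis the conclusions rest on (here, simply a mean).
    return float(df["y"].mean())

def present_conclusions(estimate: float) -> str:
    # Phase 4: a concise, domain-facing statement of the result.
    return f"Estimated mean response: {estimate:.2f}"

raw = pd.DataFrame({"y": [1.0, 2.0, np.nan, 3.0]})
clean = initial_manipulation(raw)
_ = preliminary_analysis(clean)
report = present_conclusions(definitive_analysis(clean))
print(report)  # → Estimated mean response: 2.00
```

Keeping the phases as separate steps makes it cheap to loop back when the definitive analysis reveals a discrepancy, as the caveats above warn.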
6 — What’s the output?
Remember, statistical analysis is but a single step in a larger decision-making process. Presentation of conclusions to decision-makers is critical to the effectiveness of any analysis:
 Conclusion style should depend on the audience.
 Explain the broad strategy of analysis in a form intelligible to a critical non-technical reader.
 Include direct links between conclusions and data.
 Effort spent presenting complex analysis in simple ways is worthwhile. However, be aware that simplicity is subjective and correlated with familiarity.
7 — Appropriate analysis style
From a technical perspective, the style of analysis refers to how the underlying system of interest is modelled:
 Probabilistic/Inferential: draw conclusions subject to uncertainty, often numeric.
 Descriptive: seeks to summarise data, often graphical.
An appropriate analysis style helps retain focus. Give it consideration early on and it will reduce the need to return to time-consuming data-processing steps.
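A toy contrast between the two styles, on simulated data with invented numbers:

```python
import numpy as np

rng = np.random.default_rng(1)
sample = rng.normal(loc=100.0, scale=15.0, size=80)  # hypothetical measurements

# Descriptive style: summarise the data at hand.
mean, sd = sample.mean(), sample.std(ddof=1)

# Probabilistic/inferential style: a numeric conclusion about the
# population, with uncertainty attached. Here, a normal-approximation
# 95% confidence interval for the population mean.
sem = sd / np.sqrt(len(sample))
ci = (mean - 1.96 * sem, mean + 1.96 * sem)

print(round(mean, 1), (round(ci[0], 1), round(ci[1], 1)))
```

The descriptive numbers say what this sample looks like; the interval makes a claim beyond the sample, which is exactly the extra commitment the inferential style takes on.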
8 — Computational consideration is only sometimes an issue
The choice of technology seeps into all aspects of applied statistical analysis including:
 The organisation and storage of raw data.
 The arrangement of conclusions.
 Implementation of the main analysis/analyses.
But when should this be on the radar?
 Large scale investigation + large data = worth devoting resources to bespoke programs/libraries if flexibility and performance cannot be achieved via existing tools.
 Large scale investigation + small data = computational consideration not critical.
 Small scale investigation + large data = bespoke programs infeasible; the availability of flexible and general programs/libraries is of central importance.
 Small scale investigation + small data = computational consideration not critical.
9 — Design investigations well
Whilst a range of statistical methods can be used across a range of investigation types, the interpretation of results will vary based on the investigation design:
 Experiments = the system under study is set up and controlled by the investigator. Clear-cut differences can be attributed to variables confidently.
 Observational studies = the investigator has no control over data collection other than monitoring data quality. True explanatory variables may be missing, making it hard to draw conclusions with confidence.
 Sample surveys = a sample is drawn from a population by methods (e.g. randomisation) under the control of the investigator. Conclusions can be drawn with confidence about the descriptive properties of the population; however, explanatory variables suffer as above.
 Controlled prospective studies = a sample is chosen by the investigator, and explanatory variables are measured and followed over time. This shares some virtues of experiments but, in reality, it is not possible to measure all explanatory variables.
 Controlled retrospective studies = existing datasets are analysed with appropriate handling of explanatory variables.
Note: A significant aspect of investigation design is distinguishing response and explanatory variables.
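The distinction matters because the roles are not symmetric. A hypothetical sketch (variable names and numbers invented):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical: response y (crop yield) vs explanatory x (fertiliser dose).
x = np.linspace(0, 10, 50)                  # explanatory: set by the investigator
y = 2.0 + 0.5 * x + rng.normal(0, 1, 50)    # response: observed outcome

# Least-squares fit of the response on the explanatory variable.
# Swapping x and y asks a different question and gives a different line,
# which is why the roles must be fixed at the design stage.
slope, intercept = np.polyfit(x, y, 1)
print(round(slope, 2), round(intercept, 2))
```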
10 — Purpose of investigation
Obviously the purpose of the investigation is important. But how should you consider purpose?
First, a general qualitative distinction of objectives:
 Explanatory: increase understanding. Dangerous to pick arbitrarily amongst well-fitting models.
 Predictive: primary practical use. Easier to pick arbitrarily amongst well-fitting models.
The specific purpose of the investigation may indicate that the analysis should be sharply focussed on a particular aspect of the system under study. It also has a bearing on the kinds of conclusion to be sought and on the presentation of the conclusions.
Purpose may dictate an expiry date of conclusions. Any model chosen on totally empirical grounds is at risk if changes in interrelationships between variables are observed.
Final Word
Almost all tasks in life can be considered from the framework:
Input > System > Output
The job then becomes to define each aspect of the framework.
Practical statistical reasoning addresses the ‘System’. Some parts of the system cannot be determined out of context. Some parts can. Practical statistical reasoning is really just the ability to define your ‘System’ easily and competently. That ability is definitely not limited to these principles.
If you’d like to see programming/data science side hustles built in front of you, check out my YouTube channel where I post the full build in Python.
The goal is to inspire and collaborate so reach out!
Bio: Neil Chandarana works in Machine Learning, and is an ex-options trader. He is working on projects that improve life and enhance the human experience of life using code, and likes to share his thoughts.
Original. Reposted with permission.