Compute the risk difference from event counts or probabilities. Review the tables, inputs, exports, and graph output, and use the practical examples to better understand differences in treatment effect.
| Group | Events | Non-Events | Total | Risk |
|---|---|---|---|---|
| Treatment | 42 | 158 | 200 | 0.21 |
| Control | 30 | 170 | 200 | 0.15 |
| Difference | - | - | - | 0.06 |
In this example, treatment risk is 0.21 and control risk is 0.15. The estimated risk difference is 0.06, meaning six additional events per hundred observations.
The calculator uses the risk difference formula:
RD = p₁ − p₀
Where p₁ is the event risk in group 1 and p₀ is the event risk in group 0.
When counts are entered, with a = events and b = non-events in group 1, and c = events and d = non-events in group 0:
p₁ = a / (a + b)
p₀ = c / (c + d)
The standard error is:
SE = √[(p₁(1−p₁)/n₁) + (p₀(1−p₀)/n₀)]
The confidence interval is:
RD ± Z × SE
Where Z is the critical value for the chosen confidence level (1.96 for a 95% interval).
This approach helps compare absolute risk change between two groups rather than a ratio.
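The formulas above can be sketched in a few lines of Python. The function name `risk_difference` and the 2×2 cell labels a, b, c, d follow the definitions in this section; everything else is illustrative, not the calculator's actual implementation:

```python
import math

def risk_difference(a, b, c, d, z=1.96):
    """Risk difference with a Wald confidence interval from 2x2 counts.

    a, b: events and non-events in group 1 (treatment)
    c, d: events and non-events in group 0 (control)
    z:    critical value for the confidence level (1.96 for 95%)
    """
    n1, n0 = a + b, c + d
    p1, p0 = a / n1, c / n0          # group risks
    rd = p1 - p0                     # absolute risk difference
    se = math.sqrt(p1 * (1 - p1) / n1 + p0 * (1 - p0) / n0)
    return rd, se, (rd - z * se, rd + z * se)

# Worked example from the table above: 42/200 vs 30/200
rd, se, (lo, hi) = risk_difference(42, 158, 30, 170)
print(f"RD = {rd:.2f}, SE = {se:.4f}, 95% CI = ({lo:.4f}, {hi:.4f})")
# prints: RD = 0.06, SE = 0.0383, 95% CI = (-0.0151, 0.1351)
```

Note that the interval here crosses zero, so this example difference would not be statistically significant at the 95% level.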
Select the input mode first. Use event counts when you know the number of events and non-events in both groups. Use probabilities when you already know the risks and sample sizes.
Enter the values for both groups and choose a confidence level. Press Calculate to display the result directly above the form.
Review the two group risks, the risk difference, the standard error, and the confidence interval. Use the graph for a quick comparison, and export the output as CSV or PDF when needed.
This calculator measures the absolute difference between two event probabilities. It is often used in clinical research, epidemiology, policy analysis, and program evaluation because it gives a direct effect size in probability units.
Unlike a relative measure, the risk difference tells you how many more or fewer events occur in one group compared with another. That makes interpretation practical when decisions depend on real outcome changes in a sample or population.
The tool supports event count input and direct probability input, which makes it flexible for raw studies, summary tables, and classroom exercises. The result area also includes confidence interval estimates for better statistical interpretation.
Use the example data table to understand the setup, then test your own values. The export buttons make it easier to save results for reports, assignments, or audit trails.
**What is risk difference?**
Risk difference is the absolute difference between two event probabilities. It shows how much higher or lower the event rate is in one group compared with another group.
**How is risk difference different from relative risk?**
Risk difference measures an absolute change, while relative risk measures a proportional change. Risk difference is useful when you want the direct increase or decrease in event probability.
**Can I enter probabilities instead of event counts?**
Yes. This calculator supports direct probability input with sample sizes. That helps when a report already provides event rates but you still need the difference and confidence interval.
**Why are sample sizes required?**
Sample sizes are needed to estimate the standard error and confidence interval. Without them, the calculator cannot measure the uncertainty around the estimated difference.
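When the group risks are already known, the interval can be computed directly from the probabilities and sample sizes. This sketch assumes the same Wald standard-error formula given earlier; the function name `rd_from_probs` is illustrative:

```python
import math

def rd_from_probs(p1, n1, p0, n0, z=1.96):
    """Risk difference and Wald CI from known group risks and sample sizes."""
    rd = p1 - p0
    se = math.sqrt(p1 * (1 - p1) / n1 + p0 * (1 - p0) / n0)
    return rd, (rd - z * se, rd + z * se)

# The worked example expressed as probabilities: 0.21 (n=200) vs 0.15 (n=200)
rd, (lo, hi) = rd_from_probs(0.21, 200, 0.15, 200)
```

This reproduces the counts-based result exactly, because the counts only enter the formula through p₁, p₀, n₁, and n₀.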
**What does a negative risk difference mean?**
A negative value means the event risk in group 1 is lower than the event risk in group 0. That often suggests reduced absolute risk in the first group.
**How should I interpret the confidence interval?**
The confidence interval gives a likely range for the true risk difference. A narrow interval suggests more precision, while a wide interval suggests more uncertainty.
**Is risk difference used in clinical research?**
Yes. Risk difference is common in medical and public health studies because it communicates the absolute treatment effect in a practical and interpretable way.
**Does the calculator test statistical significance?**
It provides a confidence interval, which helps assess significance. If the interval excludes zero, the difference is commonly treated as statistically meaningful at that confidence level.
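The zero-exclusion check described above amounts to a single comparison. The names `lo` and `hi` below stand for the interval bounds from the worked example and are illustrative:

```python
# Bounds of the 95% CI from the worked example (0.06 ± 1.96 × SE)
lo, hi = -0.0151, 0.1351

# Treat the difference as statistically significant at this confidence
# level only when the whole interval lies on one side of zero
significant = lo > 0 or hi < 0
print(significant)  # prints False: the interval contains zero
```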
Important Note: All calculators on this site are for educational purposes only, and we do not guarantee the accuracy of results. Please consult other sources as well.