Power Analysis

MCPower offers two main analysis methods: estimate power at a fixed sample size (find_power()), or search a range of sample sizes for the minimum that achieves a target power level (find_sample_size()).


find_power()

MCPower.find_power(sample_size, target_test='all', correction=None, print_results=True, scenarios=False, summary='short', return_results=False, test_formula='', progress_callback=None, cancel_check=None)

Calculate statistical power for given sample size.

Parameters:
  • sample_size (int) – Number of observations per simulation

  • target_test (str) – Effect(s) to test. "all" (default) runs overall F-test + all individual fixed effects. Also accepts "all-posthoc", specific names like "x1", pairwise comparisons like "factor[a] vs factor[b]", exclusions like "-test_name", or comma-separated combinations. Duplicate tests raise ValueError.

  • correction (str | None) – Multiple comparison correction (None, "bonferroni", "holm", "fdr"/"benjamini-hochberg", or "tukey")

  • print_results (bool) – Whether to print results

  • scenarios (bool | List[str]) – Scenario analysis control — False (default) disables scenario analysis, True runs all configured scenarios, or pass a list of scenario names to run selectively (e.g. ["optimistic", "extreme"]). Case-insensitive.

  • summary (str) – Output detail level ("short" or "long")

  • return_results (bool) – Return results dict

  • test_formula (str) – Formula for statistical testing (default: use data generation formula). If the formula contains random effects like (1|school), analysis switches to mixed model testing.

  • progress_callback – Progress reporting control — None (default) auto-uses PrintReporter when print_results is True, False explicitly disables progress, or pass a callable (current, total) for custom reporting.

  • cancel_check – Optional callable returning True to abort.

Returns:

If return_results is True, returns a results dictionary with keys "model" (metadata) and "results" (power estimates). Returns None otherwise.

Return type:

dict or None

Target Test Options

  Value                           Meaning
  "all"                           Overall F-test plus all individual fixed effects (no post-hoc contrasts)
  "overall"                       Overall model F-test only
  "x1"                            A specific predictor by name
  "x1, x2"                        Comma-separated list of specific tests
  "group[1] vs group[2]"          Post-hoc pairwise comparison between two factor levels
  "all-posthoc"                   All pairwise contrasts for every factor variable
  "-overall"                      Exclude a test (prefix with -); use with keywords like "all, -overall"
  "all, all-posthoc, -overall"    Combine keywords and exclusions

Duplicate tests in the resolved list raise a ValueError.
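The keyword/exclusion syntax above can be illustrated with a small resolver sketch. This is an illustration of the documented behavior only, not MCPower's actual parser; the function name is ours:

```python
def resolve_target_tests(spec, available):
    """Sketch of the documented keyword/exclusion resolution (not MCPower's real parser)."""
    selected = []
    for token in (t.strip() for t in spec.split(",")):
        if token.startswith("-"):
            # Exclusion: drop a previously selected test.
            selected = [t for t in selected if t != token[1:]]
        elif token == "all":
            # Overall F-test plus every individual fixed effect.
            selected.extend(available)
        else:
            selected.append(token)
    if len(selected) != len(set(selected)):
        # Matches the documented behavior: duplicates are an error.
        raise ValueError("duplicate tests in resolved list")
    return selected

resolve_target_tests("all, -overall", ["overall", "x1", "x2"])
# -> ["x1", "x2"]
```

Note that "all, x1" would raise ValueError, since "x1" is already included by "all".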

Correction Options

  Value                            Method
  None                             No correction
  "bonferroni"                     Bonferroni correction (conservative)
  "holm"                           Holm-Bonferroni step-down (less conservative than Bonferroni)
  "fdr" or "benjamini-hochberg"    Benjamini-Hochberg false discovery rate
  "tukey"                          Tukey HSD (requires at least one post-hoc contrast in target_test)
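To see why Holm is less conservative than Bonferroni, here is a minimal sketch of the step-down rule. MCPower applies corrections internally; this is for illustration only:

```python
def holm_rejects(pvals, alpha=0.05):
    """Holm step-down: compare the i-th smallest p-value to alpha / (m - i)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    rejected = [False] * m
    for rank, idx in enumerate(order):
        if pvals[idx] <= alpha / (m - rank):
            rejected[idx] = True
        else:
            break  # once one test fails, all larger p-values fail too
    return rejected

holm_rejects([0.010, 0.040, 0.030])
# -> [True, False, False]
```

Bonferroni would compare every p-value to 0.05/3; Holm only holds the smallest p-value to that threshold and relaxes it step by step.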

Progress Callback

  Value                                 Behavior
  None (default) + print_results=True   Automatic PrintReporter on stderr
  None + print_results=False            Silent (no progress output)
  callable(current: int, total: int)    Custom callback invoked periodically
  False                                 Explicitly disable all progress reporting
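A custom progress callback is any callable accepting (current, total). A minimal recorder, for example to drive a GUI progress bar; the class name is illustrative, not part of MCPower's API:

```python
class ProgressRecorder:
    """Collects (current, total) progress calls for later inspection."""
    def __init__(self):
        self.calls = []

    def __call__(self, current, total):
        # MCPower would invoke this periodically during simulation.
        self.calls.append((current, total))

recorder = ProgressRecorder()
# model.find_power(sample_size=100, progress_callback=recorder)
recorder(200, 400)  # simulate one periodic call
recorder.calls[-1]
# -> (200, 400)
```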

Examples

from mcpower import MCPower

model = MCPower("y = x1 + x2 + x1:x2")
model.set_simulations(400)
model.set_effects("x1=0.5, x2=0.3, x1:x2=0.2")

# Basic usage (prints results to stdout)
model.find_power(sample_size=100)
# Programmatic access with correction
result = model.find_power(
    sample_size=200,
    target_test="x1, x2",
    correction="bonferroni",
    return_results=True,
    print_results=False,
)

Test Formula Example

Generate data with a full model but test with a reduced model to evaluate model misspecification:

model = MCPower("y = x1 + x2 + x3")
model.set_simulations(400)
model.set_effects("x1=0.5, x2=0.3, x3=0.2")

# Full model (default)
model.find_power(100)
# Reduced model (omit x3 from analysis)
result = model.find_power(100, test_formula="y = x1 + x2",
                          return_results=True, print_results=False)
# result contains power for x1 and x2 only

See Tutorial: Using test_formula for more examples including mixed-model cross-testing.
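cancel_check pairs naturally with a time budget: the callable is polled during the run, and the analysis aborts once it returns True. A sketch of a deadline-based check (the helper name is ours, not part of the API):

```python
import time

def deadline_check(seconds):
    """Return a zero-argument callable that becomes True once the budget is spent."""
    deadline = time.monotonic() + seconds
    return lambda: time.monotonic() > deadline

cancel = deadline_check(60.0)
# model.find_power(sample_size=100, cancel_check=cancel)
cancel()  # False until 60 s have elapsed
```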

See Also

find_sample_size() – search a range of sample sizes for the minimum that reaches target power.


find_sample_size()

MCPower.find_sample_size(target_test='all', from_size=30, to_size=200, by=5, correction=None, print_results=True, scenarios=False, summary='short', return_results=False, test_formula='', progress_callback=None, cancel_check=None)

Find minimum sample size needed for target power.

Parameters:
  • target_test (str) – Effect(s) to test. Defaults to "all". See find_power() for full keyword/exclusion syntax.

  • from_size (int) – Minimum sample size to test

  • to_size (int) – Maximum sample size to test

  • by (int) – Step size between sample sizes

  • correction (str | None) – Multiple comparison correction

  • print_results (bool) – Whether to print results

  • scenarios (bool | List[str]) – Scenario analysis control — False (default) disables scenario analysis, True runs all configured scenarios, or pass a list of scenario names to run selectively (e.g. ["optimistic", "extreme"]). Case-insensitive.

  • summary (str) – Output detail level ("short" or "long")

  • return_results (bool) – Return results dict

  • test_formula (str) – Formula for statistical testing (default: use data generation formula). If the formula contains random effects like (1|school), analysis switches to mixed model testing.

  • progress_callback – Progress reporting control — None (default) auto-uses PrintReporter when print_results is True, False explicitly disables progress, or pass a callable (current, total) for custom reporting.

  • cancel_check – Optional callable returning True to abort.

Returns:

If return_results is True, returns a results dictionary with keys "model" (metadata) and "results" (per-sample-size power estimates, first-achieved sizes). Returns None otherwise.

Return type:

dict or None

Examples

from mcpower import MCPower

model = MCPower("y = treatment + age")
model.set_simulations(400)
model.set_effects("treatment=0.4, age=0.2")
model.set_variable_type("treatment=binary")

# Search from 30 to 300 in steps of 30
model.find_sample_size(from_size=30, to_size=300, by=30)
# Programmatic access
result = model.find_sample_size(
    target_test="treatment",
    from_size=50,
    to_size=500,
    by=50,
    return_results=True,
    print_results=False,
)

Test Formula Example

model = MCPower("y = x1 + x2 + x3")
model.set_simulations(400)
model.set_effects("x1=0.5, x2=0.3, x3=0.2")

# Full model (default)
model.find_sample_size(target_test="x1, x2, x3")
# Reduced model (omit x3 from analysis)
result = model.find_sample_size(target_test="x1, x2",
                                test_formula="y = x1 + x2",
                                return_results=True, print_results=False)
# result contains sample sizes for x1 and x2 only

See Tutorial: Using test_formula for more examples including mixed-model cross-testing.

Notes

  • The target power level is set via model.set_power() (default: 80%).

  • If no sample size in the range achieves target power, the results indicate this – consider widening the range or increasing effect sizes.

  • For mixed models, each sample size refers to the total number of observations across all clusters.

  • Progress reporting spans all sample sizes multiplied by the number of simulations, so the total count is larger than a single find_power() call.
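The "first-achieved" size in the results is simply the smallest tested sample size whose estimated power meets the target. A consumer of the results could recompute it like this (a sketch against an assumed {sample_size: power} mapping, not the results dict's actual layout):

```python
def first_achieving(power_by_n, target=0.80):
    """Smallest sample size whose estimated power reaches the target, else None."""
    for n in sorted(power_by_n):
        if power_by_n[n] >= target:
            return n
    return None  # no size in the tested range achieved the target

first_achieving({30: 0.41, 60: 0.72, 90: 0.85})
# -> 90
```

A None result corresponds to the case noted above: widen the search range or revisit the assumed effect sizes.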

See Also

find_power() – estimate power at a fixed sample size.