Any discipline — medicine to materials, genomics to risk. Literature gates score every claim, Monte Carlo paths are scaffolded from templates, and every run ships with a reproducible ID, parameter snapshot, and evidence you can attach to notebooks or manuscripts.
20+ domain templates
< 10 min to first simulation
100% reproducible runs
In-app tracks
The research workspace is organized into four product tracks — each with dedicated routes, checkpoints, and exports. Wizard flows stitch them into end-to-end studies.
Navigate sources, hypotheses, and evidence trails — built for literature-heavy workflows with exportable summaries.
Parameter sweeps, scenario imports, and long-running simulation monitors with session checkpoints.
Design and iterate survey instruments with agent assistance — keep methodology and instruments versioned.
Ongoing observation-style runs with alerts and structured logging for operational research programs.
University R&D
Bethlehem University (wireless, drones, energy engineering) and Northern Border University (UAV SAR, power networks, open datasets) — with citations on every claim.
Why teams adopt it
The same workflow anchors telecom link budgets, PK-style sweeps, and custom domains — with gates and exports your PI or reviewers can inspect.
Reference gates score coverage per simulation aspect; generated runs respect literature-backed bounds instead of ad hoc constants.
Pre-built schemas from life and environmental sciences through engineering, quantitative methods, social sciences, and computational security — plus CUSTOM for anything the catalog does not name yet.
Each run gets an ID, scaffold path, hashed parameter snapshot, and audit-friendly exports aligned with institutional expectations.
How it works
Structured where it matters, flexible where science is messy — no blank forms, no untracked parameter drift.
Natural language plus structured domain selection so downstream templates and gates know what “good” looks like.
AI-assisted reference analysis, DOI validation, and explicit gap flags before simulation spend.
Pre-built uncertainty ranges and metrics — not a generic spreadsheet — so sweeps stay physically or clinically plausible.
Batch sizes, convergence monitors, and parallel scenario paths generated with your template, not retyped each project.
Python scaffolds with instrumentation hooks and run IDs baked in so logs and figures trace back to inputs (sketched below).
Figures, tables, run records, and trails suitable for notebooks, appendices, or regulatory-style review packets.
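To make the code-generation step concrete, here is a minimal sketch of the scaffold pattern, assuming NumPy and a SHA-256 parameter hash; the names and snapshot layout are illustrative, not the product's actual generated code.

    import hashlib
    import json
    import uuid

    import numpy as np

    # Illustrative scaffold pattern: parameters are hashed before any draw,
    # so every log line and figure traces back to exact inputs.
    PARAMS = {"carrier_freq_ghz": 12.0, "tx_power_dbw": 20.0, "n_paths": 10_000, "seed": 42}

    def param_hash(params: dict) -> str:
        """Hash a canonical JSON encoding of the parameter set."""
        blob = json.dumps(params, sort_keys=True).encode("utf-8")
        return hashlib.sha256(blob).hexdigest()[:12]

    def run_simulation(params: dict) -> dict:
        run_id = f"run-{uuid.uuid4().hex[:8]}-{param_hash(params)}"
        rng = np.random.default_rng(params["seed"])  # deterministic replay
        draws = rng.normal(loc=params["tx_power_dbw"], scale=1.0, size=params["n_paths"])
        return {"run_id": run_id, "mean_dbw": float(draws.mean()), "params": params}

    print(run_simulation(PARAMS))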
What you get
From template library to API access — pick what you need today, grow into the rest without changing tools.
Domain templates (20+)
Materials, robotics, aerospace, manufacturing, neuroscience, ecology, agriculture, energy, social science, education, cybersecurity, ML/OR, and more — plus custom.
Reference gate engine
Per-aspect coverage scores and gap detection before compute-heavy steps.
Monte Carlo scaffolder
Batch paths and convergence-aware structure, not one-off loops.
Parameter sweep builder
Grids and ranges tied to schema defaults and uncertainty bands.
DOI & literature search
Canonical reference capture with validation hooks.
Uncertainty quantification
Distributions and sensitivity hooks aligned to domain idioms.
Code generation (Python)
Runnable scaffolds with run metadata and instrumentation.
Evidence-friendly exports
Tables, figures, and machine-readable run summaries.
Reproducible run records
IDs, seeds, and hashed snapshots for replay and audit.
Collaboration & sharing
Session-scoped artifacts teammates can review without losing provenance.
API & SDK access
Automate briefs, runs, and exports from your stack.
Custom domain builder
Extend beyond catalog verticals with the same gates and exports.
Clinical, pharmacovigilance, genomics, and neuroscience templates with literature-first gates.
Trial-scale Monte Carlo, PK-style parameter sweeps, and coverage gates tuned to therapeutic claims.
Key parameters: clearance (L/h), volume of distribution (L), oral bioavailability (fraction, 0–1), elimination half-life (h).
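For a feel of how those parameters drive a sweep, a minimal one-compartment sketch: AUC = F·dose/CL and t½ = ln 2·V/CL, with lognormal between-subject variability. The dose, distributions, and values are placeholders, not clinical defaults.

    import numpy as np

    rng = np.random.default_rng(2024)  # deterministic seed, as in real runs
    n = 10_000
    dose_mg, F = 100.0, 0.8  # oral dose (mg) and bioavailability fraction, illustrative

    # Lognormal between-subject variability on clearance (L/h) and volume (L).
    cl = rng.lognormal(mean=np.log(5.0), sigma=0.30, size=n)
    v = rng.lognormal(mean=np.log(50.0), sigma=0.25, size=n)

    auc = F * dose_mg / cl        # exposure, AUC (mg·h/L)
    t_half = np.log(2) * v / cl   # elimination half-life (h)

    print(f"AUC median {np.median(auc):.1f} mg·h/L, "
          f"90% band [{np.percentile(auc, 5):.1f}, {np.percentile(auc, 95):.1f}]")
    print(f"t1/2 median {np.median(t_half):.1f} h")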
Signal-oriented simulations with explicit literature hooks for safety and observational endpoints.
Key parameters: analysis window (months), minimum case count for signal display.
Population-driven draws and GWAS-flavored uncertainty without losing traceable assumptions.
Key parameters: risk allele frequency, penetrance / effect proxy, odds ratio for risk allele.
Firing-rate and latency proxies with noise-aware sweeps for decoding and system-level simulations.
Key parameters: firing rate (Hz), synapse count, neural or behavioral response latency (ms), noise amplitude, membrane time constant (ms).
Climate, ecology, agriculture, and energy grids — distributional assumptions grounded in inventory and field literature.
Environmental drivers and resilience KPIs with distributional weather and exposure assumptions.
Key parameters: emission factor (kg CO₂ / kWh), global mean temperature anomaly (K), equilibrium climate sensitivity (K).
Population and diversity surrogates with carrying capacity and habitat-area uncertainty.
Key parameters: species richness, carrying capacity, intrinsic growth rate (yr⁻¹), habitat area, dispersal rate.
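For intuition, a minimal logistic-growth sketch, dN/dt = rN(1 − N/K), with uncertain growth rate and carrying capacity; all values are illustrative.

    import numpy as np

    rng = np.random.default_rng(7)
    n_draws, years = 5_000, 50

    r = rng.normal(0.30, 0.05, n_draws)       # intrinsic growth rate (yr⁻¹)
    K = rng.normal(1_000.0, 150.0, n_draws)   # carrying capacity (individuals)
    N = np.full(n_draws, 50.0)                # initial population

    # Euler-stepped logistic growth, dt = 1 yr.
    for _ in range(years):
        N = N + r * N * (1.0 - N / K)

    print(f"Population after {years} yr: median {np.median(N):.0f}, "
          f"90% band [{np.percentile(N, 5):.0f}, {np.percentile(N, 95):.0f}]")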
Yield, irrigation, inputs, and soil chemistry sweeps anchored to trial or extension references.
Key parameters: yield (kg/ha), irrigation (mm), fertilizer (kg/ha), soil pH, growing-season temperature (°C).
Capacity, efficiency, storage, and utilization factors for LCOE-style and grid studies.
Key parameters: capacity (MW), efficiency (%), storage (MWh), load factor, capex ($/kW).
Physics signals, RF, materials, robotics, aerospace, and manufacturing quality — instrumentation-ready scaffolds.
Signal chains and surrogate-friendly parameters for downstream FEM or lab cross-checks.
Key parameters: sampling rate (Hz), channel SNR (dB), viscous damping ratio (dimensionless).
Link budgets, fading models, and 3GPP-aligned path loss assumptions with explicit reference coverage.
Key parameters: carrier frequency (GHz), path loss (dB), transmit power (dBW), system noise temperature (K), receiver noise figure (dB), rain rate (mm/h).
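For intuition, a minimal link-budget sketch in the dB domain: received power minus the thermal noise floor kTB. Gains, distributions, and values are illustrative, not 3GPP-calibrated.

    import numpy as np

    rng = np.random.default_rng(11)
    n = 20_000
    BOLTZMANN_DB = 10 * np.log10(1.380649e-23)  # about -228.6 dBW/(K·Hz)

    tx_power_dbw = 10.0            # transmit power (dBW)
    ant_gains_db = 45.0 + 30.0     # tx + rx antenna gains (dB), illustrative
    bandwidth_hz = 36e6

    # Uncertain terms: path loss, rain fade, and system noise temperature.
    path_loss_db = rng.normal(205.0, 1.0, n)   # path loss (dB)
    rain_fade_db = rng.gamma(2.0, 1.5, n)      # rain attenuation proxy (dB)
    t_sys_k = rng.normal(150.0, 10.0, n)       # system noise temperature (K)

    rx_power_dbw = tx_power_dbw + ant_gains_db - path_loss_db - rain_fade_db
    noise_dbw = BOLTZMANN_DB + 10 * np.log10(t_sys_k) + 10 * np.log10(bandwidth_hz)
    snr_db = rx_power_dbw - noise_dbw

    print(f"SNR: median {np.median(snr_db):.1f} dB, P5 {np.percentile(snr_db, 5):.1f} dB")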
Strength, modulus, density, and melting-point bands for mechanical and thermal Monte Carlo.
Key parameters: yield strength (MPa), elastic modulus (GPa), density (kg/m³), melting point (°C), thermal conductivity (W/(m·K)), Poisson’s ratio, fracture toughness (MPa·√m).
DOF, control rate, payload, and latency sweeps for tracking and energy narratives.
Key parameters: DOF, control frequency (Hz), payload (kg), latency (ms), peak joint torque (N·m), repeatability (mm).
Mach, altitude, thrust, and drag coefficient proxies for range and efficiency studies.
Key parameters: Mach number, altitude (m), thrust (N), drag coefficient, lift coefficient, wing area (m²), specific impulse (s).
Defect PPM, cycle time, throughput, and yield for SPC and OEE-oriented simulations.
Key parameters: defect rate (PPM), cycle time (s), throughput (units/h), first-pass yield (%), spec width in σ for Cpk.
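As a worked example of the last parameter, a sketch mapping spec width in σ to two-sided defect PPM, assuming a centered normal process (a textbook simplification, not necessarily the template's exact model).

    import math

    def defect_ppm(spec_width_sigma: float) -> float:
        """Two-sided defect rate in PPM for a centered, normal process."""
        cpk = spec_width_sigma / 6.0                        # Cpk = spec width / 6
        tail = 0.5 * math.erfc(3.0 * cpk / math.sqrt(2.0))  # Phi(-3 * Cpk)
        return 2.0 * tail * 1e6

    for width in (6, 8, 12):  # Cpk = 1.00, 1.33, 2.00
        print(f"spec width {width} sigma -> Cpk {width / 6:.2f}, {defect_ppm(width):,.2f} PPM")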
Finance, operations research, and optimization-style sweeps — same evidence trail.
Portfolio and payoff-style metrics with Monte Carlo paths suitable for stress narratives.
Key parameters: expected annual drift, annualized volatility, annualized risk-free rate.
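A minimal geometric-Brownian-motion sketch of the path generation involved; the drift, volatility, and counts are illustrative, not recommendations.

    import numpy as np

    rng = np.random.default_rng(99)
    n_paths, n_steps = 10_000, 252
    mu, sigma, s0 = 0.07, 0.20, 100.0  # annual drift, volatility, start value
    dt = 1.0 / n_steps

    # GBM: S_{t+dt} = S_t * exp((mu - sigma^2/2)*dt + sigma*sqrt(dt)*Z)
    z = rng.standard_normal((n_paths, n_steps))
    log_paths = np.cumsum((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1)
    terminal = s0 * np.exp(log_paths[:, -1])

    print(f"1-yr terminal value: median {np.median(terminal):.1f}, "
          f"5th percentile {np.percentile(terminal, 5):.1f}")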
Variable and constraint count proxies for optimization sensitivity and feasibility-gap analysis.
Key parameters: decision variables, constraints, nonzero objective entries, integrality ratio.
Power analysis, education analytics, and replication-aware priors for behavioral and learning outcomes.
Sample size, effect size, alpha, and power sweeps with replication-aware framing.
Key parameters: sample size, Cohen’s d, α, target power, number of groups.
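For intuition, the underlying sample-size arithmetic under the normal approximation for a two-sided, two-sample comparison; exact t-based answers run slightly higher.

    from scipy.stats import norm

    def n_per_group(d: float, alpha: float = 0.05, power: float = 0.80) -> float:
        """Per-group n: n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2."""
        z_a = norm.ppf(1 - alpha / 2)
        z_b = norm.ppf(power)
        return 2 * ((z_a + z_b) / d) ** 2

    for d in (0.2, 0.5, 0.8):  # small / medium / large Cohen's d
        print(f"d = {d}: about {n_per_group(d):.0f} per group")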
Cohort scale, difficulty, engagement, and dropout draws for learning-outcome Monte Carlo.
Key parameters: learner count, content difficulty, engagement rate, dropout rate, prior knowledge score.
ML evaluation proxies, threat modeling, and CUSTOM for bespoke computational work — gates stay strict.
Attack surface, vulnerability scores, patch latency, and threat-rate uncertainty for risk scoring.
Key parameters: attack-surface count, mean CVSS base score (0–10), patch latency (days), filtered threat rate (events/day), MTTD (days).
Dataset scale, feature count, model capacity, and regularization for accuracy and training-time bands.
Key parameters: dataset size, feature count, parameter-count scale, regularization strength, learning rate.
Any field not listed — define schemas, gates, and exports with the same run record discipline.
Key parameters: your nomenclature — locked, hashed, and versioned like catalog domains.
Interactive demos
Illustrative motion — the real wizard runs in-app with your references and parameters.
Wizard walkthrough
Domain: Telecom / RF
Pick a template family and seed a session with tenant-safe defaults.
Monte Carlo convergence
Sample paths stabilize as batches complete (illustrative).
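The logic behind that animation is the familiar batched standard-error check; a minimal sketch (the in-app monitor and thresholds may differ).

    import numpy as np

    rng = np.random.default_rng(3)
    target_halfwidth, batch_size = 0.01, 1_000
    batches = []

    # Batched Monte Carlo: stop once the 95% CI half-width on the mean is small enough.
    while True:
        batches.append(rng.exponential(scale=1.0, size=batch_size))  # stand-in estimator
        x = np.concatenate(batches)
        halfwidth = 1.96 * x.std(ddof=1) / np.sqrt(x.size)
        if halfwidth < target_halfwidth:
            break

    print(f"Converged after {x.size:,} paths: mean {x.mean():.4f} +/- {halfwidth:.4f}")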
Reference coverage
Gap detected — add citation or explicit assumption
Reproducibility & compliance
Borrow the Outcome Compiler gate culture for research runs: deterministic seeds, hashed parameters, versioned scaffolds.
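A minimal sketch of that replay discipline, assuming a SHA-256 hash over canonical JSON; the field names are illustrative, not the actual run-record schema.

    import hashlib
    import json

    def snapshot(params: dict) -> str:
        """SHA-256 over a canonical JSON encoding of the parameters."""
        return hashlib.sha256(json.dumps(params, sort_keys=True).encode()).hexdigest()

    record = {  # what a stored run record might carry
        "run_id": "run-7f3a91c2",
        "seed": 42,
        "template_version": "telecom-rf/1.4.0",
        "param_sha256": snapshot({"carrier_freq_ghz": 12.0, "tx_power_dbw": 20.0}),
    }

    def replay(record: dict, params: dict) -> None:
        if snapshot(params) != record["param_sha256"]:
            raise ValueError("parameter drift detected; refusing to replay")
        print(f"replaying {record['run_id']} with seed {record['seed']}")

    replay(record, {"carrier_freq_ghz": 12.0, "tx_power_dbw": 20.0})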
Integrations & ecosystem
Exports and APIs fit existing notebooks, IDEs, and pipelines.
Pull figures and tables with run metadata cells.
Review scaffolds and diagnostics beside your repo.
Scripted sessions for repeat studies (SDK sketch below).
Automate from CI or orchestrators.
Drop-ready tables and citations for papers.
Re-run bounded studies on merge with hashed inputs.
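A hypothetical sketch of what a scripted session could look like; midcore_sdk, Client, and every method name below are illustrative placeholders, not the published API.

    # Hypothetical SDK usage; all names here are placeholders, not the real API.
    from midcore_sdk import Client  # placeholder import

    client = Client(api_key="...")  # credential handling depends on your deployment

    session = client.sessions.create(template="telecom-rf",
                                     params={"tx_power_dbw": 20.0})
    run = session.runs.start(n_paths=10_000, seed=42)
    run.wait()  # block until the convergence monitor reports done

    bundle = run.export(formats=["csv", "json", "bibtex"])
    print(bundle.run_id, bundle.param_sha256)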
Why Midcore
Specific gaps in spreadsheets, vendor suites, and chat-only workflows.
No enforced reference gates, no shared parameter hashing, and reproducibility depends on whoever remembered to commit.
Vendor lock-in, opaque literature linking, and limited evidence trails for custom publication or regulatory packets.
No schema-locked parameters, no Monte Carlo convergence tracking, and no audit-grade run record by default.
Institutional fit
Shareable artifacts with provenance — not screenshots of a chat.
Researchers track — R&D acceleration with evidence discipline.
Broad autonomy surface backing orchestration, gates, and safety.
The same gate culture that secures releases also tracks your simulation runs — run IDs, scaffold paths, and reference reports stay inspectable.
— Researchers track
FAQ
How many domains are covered?
Twenty-one built-in verticals span life sciences, environmental and energy systems, engineering (materials through manufacturing), quantitative finance and OR, social and education research, plus cybersecurity and ML — CUSTOM remains the catch-all.
How is it priced?
See the pricing page for seat and usage tiers; research sessions meter compute and storage like other Midcore workloads.
Where does our data live?
Tenant-scoped storage with export controls; align retention with your institution’s policy and use on-prem or private cloud deployments when required.
Can it run offline or on-prem?
Core flows are designed for offline-capable stacks; exact deployment depends on your contract and model provider policy.
How do collaborators and reviewers see results?
Share session-scoped bundles with run IDs, parameter snapshots, and reference gate reports — reviewers see provenance, not just plots.
What if my field is not in the catalog?
Use CUSTOM to define parameters and gates; you keep the same exports and reproducibility primitives as catalog domains.
Which export formats are supported?
CSV tables, LaTeX fragments, BibTeX, JSON run records, and notebook-oriented bundles — extend via API for proprietary pipelines.
What about compliance and audit?
Audit trails bundle assumptions, citations, seeds, and template versions — your compliance team maps them to local SOPs.
Get started
Pricing scales with seats and compute — enterprise options add SSO, private networking, and retention controls.