Open Road Geometry
Road Risk combines this public research site with a live app that lets users test selected roads, change assumptions, inspect graphs, and export results. This page explains the build, data sources, method, limits, and next development steps.
R.I.S.K. stands for Roadway Infrastructure Safety Kinematics. The project investigates whether public road geometry, physics-informed calculations, and controlled scenario assumptions can support early-stage comparative interpretation before collision-history data is introduced.
Road Risk is the live application built for the investigation. It lets users select road segments, inspect derived geometry and model outputs, change scenario assumptions, compare distributions, analyse routes, and export results for review.
Road Risk produces comparative model outputs from public geometry and selected assumptions. These outputs are not official safety ratings or crash predictions.
The project starts with inspectable public road data rather than hidden proprietary inputs.
Curvature, radius, safe-speed, and stopping-distance relationships are exposed as part of the explanation.
Rain, fog, fatigue, overspeed, vehicle type, surface, and lighting assumptions can be varied deliberately.
Distribution views and percentiles help prevent one raw model value from being read in isolation.
The app lets judges test a selected road, inspect the method, and export a case record.
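The curvature, safe-speed, and stopping-distance relationships listed above follow standard point-mass kinematics. The sketch below illustrates the shape of those formulas; the friction coefficient and reaction time used here are illustrative assumptions, not the app's actual defaults.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def safe_speed(radius_m: float, friction: float = 0.7) -> float:
    """Friction-limited curve speed (m/s) for a point mass: v = sqrt(mu * g * r)."""
    return math.sqrt(friction * G * radius_m)

def stopping_distance(speed_ms: float, friction: float = 0.7,
                      reaction_s: float = 1.5) -> float:
    """Reaction distance plus velocity-squared braking distance (m)."""
    return speed_ms * reaction_s + speed_ms ** 2 / (2 * friction * G)

# Example: a 100 m radius bend
v = safe_speed(100.0)     # ~26.2 m/s (~94 km/h)
d = stopping_distance(v)  # ~89 m
```

Tightening the radius or lowering the friction assumption reduces the safe-speed estimate, which is the comparative behaviour the app exposes.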
Road Risk asks whether selected road geometry, vehicle-dynamics relationships, and transparent assumptions can identify relative changes in model output before collision-history data is used.
The hypothesis is treated cautiously: public geometry and kinematic reasoning can support comparative analysis within a defined model boundary, but stronger operational claims require empirical calibration and professional review.
Explains project scope, method, assumptions, results framing, references, and responsible interpretation.
Uses a map interface to select roads, derive geometry, apply scenario assumptions, display outputs, and export evidence.
Combines data pipeline, formulas, assumptions, graph interpretation, reliability, and key terms into one navigable method page.
The app can save opt-in cases locally for browser review and export case-evidence JSON. Only checked and committed cases become part of the reviewed public dataset.
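A case-evidence export might look like the record below. This is a hypothetical shape written for illustration; the real field names and values in Road Risk's JSON export may differ.

```python
import json

# Hypothetical shape of an exported case-evidence record; the actual
# schema used by the app may differ.
case = {
    "case_id": "local-0001",
    "road": {"name": "Example Road", "osm_way_id": None},
    "assumptions": {"rain": True, "fog": False, "vehicle": "car"},
    "outputs": {"safe_speed_kmh": 58.0, "relative_risk": 0.42},
    "committed": False,  # only reviewed cases join the public dataset
}
print(json.dumps(case, indent=2))
```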
These figures are approximate public-project indicators rather than claims of scientific validation. They describe the implementation scale and app features.
Approximate lines across the current source files.
Approximate logic for road selection, modelling, route analysis, graphs, and exports.
Approximate styling for the live app, public pages, responsive layouts, and panels.
Development history and iteration count as a project-scale signal.
Research and build notes to support transparency.
Unique output areas, controls, status panels, and interface components in the prototype.
Scenario, model, route, graph, export, and UI controls in the live app.
Vehicle presets that change model assumptions rather than labels alone.
Referenced road attributes across surface, speed, geometry, context, and infrastructure.
Default values used to make baseline assumptions visible before scenario changes.
Baseline and stress-style profiles for comparing model sensitivity.
Statistical views for distribution and percentile interpretation.
Indicators used in the distribution, percentile, route, and summary outputs.
CSV, GeoJSON, JSON, distribution data, and image-style outputs.
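The percentile framing mentioned above can be sketched as follows. The sample scores here are invented for illustration; the point is that a single road's output is read relative to a distribution, not in isolation.

```python
# Hypothetical model outputs for sampled segments (illustrative values only).
scores = [0.12, 0.18, 0.25, 0.31, 0.40, 0.44, 0.52, 0.61, 0.73, 0.88]

def percentile_rank(value: float, population: list[float]) -> float:
    """Fraction of the population at or below `value`."""
    return sum(s <= value for s in population) / len(population)

# A selected road scoring 0.52 sits at the 70th percentile of this sample.
rank = percentile_rank(0.52, scores)  # 0.7
```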
Defined the prospective relative-risk scope and selected core variables including curvature, friction, visibility, and speed.
Integrated OpenStreetMap-derived data, road segmentation, coordinate extraction, and an interactive map-selection interface.
Converted coordinate sequences into heading change, radius of curvature, curvature scoring, and bend-frequency context.
Added kinematic and dynamic relationships linking road geometry with lateral demand, grip assumptions, and safe-speed estimates.
Introduced reaction distance, velocity-squared braking behaviour, vehicle parameters, and profile-dependent interpretation.
Added road classification, traffic proxy, lane context, surface condition, lighting, pedestrian/cycle infrastructure, and barrier effects.
Integrated rain, fog, ice, snow, flooding, overspeed, fatigue, distraction, BAC, and combined multiplier assumptions.
Integrated geometry, physics, context, environment, and behaviour into a normalised comparative output with graphs, route analysis, and exports.
Refactored the surrounding website to explain the model, limitations, references, results structure, and live-app workflow.
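The geometry stage in the timeline above, turning coordinate sequences into heading change and radius of curvature, can be approximated from three consecutive points. This is a minimal planar sketch that ignores map projection and elevation, not the app's exact algorithm.

```python
import math

def circumradius(p1, p2, p3):
    """Radius of the circle through three planar points; inf for near-straight runs."""
    a = math.dist(p2, p3)
    b = math.dist(p1, p3)
    c = math.dist(p1, p2)
    # Twice the signed triangle area, via the 2D cross product
    cross = (p2[0]-p1[0])*(p3[1]-p1[1]) - (p2[1]-p1[1])*(p3[0]-p1[0])
    if abs(cross) < 1e-12:
        return math.inf
    return (a * b * c) / (2 * abs(cross))

def heading_change(p1, p2, p3):
    """Absolute change in bearing (degrees) between successive segments."""
    h1 = math.atan2(p2[1]-p1[1], p2[0]-p1[0])
    h2 = math.atan2(p3[1]-p2[1], p3[0]-p2[0])
    d = math.degrees(h2 - h1)
    return abs((d + 180) % 360 - 180)

# Three points on a unit circle recover a radius of 1.0
r = circumradius((1, 0), (0, 1), (-1, 0))
```

Smaller radii and larger heading changes per unit length feed the curvature scoring and bend-frequency context described above.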
The live app uses a browser map interface for road selection, overlays, routes, isochrones, and visual context.
Road ways, tags, and geometry are retrieved from public map data where available.
Route and isochrone functions use external routing services while the app handles map display and risk sampling.
The app exposes formula explanations, distribution views, and export files so the model can be inspected.
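Retrieval of road ways and tags from public map data is typically done with an Overpass QL query against an OSM mirror. The query below is an illustrative sketch, not necessarily the app's actual request; Overpass bounding boxes are given as (south, west, north, east).

```python
# Sketch of an Overpass QL query for highway ways inside a bounding box.
# The app's real query, filters, and endpoint may differ.
def overpass_road_query(south: float, west: float,
                        north: float, east: float) -> str:
    bbox = f"{south},{west},{north},{east}"
    return (
        "[out:json][timeout:25];"
        f'way["highway"]({bbox});'
        "out geom;"
    )

query = overpass_road_query(53.34, -6.27, 53.35, -6.25)
```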
The live app connects map selection, geometry extraction, physics calculations, scenario multipliers, statistical context, and exports. That makes the system easier to inspect than a single untraceable number shown without provenance.
The public website now acts as the research companion to the app: it explains the method, formulas, assumptions, limitations, and intended use before users launch the interactive model.
The project does not yet claim official calibration against national crash datasets, formal engineering audits, or government road-safety standards. Future work should compare model outputs against trusted external evidence.
Road Risk benefits from public data because that data is transparent and widely available. It also inherits public-data limitations such as missing tags, inconsistent geometry density, and incomplete infrastructure attributes.
Missing public tags must not be treated as proof that a feature is absent. The app uses fallback assumptions and confidence notes where possible, but stronger conclusions require better data or field verification.
OSM ways, nodes, tags, road class, names/refs, speed/surface hints, and public map context.
Selected segment, bearing, heading change, curvature, radius, bounding context, safe speed, and stopping distance.
Vehicle profile, weather, visibility, behaviour, friction, traffic proxy, and missing-data fallbacks.
CSV, GeoJSON, JSON run summaries, distribution data, and visual/map-style outputs for review.
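The missing-data fallbacks listed above can follow a simple pattern: an absent tag yields a labelled default rather than being read as evidence the feature is missing. The tag keys below are real OSM keys, but the default values are assumptions for the sketch, not the app's actual fallbacks.

```python
# Illustrative fallback resolution for incomplete OSM tags; the returned
# confidence label lets downstream output flag assumed values.
DEFAULTS = {"surface": "asphalt", "maxspeed": "80", "lit": "unknown"}

def resolve_tag(tags: dict, key: str):
    """Return (value, source), where source is 'tagged' or 'fallback'."""
    if key in tags:
        return tags[key], "tagged"
    return DEFAULTS.get(key), "fallback"

value, source = resolve_tag({"maxspeed": "50"}, "surface")
# A fallback result is flagged so the output can be read cautiously.
```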
Open the live app from the header or CTA.
Click a road and wait for the selected-road output.
Read the output as comparative, not official.
Change rain, fog, fatigue, vehicle type, or overspeed.
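Varying one scenario factor at a time works because conditions enter the model as multipliers on a baseline output. The factor values below are invented for the sketch; Road Risk's own multipliers may differ.

```python
# Illustrative combined-multiplier step: each active condition scales the
# baseline comparative output. These factor values are assumptions.
MULTIPLIERS = {"rain": 1.3, "fog": 1.4, "fatigue": 1.25, "overspeed": 1.5}

def combined_multiplier(active: list[str]) -> float:
    m = 1.0
    for condition in active:
        m *= MULTIPLIERS.get(condition, 1.0)
    return m

baseline = 0.40
adjusted = baseline * combined_multiplier(["rain", "fatigue"])  # 0.40 * 1.625 = 0.65
```

Toggling a single condition and re-reading the same road isolates that condition's contribution to the change.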
It is a comparative model output under defined assumptions, not a measured crash rate.
The project first tests whether geometry and physics can produce a transparent prospective signal before calibration.
Change one factor at a time, such as rain, fog, fatigue, or overspeed, and compare the same selected road.
Tighter curvature and smaller radius increase lateral demand and can reduce the friction-limited safe-speed estimate.
Compare exported cases with official collision records, engineering audits, field inspection, and expert review.
Booklet and poster materials are presented as SciFest display resources, not public downloads.
Selected-road calculations, scenario controls, graphs, maths panels, routes, and exports are inspectable.
Local cases appear on the current browser; reviewed cases can be committed to the public static dataset.
CSV, JSON, GeoJSON, graph data, and case-evidence files provide reviewable records.
Sources support the method and context; they do not endorse the app or validate individual outputs.
It may indicate a segment worth closer review under the active assumptions.
The model cannot see every real-world condition, behaviour, or infrastructure defect.
Fallback assumptions are used where public tags are incomplete.
Operational use would require calibration, field verification, and professional assessment.
The project should be presented as a research process: question, method, model, evidence, limitations, and future work.
Development logs and tracked versions help show how the model, app, and public presentation evolved over time.
Keeping the source inspectable matters because the project depends on transparent assumptions and reproducible calculations.
External discussion and reference gathering are treated as research context, not endorsement or validation. The next stage is to review assumptions with road-safety, transport, and vehicle-dynamics expertise.
The project framing identifies possible review routes with academic transport researchers, road-safety organisations, and public-sector infrastructure specialists.
Stronger claims would require comparison against observed collision datasets, engineering assessments, road-condition surveys, and documented infrastructure context.
Use crash datasets, audit examples, known blackspots, or expert review to test model behaviour.
Make missing tags, fallback assumptions, geometry sparsity, and route simplification even easier to inspect.
Separate public site, model helpers, map logic, API helpers, graphing, and exports without breaking the stable app.
Longer-term, route protected API calls through a server-side proxy with caching, rate limits, and clearer failure states.
Road-safety, transport, education, and GIS feedback would help separate strong model components from areas needing better evidence.
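The server-side proxy proposed in the roadmap could start from a minimal time-to-live cache like the one below. This is a sketch of the caching idea only; a real deployment would add rate limiting, persistent storage, and explicit failure states.

```python
import time

# Minimal TTL cache sketch for a proposed API proxy; illustrative only.
class TTLCache:
    def __init__(self, ttl_seconds: float = 300.0):
        self.ttl = ttl_seconds
        self._store = {}

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires = entry
        if time.monotonic() > expires:
            del self._store[key]  # evict stale entry
            return None
        return value

    def set(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)

cache = TTLCache(ttl_seconds=60)
cache.set("route:53.34,-6.27", {"distance_km": 4.2})
```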
The public site explains the project. The existing live app remains unchanged and available for road selection, route analysis, graphs, maths, and exports.