C-Level & Board

CEO · CTO · CIO · COO · CISO — focused on business risk, investment justification, regulatory posture, and strategic fit. These questions determine whether the project gets funded.

Business risk & downtime cost

Per the Splunk / Oxford Economics 2024 study (2,000 Global 2000 executives, 53 countries), the average cost of unplanned downtime is $9,000/minute ($540,000/hour). Banking, finance, and healthcare verticals exceed $5M/hour for critical system outages. The Gartner benchmark ($5,600/min) is used as a conservative floor in Section 9.1. A single avoided 2-hour manual re-patch event at the lower figure represents $672,000 in cost avoidance.

§ 9.1–9.2 · Splunk/Oxford Economics 2024 · Gartner 2014/ITIC 2024
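
The per-minute rates above convert directly; a quick arithmetic check (figures as quoted, in USD):

```python
# Sanity-check the downtime figures quoted above (USD).
SPLUNK_PER_MIN = 9_000    # Splunk/Oxford Economics 2024
GARTNER_PER_MIN = 5_600   # Gartner benchmark, used as the conservative floor

def hourly_cost(per_minute: int) -> int:
    return per_minute * 60

def event_cost(per_minute: int, duration_min: int) -> int:
    return per_minute * duration_min

print(hourly_cost(SPLUNK_PER_MIN))        # 540000, i.e. $540,000/hour
print(event_cost(GARTNER_PER_MIN, 120))   # 672000, the avoided 2-hour re-patch
```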

BGP detects failure only when keepalives stop arriving and its hold timer expires — typically 30–180 seconds after the physical cut. During that window all traffic is black-holed. The OCS switches the physical path in <50 ms — before any BGP timer expires — so the IP layer never sees the fault. Three specific gaps: detection latency, no physical pre-provisioning capability, and no proactive optical power monitoring. MPLS FRR operates comparably fast but cannot protect non-IP services (FC SAN, vMotion, OOB management) and cannot detect gradual optical degradation.

§ 12.2 · Three limitations of IP-only failover

Without an OCS, physical-layer RTO is 60–300+ minutes: NOC alert (1–60 min) + physical access arrangement + on-site credentialed engineer re-patching (30–240 min). With POLATIS® OCS, physical-layer RTO drops to <5 minutes — and for automated APS events, to <50 milliseconds at the physical layer.

§ 9.1 · Three-scenario RTO comparison table

Both — and they compound. Annual cost avoidance of $998,400 based on published figures before regulatory fine mitigation. Payback on a dual-site deployment: 8–14 months. 5-year NPV: >$3.5M at a conservative $300K/hr downtime cost. DORA and NERC compliance close the business case for organisations that would otherwise face significant remediation project costs.

§ 10 · ROI Quantitative Framework · 5-year NPV table

The $400B figure is the Global 2000 aggregate (Splunk/Oxford Economics 2024) — $200M per company per year, 9% of profits. Section 10.1 explicitly notes all model parameters should be replaced with your own figures. Even at 50% of the published downtime cost baseline, the ROI case remains strong. The whitepaper provides the input variables table so your finance team can substitute actual revenue, fault frequency, and recovery cost data.

§ 10.1 · Input parameters table · Splunk/Oxford Economics 2024

Regulatory & compliance posture

DORA (EU Regulation 2022/2554, effective January 2025) requires documented, tested ICT incident management and recovery processes including physical infrastructure resilience. A manual fiber re-patch process with no automated compensating control is an undocumented single point of failure — a DORA audit finding. The OCS provides an automated, software-logged control generating an audit trail via RESTCONF event logs for every protection switching event.

§ 14 · DORA Article 11 · Regulatory Context

NERC CIP-014-3 requires physical security and resilience controls for bulk electric system transmission infrastructure. The OCS provides an automated physical-layer protection mechanism documentable as a compensating control, with machine-readable audit evidence from the RESTCONF API.

§ 14 · References · NERC CIP-014-3

Yes. Every RESTCONF operation logs timestamp, port identifiers, OPM power readings, HTTP response codes, and ITSM ticket numbers. DR test events generate identical log trails to real events — giving auditors evidence of tested, repeatable, automated recovery. The orchestrator sends OPM telemetry CSV as an attachment to auto-created ITSM tickets on every event.

§ 8 · § 12 · RESTCONF audit trail

Strategic & architectural questions

Yes — the OCS is protocol-agnostic. It has operated transparently across four IEEE 802.3 speed generations (10G, 40/100G, 400G, 800G) in 20 years without modification. OEO switching hardware requires replacement at each speed transition. The OCS switches light, not protocol frames — a 2026 deployment serves 800G and future 1.6T infrastructure equally.

§ 2 · § 9.2 · IEEE 802.3 Standard history

Minimal. DR failover becomes a software event requiring no human involvement. DR testing no longer requires an on-site engineer. The NOC shifts from reactive re-patching to monitoring OPM dashboards and reviewing automated ITSM tickets. Section 12.3 addresses split-brain scenarios that may require updated run-books.

§ 12.3 · Operational model change

Each 1+1 APS-protected service requires 3 OCS ports (working input + protection input + equipment output). A 96×96 OCS supports 32 services at full utilisation (96 ÷ 3). A typical initial deployment of the 12 standard service types (FC SAN, iSCSI, core LAN, vMotion, OOB, DB replication, backup, market data, MPLS handoff, L2 extension, AI/GPU fabric, monitoring) consumes 36 ports — leaving 60 ports free for growth without hardware change.

§ 9.2 · Shared OCS fabric row · Port math corrected

Most enterprises still rely on manual re-patching for physical-layer DR — the whitepaper opens with this gap. Organisations that have automated L1 DR typically use amplified WDM systems at much higher cost and complexity. The POLATIS® OCS + RESTCONF approach delivers sub-50ms protection at data-center scale without wavelength management. DORA (January 2025) is accelerating adoption among EU-regulated financial institutions.

§ 1 · § 12 · § 14 · Competitive landscape

Infrastructure & Data Center Engineering

DC architects · network engineers · fiber plant managers · cabling teams — people who own the physical layer and need to design, size, and install this correctly.

Port sizing & OCS selection

32 services at full utilisation. Each 1+1 APS service needs 3 ports: 1 working-path input, 1 protection-path input, 1 equipment output (96 ÷ 3 = 32). The earlier "48 services" figure assumed 2 ports/service — valid only for simple pass-through routing, not APS switching. A deployment protecting the 12 standard service types uses 36 ports, leaving 60 ports (20 additional services) available for growth without hardware change.

§ 9.2 · Corrected port math · 3 ports/service
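
The port arithmetic is simple enough to script when sizing a chassis; a minimal sketch of the 3-ports-per-service accounting:

```python
# 1+1 APS port accounting: 3 OCS ports per protected service.
PORTS_PER_SERVICE = 3   # working input + protection input + equipment output

def max_services(chassis_ports: int) -> int:
    return chassis_ports // PORTS_PER_SERVICE

total_ports = 96
initial_services = 12                          # the 12 standard service types
used = initial_services * PORTS_PER_SERVICE    # 36 ports consumed
headroom = total_ports - used                  # 60 ports free for growth
print(max_services(total_ports), used, headroom)   # 32 36 60
```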

No — deliberately so, for three reasons: (1) TIA-942-C recommends sizing for 3–5 years of growth; (2) idle OCS ports cost nothing — the switch is all-optical with fixed power draw; (3) the 64×64 alternative supports only 21 services at full utilisation — too tight for a growing deployment adding AI/GPU workloads and additional carriers. The 60-port headroom absorbs 20 additional protected services with zero recabling.

§ 9.2 · TIA-942-C growth planning

24-fiber ribbon splice module, 12× LCd — part series IKD-12-LCUD (UPC) / IKD-12-LCAD (APC). MTP trunks from LISA™ feed the ribbon splice input; 12 LC duplex connectors on the module face patch to POLATIS® OCS LC ports. Capacity: 144F/1U, 576F/4U. For AI/HPC DR at ultra-high density: the 72-fiber module (IKD-12-SNU6/MDU6, 36× VSFF, 432F/1U) is 3× denser but requires VSFF-compatible OCS ports.

§ 3.2 · HUBER+SUHNER IANOS Module Types pp. 62–63

12× MTP Base-12. Each LISA™ cassette ribbon-splices the incoming dark fiber and presents 12 MTP-12 connectors for MTP trunk cables to IANOS™. Full front-plate options: 12×/18× LCd UPC/APC; 6×/12×/18× MTP Base-8/-12; 36× SN UPC; 36× MDC UPC; splice-through. ½RU per cassette.

§ 3.1 · HUBER+SUHNER LISA Cassettes product page

TIA-942-C space placement

Entrance Room (ER) — primary location. The CDR is an optical distribution frame (ODF): its function is to terminate outside-plant carrier fiber and provide the demarcation between the outside fiber plant and inside cabling. This is precisely the ER function in TIA-942-C. In small data centers where the ER is consolidated into the MDA, the CDR is located in the MDA.

§ 5.3–5.4 · ANSI/TIA-942-C

MDA when co-located with the POLATIS® OCS in the main switching room; HDA when the OCS serves a specific equipment zone. IANOS™ must always be in the same functional space as the OCS. TIA-942-C requires cabinets in MDA/IDA/HDA to be minimum 800 mm wide.

§ 5.4 · ANSI/TIA-942-C · 800mm cabinet requirement

Backbone cabling (ER → MDA or ER → HDA). Max distance: 300 m OS2 single-mode — easily accommodated in any standard data center topology. LC patch cords from IANOS™ to OCS ports are intra-space equipment cords within the MDA/HDA.

§ 5.4 · ANSI/TIA-942-C backbone cabling

Yes — this is a specific change in TIA-942-C (May 2024). The previous edition required LC or MPO only in distributor areas. The 942-C revision explicitly permits VSFF connectors (SN, MDC) in MDA, IDA, and HDA, enabling IANOS™ 36× SN UPC and 36× MDC UPC modules at full density in standards-compliant deployments.

§ 5.4 callout · ANSI/TIA-942-C May 2024

Loss budget & optical design

LISA™ ribbon splice <0.1 dB + IANOS™ LC pair 0.2 dB + LC patch cord 0.2 dB + POLATIS® OCS 1.0 dB = ~1.5 dB total intra-site. Inter-site dark fiber span: add 0.2–0.4 dB/km for OS2 SMF at 1550 nm, calculated separately per link budget.

§ 5.1 · Figure 7 rack elevation loss budget callout
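
A small helper makes the budget reusable per link; component values as listed above, with span length as the only variable:

```python
# Intra-site loss budget from the component figures above (dB).
COMPONENT_LOSS_DB = {
    "LISA ribbon splice": 0.1,    # spec is <0.1 dB; budget the worst case
    "IANOS LC mated pair": 0.2,
    "LC patch cord": 0.2,
    "POLATIS OCS": 1.0,
}
OS2_DB_PER_KM = (0.2, 0.4)        # OS2 SMF at 1550 nm, per-link range

def intra_site_total() -> float:
    return round(sum(COMPONENT_LOSS_DB.values()), 2)

def span_loss_range(km: float) -> tuple:
    return tuple(round(km * rate, 1) for rate in OS2_DB_PER_KM)

print(intra_site_total())         # 1.5
print(span_loss_range(40))        # (8.0, 16.0) for a 40 km inter-site span
```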

Fusion splices deliver <0.1 dB vs 0.3–0.5 dB for mated LC or MTP pairs. On 20–100 km inter-site spans this margin is critical. More importantly, mated connectors degrade through contamination over time — the primary cause of gradual optical power loss that eventually triggers unplanned outages. Splices made once at installation are never disturbed; only OCS cross-connects are reconfigured during failover.

§ 3.1 · § 5.1 · Design Principle callout

Configurable per-port via RESTCONF. Recommended thresholds: within ±2 dB of install baseline → log only; −3 dB from baseline → ITSM warning ticket + schedule inspection; −5 dB from baseline → switch to protection + emergency maintenance; below −30 dBm absolute → immediate APS activation. The −3 dB warning threshold catches connector degradation weeks before failure, enabling planned rather than emergency maintenance.

§ 11.2 · Pre-event vs reactive response table
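
The threshold ladder above maps naturally onto a classification function. A sketch (the action names are illustrative labels, not API constants):

```python
# Classify a live OPM reading against the recommended thresholds above.
# baseline_dbm: per-port power recorded at install; current_dbm: live reading.
ABSOLUTE_FLOOR_DBM = -30.0

def classify(baseline_dbm: float, current_dbm: float) -> str:
    if current_dbm <= ABSOLUTE_FLOOR_DBM:
        return "APS_ACTIVATE"          # immediate protection switch
    delta = current_dbm - baseline_dbm
    if delta <= -5.0:
        return "SWITCH_AND_MAINTAIN"   # switch to protection + emergency maintenance
    if delta <= -3.0:
        return "ITSM_WARNING"          # warning ticket + schedule inspection
    return "LOG_ONLY"                  # within the tolerance band; log the reading

print(classify(-5.0, -8.2))            # ITSM_WARNING: degrading, weeks before failure
```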

Installation & physical design

No. The CDR's C-frame aluminium design requires front access only — no rear clearance is needed. Suitable for back-to-wall, back-to-back, end-of-row, or side-by-side installation. CDR 1500 specs: 47U · 2236 mm H × 328 mm W · 201 kg.

§ 3.3 · HUBER+SUHNER CDR 1500 datasheet

No — it must be in the same TIA-942-C functional space (MDA or HDA) as the OCS, but not necessarily the same rack. It can occupy any top-of-rack position in the same zone. Co-location in the same cabinet row is recommended to minimise LC patch cord lengths — TIA-942-C advises minimising patch cord length in distributor areas.

§ 3.2 · § 5.4 · Flexible placement

Automation, DevOps & NetOps

Engineers who will integrate the RESTCONF API, build monitoring, and own the code that runs automated DR. People most likely to read Section 8 in detail.

API fundamentals

Base URL: http://{host}:8008/api · Auth: HTTP Basic (default admin:root — rotate in production) · Media type: application/yang-data+json · Standard: IETF RFC 8040 · Document: HUBER+SUHNER POLATIS® RESTCONF API User Manual, doc 7001-006-07.

§ 8.1 · RESTCONF API User Manual 7001-006-07 · RFC 8040
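
The whitepaper's examples are C#, but the same request scaffolding can be built in any language. A stdlib-only Python sketch (hostname illustrative; admin:root is the documented default and must be rotated in production):

```python
import base64

def restconf_base(host: str, port: int = 8008) -> str:
    """Base URL for the POLATIS RESTCONF API."""
    return f"http://{host}:{port}/api"

def basic_auth_headers(user: str, password: str) -> dict:
    """HTTP Basic auth plus the RFC 8040 media type."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {
        "Authorization": f"Basic {token}",
        "Accept": "application/yang-data+json",
        "Content-Type": "application/yang-data+json",
    }

headers = basic_auth_headers("admin", "root")   # defaults: rotate in production
print(restconf_base("ocs-a.example.net"))       # http://ocs-a.example.net:8008/api
```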

HTTP 204 No Content for PATCH and DELETE. HTTP 200 OK for GET. Any 4xx/5xx indicates failure — implement exponential backoff retry and NOC alert after N consecutive failures.

§ 8.3 · § 8.5 · RESTCONF response handling
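
The backoff-and-alert pattern is generic. A minimal sketch where `call` is any function returning an HTTP status code; the alert hook is a placeholder for a real NOC integration:

```python
import time

def with_backoff(call, max_attempts=5, base_delay=0.5, alert=print):
    """Retry `call` with exponential backoff; alert the NOC on exhaustion.

    `call` returns an HTTP status code; any 2xx is success, while 4xx/5xx
    (or a transport error) counts as a failed attempt and is retried.
    """
    for attempt in range(max_attempts):
        try:
            status = call()
            if 200 <= status < 300:
                return status
        except OSError:
            pass  # transport error: treat like a failed attempt
        if attempt < max_attempts - 1:
            time.sleep(base_delay * (2 ** attempt))
    alert(f"NOC ALERT: RESTCONF call failed {max_attempts} consecutive times")
    return None
```

A 204 from PATCH/DELETE and a 200 from GET both fall in the success branch; everything else walks the backoff ladder before paging the NOC.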

Yes — PATCH operations are idempotent. Duplicate PATCHes during retry loops are safe. Verify current state first with GET optical-switch:cross-connects before issuing a PATCH to avoid conflicting updates in multi-orchestrator deployments.

§ 12.3 · Split-brain and consistency guarantees

GET /api/data/optical-switch:opm-power/port={N} returns current optical power in dBm and alarm status. The BackgroundService in Section 8.4 polls this in a 5-second loop across all monitored ports using the YANG model fields.

§ 8.2 · § 8.4 · OPM polling pattern
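
Parsing the polled reading is straightforward. The response shape below is an illustrative assumption; confirm the exact field names against the YANG model in manual 7001-006-07:

```python
import json

# Illustrative body for GET .../optical-switch:opm-power/port=7.
# Field names are an assumption: verify against the published YANG model.
sample = '{"opm-power": {"port": 7, "power-dbm": -6.4, "alarm": "none"}}'

def parse_opm(body: str) -> tuple:
    """Extract (port, power in dBm, alarm status) from an OPM response."""
    data = json.loads(body)["opm-power"]
    return data["port"], data["power-dbm"], data["alarm"]

port, dbm, alarm = parse_opm(sample)
print(port, dbm, alarm)   # 7 -6.4 none
```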

PATCH /api/data/optical-switch:cross-connects/pair={N} with body {"pair":{"ingress":N,"egress":193}} (193 = pre-provisioned protection port). Section 8.3 provides the complete C# ActivateProtectionPathAsync() implementation with JSON serialization and 204 response handling.

§ 8.3 · ActivateProtectionPathAsync()
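
The URL and body from the text assemble mechanically (hostname illustrative; port 193 is the pre-provisioned protection port per the text):

```python
import json

PROTECTION_PORT = 193   # pre-provisioned protection egress, per the text

def protection_patch(host: str, ingress: int) -> tuple:
    """Build (url, body) for the protection-path PATCH described above."""
    url = (f"http://{host}:8008/api/data/"
           f"optical-switch:cross-connects/pair={ingress}")
    body = json.dumps({"pair": {"ingress": ingress, "egress": PROTECTION_PORT}})
    return url, body

url, body = protection_patch("ocs-a.example.net", 12)
print(url)    # ...:8008/api/data/optical-switch:cross-connects/pair=12
print(body)   # {"pair": {"ingress": 12, "egress": 193}}
```

Expect HTTP 204 No Content back; anything else should feed the retry/alert path described earlier in this section.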

Section 8.5 provides PreProvisionProtectionPathsAsync() — a bulk PATCH to /api/data/optical-switch:cross-connects with a JSON array of all working/protection pairs. The OCS holds this state independently; even if the orchestrator goes offline, the hardware APS continues to operate from the pre-provisioned state.

§ 8.5 · Startup pre-provisioning pattern
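
A bulk body for the pre-provisioning PATCH might be assembled like this; the exact YANG list wrapping is an assumption to verify against manual 7001-006-07:

```python
import json

def preprovision_body(pairs) -> str:
    """Bulk PATCH body from (working_ingress, protection_egress) tuples.

    The array wrapping shown here is an illustrative assumption: confirm
    the YANG list encoding against the RESTCONF API User Manual.
    """
    return json.dumps({
        "cross-connects": {
            "pair": [{"ingress": i, "egress": e} for i, e in pairs]
        }
    })

print(preprovision_body([(1, 97), (2, 98), (3, 99)]))
```

Issued once at orchestrator startup, this writes all working/protection pairs into the switch fabric, so hardware APS keeps operating even if the orchestrator later goes offline.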

Orchestration design

5 seconds. The OCS hardware APS operates independently at <50 ms for power-loss events; the 5-second polling loop is for pre-alarm threshold detection (−3 dB and −5 dB warning states) — catching connector degradation weeks before a failure occurs.

§ 8.4 · § 11.2 · Pre-event monitoring

Yes. The POLATIS® OCS hardware APS operates independently — it detects OPM power loss and switches to the protection path without any orchestrator involvement. Pre-provisioned cross-connects are held in the switch fabric independently. Run a secondary orchestrator instance as hot-standby for the monitoring function.

§ 12.1 · § 12.3 · Hardware APS independence

Yes — any HTTP client works. The RESTCONF RFC 8040 interface is vendor-neutral. The whitepaper provides .NET examples as it is the most common enterprise automation platform for financial services, but the API calls are identical in any language or automation framework.

§ 8 · RFC 8040 · No proprietary SDK required

Detect OPM alarm on both working and protection ports → exhaust automated options → trigger critical NOC alert with full OPM telemetry. No automated recovery is possible — physical intervention is required. The orchestrator should log both port readings and open a P1 ITSM ticket with carrier circuit references attached.

§ 12.1 · Simultaneous dual-path failure mode

Yes — DR tests are scripted PATCH calls that switch to protection, verify connectivity, then restore. Unlimited switching cycles with no mechanical wear (DirectLight™ has no moving parts). A monthly test script can run at 2am without engineering involvement and log its results to ITSM automatically.

§ 12.3 · Idempotency guarantees · Unlimited cycle life
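
The switch, verify, restore sequence can be rehearsed against an in-memory stand-in for the cross-connect table before wiring it to the device; a simulation sketch, not the device API:

```python
# Simulated monthly DR test: switch to protection, verify, restore, log.
# `table` maps equipment port -> currently connected ingress port; a real
# run would issue the equivalent RESTCONF PATCH calls instead.
def dr_test(table: dict, equipment: int, protection: int, log: list) -> bool:
    original = table[equipment]
    table[equipment] = protection                    # switch to protection path
    log.append(f"switched {equipment} -> {protection}")
    ok = table[equipment] == protection              # stand-in connectivity check
    table[equipment] = original                      # restore working path
    log.append(f"restored {equipment} -> {original}")
    return ok

log = []
state = {300: 1}            # equipment port 300 currently on working port 1
passed = dr_test(state, equipment=300, protection=97, log=log)
print(passed, state)        # True {300: 1}  (state fully restored)
```

In production the log entries would land in the ITSM ticket, giving the DR test the same audit trail as a real failover event.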

Finance & Procurement

CFO office · procurement teams · budget owners — understanding cost structure, depreciation, and total cost of ownership before approving capital expenditure.

CAPEX, TCO & ROI

Approximately $420,000 for 2× 96×96 OCS units, 2× IANOS™ HD 4U chassis, 4× CDR 1500 cabinets (2 per site), LISA™ splice cassettes, MTP trunk cabling, installation, and commissioning. This is the model input in Section 10.3. Actual quotes from HUBER+SUHNER will vary with site-specific cabling requirements.

§ 10.3 · NPV model CAPEX line

Negligible. The OCS is all-optical with a fixed, low power draw regardless of port utilisation. An MTBF of >250,000 hours keeps maintenance-contract costs minimal. Eliminating on-call DR test engineer costs alone saves $19,200/year (6 tests × $3,200/event).

§ 10.3 · OPEX noted as negligible in NPV model

Payback in Year 1 at +$115,556 cumulative NPV against $420,000 CAPEX — based on $998,400 annual cost avoidance. Payback period: approximately 8–14 months depending on actual downtime cost assumptions used.

§ 10.3 · 5-year NPV table

Section 10.1 explicitly provides the input parameters table so your finance team can substitute your own figures. Even at 50% of the published downtime cost baseline, the ROI case remains strong — the eliminated DR engineer costs and regulatory remediation savings remain unchanged regardless of downtime cost assumptions.

§ 10.1 · Input parameters table · Adjustable model
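
The substitution exercise is a few lines of arithmetic. All inputs below are illustrative placeholders, and this simple undiscounted payback will differ from the whitepaper's discounted NPV table:

```python
# Illustrative placeholders only: substitute your organisation's figures,
# as Section 10.1 instructs.
downtime_cost_per_hr = 300_000      # conservative hourly downtime cost
avoided_outage_hrs_per_year = 3     # manual re-patch hours avoided annually
dr_tests_per_year = 6
engineer_cost_per_test = 3_200      # on-site DR test engineer, per event
capex = 420_000                     # dual-site deployment (Section 10.3)

annual_avoidance = (downtime_cost_per_hr * avoided_outage_hrs_per_year
                    + dr_tests_per_year * engineer_cost_per_test)
simple_payback_months = capex / annual_avoidance * 12
print(annual_avoidance, round(simple_payback_months, 1))
```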

A substantially longer useful life than OEO is technically justified. OEO hardware is protocol-specific — it requires replacement at each major speed transition (10G → 100G → 400G → 800G). The OCS has served four IEEE 802.3 speed generations in 20 years without modification. A 10–15 year asset life (vs 3–5 years for OEO) reduces the effective annualised capital cost significantly.

§ 9.2 · Protocol-agnostic lifespan row · IEEE 802.3 history

Three primary sources: (1) Splunk / Oxford Economics "Hidden Costs of Downtime" (June 2024) — 2,000 Global 2000 executives, 53 countries, $9,000/min; (2) Gartner IT Downtime Cost Benchmark (2014, confirmed ITIC 2022/2024) — $5,600/min, 81% enterprises >$300K/hr; (3) ITIC 2024 Hourly Cost of Downtime Survey — banking/finance/healthcare avg. >$5M/hr. All cited in References section with publication dates.

§ 9.2 · References · Sourced TCO table

Procurement & vendor risk

No. The API is based on IETF RFC 8040 — an open standard. YANG data models are published and documented. Any HTTP client in any language can consume the API. If HUBER+SUHNER were replaced with another RFC 8040-compliant OCS, only the YANG resource paths would require change — not the orchestration architecture.

§ 8 · RFC 8040 · No proprietary SDK

POLATIS® Series 7000 MTBF: >250,000 hours (~28.5 years). DirectLight™ beam-steering has no mechanical cycle limit — unlimited switching cycles with no wear. This contrasts with MEMS alternatives (50,000–100,000 hour MTBF, with mechanical cycle limits degraded by DR test frequency). Warranty terms: confirm with HUBER+SUHNER directly.

§ 12.3 · MEMS vs DirectLight™ comparison

Up to 50% overall installation cost reduction; 30–40% labour saving; 30–70% deployment time reduction. Field termination requires a fusion splicer kit at $15,000+. Factory-terminated fiber undergoes 100% IL/RL testing — field-terminated single-mode has higher defect risk with 2–4 technician hours per rework event.

§ 9.2 · Pre-terminated fiber row · FASTCABLING 2022 · eskc.com 2024

Risk, Compliance & Internal Audit

CISOs · GRC teams · internal auditors · risk officers — evaluating the control framework, documenting compensating controls, and satisfying external examiners.

Control framework & audit evidence

Every RESTCONF operation generates a log entry: timestamp, HTTP method, resource path, response code, requesting orchestrator IP. The .NET orchestrator adds: port IDs, OPM power readings in dBm, carrier circuit references, ITSM ticket number. A complete, timestamped audit trail from physical power alarm through protection path confirmation — identical for real events and DR tests.

§ 8.4 · § 12 · RESTCONF event log structure

Yes — on two counts. Documented: GET /api/data/optical-switch:cross-connects returns a machine-readable specification of every protection path at any time. Tested: automated DR tests produce logged outcomes identical to real events. Section 14 maps this explicitly to DORA Article 11 ICT business continuity requirements.

§ 14 · DORA Article 11 mapping

The OCS cross-connect table provides a machine-readable map of every working path and pre-provisioned protection path with port assignments. Combined with IANOS™ port-to-fiber documentation and carrier circuit references in ITSM tickets, this documents that working and protection paths use physically distinct fiber routes — a documentable compensating control for NERC CIP-014 and DORA path diversity requirements.

§ 12 · § 5.4 · Cross-connect state as evidence

Section 12.1 documents all seven failure modes. Primary residual risk: simultaneous dual-path failure (both working and protection fibers cut simultaneously) — mitigated by physical path diversity and Model D multi-site topology (Section 6.4). Secondary: OCS hardware fault (MTBF >250,000 hours; OCS fails in last-known-good cross-connect state). Both risks are quantified with documented response procedures.

§ 12.1 · Seven-mode failure matrix · Residual risk column

Yes. The pre-provisioned protection paths are activated by scripted PATCH calls and restored immediately. Because the OCS switches in <50 ms at the physical layer, the IP layer (BGP, TCP sessions, SAN replication) typically does not register the test event — sessions are preserved. Tests can be scheduled at low-traffic windows and completed in seconds.

§ 12.3 · Test without disruption · BGP session preservation

Standards & regulatory alignment

Yes. Sections 5.3–5.4 provide a full ANSI/TIA-942-C (May 2024) mapping: CDR/LISA™ in the Entrance Room (carrier demarcation), MTP trunks as backbone cabling, IANOS™ and OCS in the MDA or HDA, LC patch cords as intra-space equipment cords. The 800 mm minimum cabinet width requirement in MDA/IDA/HDA is accommodated by standard 19″ rack deployments.

§ 5.3–5.4 · ANSI/TIA-942-C May 2024 · Product space mapping table

Section 14 and References cover: DORA (EU 2022/2554, eff. Jan 2025) · NERC CIP-014-3 · HIPAA · SOX · PCI-DSS · ANSI/TIA-942-C (May 2024) · ANSI/TIA-568-C · RFC 8040 (RESTCONF) · IEEE 802.3. Industry benchmarks: Gartner, ITIC 2024, Splunk/Oxford Economics 2024 — all cited with full publication details.

§ 14 · References section · Full citation table

Section 14 identifies three converging drivers: (1) DORA — EU regulation requiring documented tested physical DR, effective January 2025 with enforcement; (2) AI/HPC workloads — GPU training runs have zero tolerance for extended outages; physical automation is now infrastructure-level, not optional; (3) deliberate fiber sabotage — documented fiber cuts across Europe and North America 2023–2025 make sub-50 ms automated L1 recovery a security requirement, not just an availability feature.

§ 14 · Three forces · DORA enforcement timeline

Vendor Evaluation & Architecture Committee

Technical architects · vendor selection committees · RFP evaluators — comparing POLATIS® OCS against MEMS alternatives, OEO solutions, and manual approaches.

MEMS vs DirectLight™

MEMS switches use movable micro-mirrors — mirror alignment determines coupling efficiency. DirectLight™ uses a solid-state beam-steering element with no mechanical components in the optical path. Consequences: MEMS insertion loss varies across ports and degrades as mirrors drift (periodic recalibration needed); DirectLight™ delivers consistent 0.5–1.5 dB across the full port matrix with no calibration drift over product lifetime.

§ 12.3 · MEMS vs DirectLight™ comparison table

At 6 DR tests/year + 2.4 real failover events = 8.4 switching cycles/year per path. At a 50,000-cycle MEMS mechanical limit, 20 years of operation consumes a meaningful portion of lifetime — and MEMS vendors may not warranty the switch under high-cycle DR testing. DirectLight™ has no cycle limit — DR test frequency has zero impact on OCS lifetime.

§ 12.3 · DR Test Impact on MEMS callout

Only for MEMS. Micro-mirror alignment is sensitive to mechanical shock and sustained vibration — MEMS OCS are not suitable for environments with significant HVAC vibration or seismic activity. DirectLight™ has no moving parts in the optical path and operates correctly at any orientation and in vibrating environments.

§ 12.3 · Vibration sensitivity row

POLATIS® Series 7000: 8×8 to 384×384 in a single chassis, non-blocking. MEMS OCS: typically 32×32 to 128×128 — larger configurations require cascaded chassis with additional 2–3 dB insertion loss per stage, materially affecting loss budgets on long inter-site spans.

§ 12.1 · Technology comparison table · Max port count row

OEO vs OOO comparison

An OEO conversion pair introduces 4–6 dB total loss. The POLATIS® OCS introduces 0.5–1.5 dB. On a long inter-site span with a tight optical power budget, the 4–6 dB OEO penalty may require optical amplification — adding cost, latency, and maintenance complexity. OEO is also protocol-specific, requiring replacement at each speed generation.

§ 12.1 · OEO vs OOO · Insertion loss row

Three specific gaps: (1) FRR requires pre-computed alternate paths in the MPLS control plane but cannot pre-provision at the physical layer; (2) FRR cannot detect gradual optical degradation — only reacts to link-down events; (3) FRR protects MPLS traffic only, not FC SAN, vMotion, OOB management, database replication, or other non-IP services that occupy separate OCS ports.

§ 12.2 · When OEO/IP-layer switching alone is insufficient

RFP & evaluation criteria

From Section 12.1: (1) insertion loss across full port matrix; (2) APS switchover time; (3) MTBF and mechanical cycle limit; (4) vibration/orientation sensitivity; (5) OPM per port — standard vs optional; (6) programmable API standard — RFC 8040 vs proprietary; (7) maximum port count in single chassis; (8) dark fiber pre-provisioning support; (9) protocol/data-rate agnosticism. The comparison table scores POLATIS® DirectLight™ against MEMS, OEO, and manual patch across all nine.

§ 12.1 · Technology comparison table

RESTCONF API: IETF RFC 8040 with YANG models per RFC 6020/7950. Physical infrastructure: ANSI/TIA-942-C (May 2024). Connector types: IEC 61754-7-2 and ANSI/TIA-604-5 (MTP). Fiber: OS2 per ITU-T G.652.D. Data rates: IEEE 802.3 (all generations). All standards citations documented in the References section.

§ References · Full standards citation table

Always use 3 ports per 1+1 APS protected service (working input + protection input + equipment output). Divide total port count by 3 for the protected service count: 32×32 → 10 services · 64×64 → 21 services · 96×96 → 32 services · 128×128 → 42 services · 192×192 → 64 services · 256×256 → 85 services · 384×384 → 128 services. Any vendor quoting more services per port count is assuming simple pass-through routing, not APS switching.

§ 9.2 · Port math corrected · OCS sizing table
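
The sizing table reduces to floor division by 3; scripted for verification:

```python
# Protected-service capacity per chassis at 3 ports per 1+1 APS service.
PORTS_PER_SERVICE = 3
CHASSIS_SIZES = [32, 64, 96, 128, 192, 256, 384]

capacity = {n: n // PORTS_PER_SERVICE for n in CHASSIS_SIZES}
print(capacity)
# {32: 10, 64: 21, 96: 32, 128: 42, 192: 64, 256: 85, 384: 128}
```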