<?xml version="1.0" encoding="utf-8"?>
<?xml-model href="rfc7991bis.rnc"?>

<rfc
  xmlns:xi="http://www.w3.org/2001/XInclude"
  category="info"
  docName="draft-fjeldstrom-meditation-on-connectivity-01"
  ipr="trust200902"
  obsoletes=""
  updates=""
  submissionType="independent"
  xml:lang="en"
  version="3">
  <front>
    <title abbrev="Meditation on Connectivity">A Systemic Meditation on
      Internet Connectivity Equilibrium</title>
    <seriesInfo name="Internet-Draft"
                value="draft-fjeldstrom-meditation-on-connectivity-01"/>

    <author fullname="Erik Fjeldstrom" initials="E."
            surname="Fjeldstrom">
      <organization>Independent</organization>
      <address>
        <email>erik_fjeldstrom@yahoo.ca</email>
      </address>
    </author>

    <date year="2026" month="1" day="15"/>

    <keyword>connectivity history</keyword>

    <abstract>
      <t>
        This document presents a systemic meditation on how the Internet
        arrived at its present connectivity equilibrium. The analysis
        proceeds by retrospective reconstruction: examining
        observable adaptations, constraints, and deferred
        decisions across multiple layers of the stack, rather than by
        benchmarking, simulation, or protocol comparison.
      </t>
      <t>
        The term "meditation" is used deliberately to indicate a method
        grounded in historical observation, accumulated operational
        experience, and the interpretation of persistent compensatory
        mechanisms as empirical evidence of structural conditions.
        The document does not assign fault, advocate specific remedies,
        or propose new protocol mechanisms. Instead, it seeks to explain
        how a sequence of locally rational responses to real pressures
        interacted over time to produce a stable, but heavily mediated,
        connectivity equilibrium at Internet scale.
      </t>
    </abstract>
  </front>

  <middle>
    <section>
      <name>Purpose and Scope</name>
      <t>
        This document reconstructs how the Internet arrived at its
        present connectivity equilibrium by examining observable
        adaptations, constraints, and deferred decisions over time. It
        does not assign fault, advocate specific remedies, or
        propose new protocol mechanisms. Instead, it seeks to explain
        why the system evolved as it did, given the pressures it faced
        and the locally rational responses available to its
        participants.
      </t>
      <t>
        The analysis adopts a retrospective, systems-oriented
        perspective. It treats historical adaptations as evidence of
        underlying structural conditions rather than as errors or
        oversights. Decisions are evaluated in the context in which
        they were made, with attention to urgency, uncertainty, and
        available alternatives at the time. This framing is
        intentionally descriptive rather than corrective.
      </t>
      <t>
        A central premise of this document is that systemic outcomes
        cannot be understood solely by examining individual design
        choices in isolation. Instead, they emerge from the interaction
        of multiple pressures, operating at different timescales,
        that shape what kinds of decisions are feasible, visible, or
        deferrable. The intent here is to surface those interactions.
      </t>
      <t>
        This document is analytical rather than prescriptive. Its
        purpose is to make visible a pattern of systemic behavior that
        is otherwise easy to overlook precisely because the system has
        continued to function.
      </t>
      <t>
        A companion document revisits end-to-end reasoning under these
        contemporary conditions and examines the space of possible
        architectural responses. The present document confines itself to
        reconstruction and classification and does not propose remedies.
      </t>

      <section>
        <name>Central and Subsidiary Theses</name>

        <section>
          <name>Central Thesis</name>
          <t>
            The Internet's current connectivity equilibrium does not
            arise from the failure of a single architectural principle
            or protocol. Rather, it reflects the convergence of multiple
            eroded assumptions about physics, topology, authority, cost,
            and trust that once made ambient end-to-end connectivity
            inexpensive. As those assumptions eroded independently under
            new physical and policy constraints, the system responded by
            introducing mediation, buffering, and policy at multiple
            layers. The resulting equilibrium is stable not because the
            original assumptions still hold, but because compensatory
            mechanisms successfully absorbed their loss.
          </t>
        </section>

        <section>
          <name>Subsidiary Thesis</name>
          <t>
            Debates that localized the end-to-end problem primarily at
            the transport layer were not incorrect in their
            observations, but were constrained in scope by the urgency
            and visibility of transport-layer failures. They implicitly
            assumed that L4 was the first or only layer at which
            end-to-end semantics were withdrawn. In reality, analogous
            withdrawals had already occurred at the physical, link,
            and network layers, each for the same underlying reason:
            preventing a single participant from imposing unbounded cost
            on others. Structural pressures above and below the
            transport layer both demanded immediate attention and
            obscured the gradual loss of semantic clarity at L4,
            delaying focused reconsideration.
          </t>
        </section>

        <section>
          <name>Scope Clarification</name>
          <t>
            This observation is not intended to dismiss transport-layer
            research or to suggest that such work was conceptually
            misguided. Rather, it reflects the practical reality that
            urgent, layer-local failures necessarily shaped the framing
            of contemporaneous debate. Narrow focus under operational
            pressure should be understood as a constraint on visibility,
            not as an architectural error.
          </t>
        </section>

        <section>
          <name>Clarifying Observation (Ambient Endpoints)</name>
          <t>
            Throughout the stack, endpoints are ambient: each layer
            defines its own notion of an endpoint that is assumed to
            exist prior to higher-layer interaction. Physical endpoints
            exist as attached transceivers; link-layer endpoints exist
            as members of a broadcast or multicast domain; network-layer
            endpoints exist as addressable nodes within a routing scope;
            transport-layer endpoints exist as sockets and flows; and
            application endpoints exist as semantic actors.
          </t>
          <t>
            End-to-end reasoning therefore depends on the continued
            ambient availability of endpoints at each layer. As mediation
            and scoping were introduced to contain cost and enforce policy,
            the ambient nature of endpoints was progressively withdrawn
            or made conditional at multiple layers. A recurring structural
            pressure underlying these changes was the need to prevent
            any single participant from imposing unbounded cost on others,
            whether through fault, misconfiguration, or asymmetric
            resource consumption. As ambient participation was withdrawn
            to bound such costs, higher layers were forced to compensate,
            doing so as effectively as possible using the authority and
            visibility available to them. This observation explains why
            end-to-end behavior degraded independently across layers
            without any single point of failure.
          </t>
        </section>
      </section>

      <section>
        <name>Ambient Endpoints and Their Progressive Withdrawal (By
          Layer)</name>
        <ul spacing="normal">
          <li>
            <t>Physical Layer (L1): Attachment as Participation</t>
            <ul spacing="compact">
              <li>Ambient assumption: If a device is physically
                attached, it can participate in communication on equal
                terms.</li>
              <li>Withdrawal: Red/blue separation, switched media, and
                link termination replaced shared energy with bounded
                fault domains.</li>
              <li>Pressure: A single faulty or malicious transmitter
                could impose unbounded disruption on all others.</li>
              <li>Result: Physical attachment no longer implies ambient
                participation; existence becomes conditional and
                mediated.</li>
            </ul>
          </li>
          <li>
            <t>Link Layer (L2): Membership as Reachability</t>
            <ul spacing="compact">
              <li>Ambient assumption: Membership in a broadcast domain
                implies mutual reachability.</li>
              <li>Withdrawal: VLANs, multicast filtering, and broadcast
                suppression replaced flat broadcast with
                administratively scoped domains.</li>
              <li>Pressure: Broadcast amplification and heterogeneous
                media made shared fate expensive.</li>
              <li>Result: Link-layer endpoints remain, but membership is
                policy-defined rather than ambient.</li>
            </ul>
          </li>
          <li>
            <t>Network Layer (L3): Addressability as Existence</t>
            <ul spacing="compact">
              <li>Ambient assumption: An address implies routability and
                potential reachability.</li>
              <li>Withdrawal: Policy routing, routing-domain separation,
                and later firewalls conditioned reachability.</li>
              <li>Pressure: Divergent trust domains and administrative
                scale.</li>
              <li>Result: Addressability no longer implies permission or
                path availability.</li>
            </ul>
          </li>
          <li>
            <t>Transport Layer (L4): Packet Arrival as Conversation</t>
            <ul spacing="compact">
              <li>Ambient assumption: If packets arrive, a conversation
                may proceed; failure is explicit.</li>
              <li>Withdrawal: Admission control, state limits, silent
                drops, and middlebox mediation.</li>
              <li>Pressure: State exhaustion, asymmetric resources, and
                the ambiguity of silence.</li>
              <li>Result: Transport endpoints persist, but
                conversational availability becomes inferred.</li>
            </ul>
          </li>
          <li>
            <t>Application/Semantic Layer (L7): Success as
              Correctness</t>
            <ul spacing="compact">
              <li>Ambient assumption: Successful interaction implies
                semantic correctness.</li>
              <li>Withdrawal: Retries, gateways, relays, and masking
                introduced ambiguity.</li>
              <li>Pressure: Uptime expectations and partial failure
                tolerance.</li>
              <li>Result: Semantic endpoints remain, but correctness is
                increasingly inferred.</li>
            </ul>
          </li>
        </ul>
        <t>
          This inventory provides the analytical baseline for the
          remainder of
          this document. Later sections treat these progressive
          withdrawals as
          observed structural conditions, not as isolated design
          mistakes.
        </t>
      </section>
    </section>

    <section>
      <name>Baseline Assumptions and Early Operating Conditions</name>
      <t>
        Early Internet architecture assumed relatively stable hosts,
        cooperative administration, and ambient reachability.
        Hosts were institutionally operated, and participation implied
        adherence to shared norms and oversight.
      </t>
      <t>
        Under these conditions, admission control and exposure were
        host-local concerns. Semantic authority, policy authority,
        and operational responsibility were closely aligned.
      </t>
      <t>
        These assumptions reflected lived operational reality at the
        time and were sufficient for the Internet's formative scale and
        threat model.
      </t>

      <section>
        <name>Early RFC Evidence, Grouped by Theme</name>
        <t>
          The following historical material is drawn from early RFCs and
          related meeting notes. These sources are grouped
          thematically rather than chronologically in order to highlight
          recurring problem framings and system pressures
          that were recognized while the network was still forming. None
          of these documents should be read as definitive
          blueprints for later architecture; instead, they record how
          designers and operators understood emerging constraints
          in real time.
        </t>

        <section>
          <name>Mediation, Local Control, and Administrative
            Boundaries</name>
          <t>
            Several early documents frame network interaction as
            mediated negotiation between autonomous systems, rather than
            as transparent end-to-end exchange.
          </t>
          <ul>
            <li>
              RFC 8 (1969) <xref target="RFC8"/> presents interaction as
              a sequence of steps across local control components: a
              user program establishes local arrangements, reaches a
              remote system, and requests service from that system's own
              control program. This actor/system framing emphasizes
              locality and administrative authority over abstraction
              layering.
            </li>
            <li>
              RFC 706 (1975) <xref target="RFC706"/> explicitly proposes
              selective refusal of traffic at the Host/IMP boundary,
              allowing a host to instruct the network to discard messages
              from misbehaving or unwanted sources as early as possible.
              This reflects early recognition that unconditional acceptance
              is unsustainable and that refusal must occur at a control
              boundary.
            </li>
          </ul>
          <t>
            Together, these sources show that mediation and refusal were
            treated as foundational capabilities, not as later security
            add-ons.
          </t>
        </section>

        <section>
          <name>Identity, Accountability, and the Meaning of
            "Free"</name>
          <t>
            Early discussions consistently treat network endpoints as
            accountable identities rather than anonymous communication
            primitives.
          </t>
          <ul>
            <li>
              RFC 147 (1971) <xref target="RFC147"/> defines sockets
              primarily as unique identifiers bound to processes and
              hosts, with explicit attention to logging
              and accounting. Communication is from one identifiable
              socket to another, reinforcing the notion of accountable
              endpoints.
            </li>
            <li>
              RFC 491 (1973) <xref target="RFC491"/> challenges the
              assumption that "free" network services must be loginless.
              Padlipsky argues that identity binding via login may
              still be required for authentication and access control,
              and proposes uniform free accounts as a portability
              compromise. This highlights early tension between
              convenience and semantic integrity.
            </li>
          </ul>
          <t>
            These discussions anticipate later concerns about identity,
            attribution, and consent, and reject the idea that free
            services imply absence of control.
          </t>
        </section>

        <section>
          <name>Fragmentation, Heterogeneous Environments, and Why
            "Normal" Features Were Deferred</name>
          <t>
            Plurality and heterogeneity were recognized as intrinsic
            conditions from the outset, and early operational reality
            shaped which features were urgent.
          </t>
          <ul>
            <li>
              RFC 169 (1971) <xref target="RFC169"/> notes that the
              number of networks had already grown to the point where
              participants could not all be familiar
              with each other, and explicitly invites discussion of
              diverse systems, protocols, and user communities.
              Fragmentation is treated as a given, not as a deviation.
            </li>
            <li>
              RFC 898 (1984) <xref target="RFC898"/> reflects mature
              experience with heterogeneous gateways, subnetworks, and
              autonomous systems, documenting how routing, translation,
              and management complexity scale with diversity.
            </li>
          </ul>
          <t>
            A related historical point is that many "normal" features
            associated with managed local networks, such as automatic
            configuration,
            routine endpoint discovery, and pervasive service location,
            were not treated as architectural necessities in the early
            Internet.
            This was not because such features were unknown, but because
            the environment did not yet demand them: early
            internetworking
            connected a relatively small number of large,
            institutionally operated hosts across administrative
            boundaries, rather than dense
            intranets of frequently rebooting, mobile endpoints. In that
            setting, explicit local arrangements, operator knowledge,
            and manually
            coordinated configuration were sufficient, and the
            architectural forcing function was inter-networking between
            distinct domains
            rather than internal plug-and-play convenience.
          </t>
          <t>
            As the Internet later grew inward into campuses and
            enterprises, accumulating large multi-LAN environments,
            higher endpoint churn,
            and widespread non-expert operation, automatic configuration
            and discovery became economically and operationally
            necessary, and
            the absence of first-class primitives increasingly had to be
            compensated elsewhere. RFC 1029 (1988)
            <xref target="RFC1029"/> provides a concrete example of
            this inward growth pressure, addressing ARP scaling, bridge
            intelligence, reboot detection, and cache coherence in large
            multi-LAN Ethernet environments where frequent host churn
            and internal topology complexity had become dominant
            concerns.
          </t>
        </section>

        <section>
          <name>Physical Reality, Delay, and Layer Blurring</name>
          <t>
            Several early documents show that physical constraints
            immediately stress interaction models and blur later
            conceptual layer boundaries.
          </t>
          <ul>
            <li>
              RFC 263 (1971) <xref target="RFC263"/> describes a "very
              distant" Host/IMP interface in which the host participates
              directly in framing, CRC generation,
              and retransmission. Reliability and framing are treated as
              boundary-of-control concerns rather than as cleanly
              separated layers.
            </li>
            <li>
              RFC 346 (1972) <xref target="RFC346"/> observes that
              satellite delay renders character-at-a-time remote echo
              marginal or unusable, even when throughput
              is unchanged. Postel emphasizes buffering strategy and
              suggests relocating input/echo semantics closer to the
              user system.
            </li>
          </ul>
          <t>
            These documents illustrate that delay and physical distance
            expose semantic assumptions early, forcing pragmatic
            integration across
            what would later be labeled layers.
          </t>
        </section>

        <section>
          <name>Cost, Noise, Control-Plane Externalities, and the Turn
            Toward Managed High-Bandwidth Networks</name>
          <t>
            Economic cost, background traffic, and control-plane scaling
            pressures appear early and intensify as bandwidth increases.
          </t>
          <ul>
            <li>
              RFC 392 (1972) <xref target="RFC392"/> measures host CPU
              and paging costs for network transmission, showing that
              the cost of moving data can exceed the
              cost of remote computation. Networking is treated
              explicitly as a distributed-systems cost problem rather
              than a free transport service.
            </li>
            <li>
              RFC 425 (1972) <xref target="RFC425"/> identifies "random
              prodding and poking" (e.g., host surveys) as a significant
              and unattributed source of overhead,
              and proposes consolidation and consent as remedies: an
              early recognition of background noise as a systemic
              externality.
            </li>
            <li>
              RFC 898 (1984) <xref target="RFC898"/> documents routing
              update storms, neighbor-probe scaling (e.g., N-squared
              behavior), and buffer exhaustion in gateways,
              illustrating how control-plane traffic can dominate useful
              forwarding work.
            </li>
            <li>
              RFC 1077 (1988) <xref target="RFC1077"/> synthesizes these
              concerns in the context of gigabit networking, explicitly
              reframing the future Internet as a management
              architecture. Importantly, this was not a speculative or
              marginal position: RFC 1077 reports the outcome of a
              DARPA-convened working group
              composed of principal architects, operators, and major
              stakeholders from the military, government, and research
              communities. It reflects
              the operational priorities of organizations that were
              already among the largest and most demanding users of
              packet networks, particularly
              in command-and-control, scientific computing, and secure
              communications contexts.
            </li>
            <li>
              RFC 1093 (1989) <xref target="RFC1093"/> makes the
              architectural consequences of these pressures operational.
              In defining the NSFNET routing architecture, it
              explicitly enforces policy-based filtering between the
              NSFNET backbone, regional networks, and peer networks such
              as ARPANET/MILNET. Certain
              routes are deliberately suppressed, metrics are
              reconstituted centrally, and trust is assigned by
              Autonomous System rather than by reachability
              alone. This document is notable as one of the earliest
              points where the evolving model of the Internet is
              acknowledged implicitly through
              implementation: the architects were aware that pure
              reachability was no longer sufficient, and encoded
              governance, policy, and functional
              separation directly into the routing fabric because they
              had to make the system operate at scale.
            </li>
          </ul>
          <t>
            Taken together, these sources show a clear progression:
            increasing bandwidth does not eliminate cost or noise, but
            instead
            shifts the limiting factors toward control, coordination,
            security, governance, and explicit policy enforcement.
          </t>
        </section>
      </section>
    </section>

    <section>
      <name>Emergence of Existential Stressors</name>
      <t>
        The progressive withdrawal of ambient endpoints described
        earlier did not occur in a vacuum. It was driven by a set of
        existential stressors
        that demanded immediate response and shaped which adaptations
        were feasible, visible, or deferrable. These stressors were
        recognized early and
        recur throughout the historical record.
      </t>

      <section>
        <name>Fragmentation and Administrative Plurality</name>
        <t>
          As documented as early as RFC 169 <xref target="RFC169"/>, the
          network rapidly evolved into an environment of multiple,
          independently administered systems. Designers
          no longer assumed global familiarity, uniform policy, or
          shared objectives. This plurality forced early attention to
          gateway design, routing
          boundaries, and management coordination, and made purely
          uniform solutions impractical.
        </t>
      </section>

      <section>
        <name>Physical Distance, Delay, and Interaction Breakdown</name>
        <t>
          Physical realities such as propagation delay exposed fragile
          interaction semantics almost immediately. RFC 346 <xref
            target="RFC346"/> shows that even modest increases
          in delay (e.g., via satellite links) could render
          character-at-a-time interaction unusable, prompting discussion
          of buffering strategies and
          relocation of input/echo processing. These effects occurred
          well before Internet-scale deployment.
        </t>
      </section>

      <section>
        <name>Cost and Host Resource Exhaustion</name>
        <t>
          Economic viability emerged as a dominant constraint. RFC 392
          <xref target="RFC392"/> demonstrates that host CPU time,
          paging behavior, and operating-system abstractions
          could make network transmission more expensive than remote
          execution itself. This reframed networking as a
          distributed-systems cost problem
          rather than a mere communications issue.
        </t>
      </section>

      <section>
        <name>Background Traffic and Unattributed Load</name>
        <t>
          Control-plane and exploratory traffic quickly became a
          measurable burden. RFC 425 <xref target="RFC425"/> documents
          how host surveys and other unsolicited probes
          generated significant overhead without clear attribution,
          motivating proposals for consolidation and explicit consent.
          These concerns foreshadow
          later issues with background chatter and steady-state
          coordination traffic.
        </t>
      </section>

      <section>
        <name>Unconditional Acceptance and Denial of Service</name>
        <t>
          The assumption that hosts must accept all traffic proved
          untenable. RFC 706 <xref target="RFC706"/> explicitly
          identifies denial-of-service
          risks from misbehaving
          peers and proposes selective refusal at the Host/IMP boundary.
          This represents early recognition that availability requires
          the ability to
          decline traffic before host resources are consumed.
        </t>
      </section>

      <section>
        <name>Routing Scale, Control-Plane Costs, and Exit-Gateway
          Geometry</name>
        <t>
          By the early 1980s, routing itself had become a stressor. RFC
          898 <xref target="RFC898"/> documents how routing update
          floods, neighbor probing, and limited buffers
          strained gateways, and how thinking in terms of entrance and
          exit gateways reshaped autonomous systems into transit
          fabrics. These dynamics
          parallel later experiences with relay-centric architectures at
          higher layers.
        </t>
      </section>

      <section>
        <name>Security Normalization: Routing Withdrawal, Filtering, and
          Firewalls</name>
        <t>
          By the early 1990s, operational security controls such as
          routing withdrawal, packet filtering, and firewall choke
          points were no longer
          exceptional mechanisms but standard operational practice. RFC
          1244 (Site Security Handbook) <xref target="RFC1244"/> treats
          these mechanisms as routine tools available
          to site operators, including selective route suppression,
          gateway filtering, and controlled connectivity.
        </t>
        <t>
          A key inflection point for this normalization was the 1988
          Internet worm. RFC 1135 (1989) <xref target="RFC1135"/>, a
          retrospective on the incident, contains a
          blunt assessment in its Security Considerations: "If security
          considerations had not been so widely ignored in the Internet,
          this memo
          would not have been possible." In the aftermath, many sites
          tightened access, some disconnected entirely, and the
          community accelerated
          incident response coordination and perimeter controls.
        </t>
      </section>

      <section>
        <name>Evolving Internet Membership: From IP Reachability to
          Application-Level Participation</name>
        <t>
          RFC 1287 (1991) <xref target="RFC1287"/> makes explicit that
          the original IP-connectivity definition of the Internet had
          already broken down. Systems could be
          considered part of the Internet despite partial connectivity,
          policy filtering, or lack of IP reachability, so long as they
          participated
          at higher layers (e.g., RFC 822 mail). The architects proposed
          shifting the organizing principle of the Internet from IP
          addressability
          to application-level naming and directories.
        </t>
      </section>

      <section>
        <name>Inward Growth and Configuration Complexity</name>
        <t>
          RFC 1029 (1988) <xref target="RFC1029"/> documents the
          operational pressures that arise as the Internet grows inward
          into large multi-LAN environments: address
          resolution scaling, bridge intelligence, reboot detection, and
          cache coherence. This reinforces that partial visibility and
          constrained
          reachability can be expected outcomes of internal complexity
          and churn.
        </t>
      </section>

      <section>
        <name>Architectural Closure and the End of Universal
          Routability</name>
        <t>
          By the late 1980s and early 1990s, the Internet's core
          architectural tensions were no longer latent. They were
          explicitly identified,
          debated, and, in key places, encoded into operational practice.
        </t>
        <t>
          RFC 1093 (1989) <xref target="RFC1093"/> provides a concrete
          example of functional separation and policy-mediated
          reachability at backbone scale: military-only
          routes (ARPANET/MILNET) were deliberately suppressed from
          civilian regional backbones, with Autonomous Systems serving
          as trust and
          policy boundaries.
        </t>
        <t>
          RFC 1627 (1994), "Network 10 Considered Harmful" <xref
            target="RFC1627"/>, marks a clear self-awareness moment: the
          community recognized that the fully routable,
          globally unique IPv4 Internet was becoming operationally
          fragile under address exhaustion and policy constraints. While
          the specific
          compensations adopted later differed from what many hoped
          (e.g., NAT and application-layer identity became structural),
          the underlying
          pressures were already visible and the direction of travel was
          clear.
        </t>
        <t>
          Taken together, these stressors explain why compensatory
          mechanisms emerged and hardened. They also show that many
          pressures commonly
          attributed to later Internet growth were visible, and actively
          discussed, by no later than the early 1990s.
        </t>
      </section>

      <section>
        <name>Historical Context: Architectural Closure
          (1972-1994)</name>
        <t>
          This history should not be read as a failure narrative. The
          record indicates that by the early 1990s the Internet's core
          architectural
          tensions were already clearly identified and, in key
          operational networks, treated as constraints that could not be
          wished away.
        </t>
        <t>
          Across the sources reviewed here, a consistent arc is visible:
        </t>
        <ul>
          <li>
            1972-1975: Delay, background traffic, and selective refusal
            were already recognized as systemic issues (e.g., satellite
            delay effects,
            survey "noise", and early refusal/filtering proposals).
          </li>
          <li>
            1984: Routing and gateway complexity, update scaling, and
            the inevitability of policy and control-plane costs were
            discussed as
            operational realities.
          </li>
          <li>
            1988-1989: High-bandwidth planning reframed the Internet as
            a management architecture, while backbone routing explicitly
            enforced
            administrative separation and policy filtering.
          </li>
          <li>
            1991: Security controls (withdrawal, filtering, firewalls)
            were normalized as routine operations, and the definition of
            "on the Internet"
            began shifting upward from IP reachability toward
            application-level naming, directories, and relay-mediated
            participation.
          </li>
          <li>
            1994: The end of universal routability under IPv4 was
            recognized as a practical inflection point; subsequent
            decades largely operationalized
            compensations rather than discovering new categories of
            constraint.
          </li>
        </ul>
        <t>
          This framing is essential context for revisiting end-to-end
          reasoning in a world where reachability is conditional,
          identities are
          increasingly application-scoped, and intermediaries are
          structural.
        </t>
      </section>
    </section>

    <section>
      <name>Observed Adaptive Responses</name>
      <t>
        The adaptive responses that emerged as ambient reachability was
        progressively withdrawn can be grouped into several recurring
        patterns.
      </t>
      <t>
        This section marks the transition from historical reconstruction
        to
        structural observation: these patterns are treated as convergent
        adaptations to shared constraints, not as a protocol-by-protocol
        survey.
      </t>
      <t>
        These patterns appeared independently across applications,
        vendors, and administrative domains, yet converged on similar
        structural solutions.
      </t>

      <section>
        <name>Relay-Centered Connectivity</name>
        <t>
          One of the earliest and most persistent adaptations was the
          introduction of relays. Rather than assuming that two
          endpoints could establish
          direct communication, systems increasingly routed interaction
          through one or more intermediary nodes that were known to be
          reachable from both sides.
        </t>
        <t>
          Mail transfer agents, application-layer gateways, TURN-like
          relays, rendezvous servers, and later cloud-hosted service
          front ends all exemplify
          this pattern. Relays provided a point of policy enforcement,
          buffering, identity translation, and fault isolation. While
          they increased latency and
          centralized load, they dramatically reduced the requirement
          for mutual ambient reachability.
        </t>
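        <t>
          The shape of this adaptation can be made concrete with a
          minimal sketch. The following Python fragment is illustrative
          rather than normative and uses invented addresses: both peers
          dial outward to a mutually reachable relay, which pairs them
          and forwards bytes in each direction. Production relays such
          as TURN add authentication, allocation, and framing that are
          omitted here.
        </t>
        <sourcecode type="python"><![CDATA[
# Minimal rendezvous-relay sketch (illustrative only). Two peers
# that cannot accept inbound connections each dial OUT to this
# relay, which pairs them in arrival order and forwards bytes in
# both directions.
import socket
import threading

HOST, PORT = "127.0.0.1", 9000  # invented demo address

def pump(src, dst):
    """Copy bytes from src to dst until src closes."""
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    finally:
        dst.close()

def serve():
    with socket.create_server((HOST, PORT)) as srv:
        while True:
            # The relay, not either peer, is the mutually reachable
            # point; peers are paired in arrival order.
            a, _ = srv.accept()
            b, _ = srv.accept()
            threading.Thread(target=pump, args=(a, b), daemon=True).start()
            threading.Thread(target=pump, args=(b, a), daemon=True).start()

if __name__ == "__main__":
    serve()
]]></sourcecode>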
      </section>

      <section>
        <name>Protocol Encapsulation and Substrate Reuse</name>
        <t>
          Another major adaptation was the reuse of widely permitted
          substrates to carry new application semantics. HTTP emerged as
          the dominant example of
          this pattern.
        </t>
        <t>
          As early as RFC 3205 (2002) <xref target="BCP56"/>, the IETF
          recognized that protocol designers were deliberately layering
          new services over HTTP in order to traverse
          firewalls, proxies, and network address translators. This
          practice was sufficiently widespread to require formal
          guidance, resulting in BCP 56. Two
          decades later, the same BCP was revised and reissued as RFC
          9205 (2022) <xref target="BCP56"/>, reflecting accumulated
          operational experience rather than a change in direction.
        </t>
        <t>
          The persistence of BCP 56 over twenty years demonstrates that
          HTTP substrate reuse was not a transient workaround but a
          durable response to
          structural connectivity constraints.
        </t>
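        <t>
          A minimal sketch of the pattern, illustrative rather than
          normative: an application-specific operation is wrapped in an
          ordinary HTTP POST so that firewalls, proxies, and NATs
          observe only routine web traffic. The endpoint URL and
          message shape below are invented for illustration.
        </t>
        <sourcecode type="python"><![CDATA[
# Substrate-reuse sketch (illustrative only): the application
# operation travels inside a plain HTTP POST. To the network path
# this is just HTTPS on port 443; the semantics live in the body.
import json
import urllib.request

def send_over_http(payload):
    body = json.dumps(payload).encode("utf-8")
    req = urllib.request.Request(
        "https://relay.example.net/api/v1/messages",  # invented URL
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(send_over_http({"op": "ping", "peer": "node-17"}))
]]></sourcecode>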
      </section>

      <section>
        <name>Stateful Traversal and Long-Lived Associations</name>
        <t>
          Where direct inbound reachability was unavailable, systems
          shifted toward models that established outbound-initiated,
          long-lived associations.
          These associations inverted the direction of connectivity:
          endpoints that could not accept unsolicited inbound traffic
          instead maintained persistent
          outbound sessions to rendezvous points.
        </t>
        <t>
          Examples include message polling, push-notification channels,
          long-polling, WebSockets, and later QUIC-based connections.
          These techniques
          transformed connectivity from a stateless addressing problem
          into a stateful session management problem, trading simplicity
          for reliability under
          constrained reachability.
        </t>
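        <t>
          The simplest of these techniques, long polling, can be
          sketched as follows. The fragment is illustrative rather than
          normative; the rendezvous URL and response format are
          invented. The structural point is that every association is
          outbound-initiated and kept alive by repetition.
        </t>
        <sourcecode type="python"><![CDATA[
# Long-polling sketch (illustrative only). An endpoint that cannot
# accept unsolicited inbound traffic repeatedly asks a rendezvous
# server for pending messages; the server holds each request open
# until data arrives or a wait interval expires.
import json
import time
import urllib.error
import urllib.request

POLL_URL = "https://rendezvous.example.net/inbox?wait=30"  # invented

def poll_forever(handle):
    while True:
        try:
            # The server may hold this request open for ~30 s; from
            # the network's view all connectivity is outbound.
            with urllib.request.urlopen(POLL_URL, timeout=35) as resp:
                for msg in json.load(resp):
                    handle(msg)
        except (urllib.error.URLError, TimeoutError):
            time.sleep(1)  # brief pause, then re-establish

if __name__ == "__main__":
    poll_forever(print)
]]></sourcecode>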
      </section>

      <section>
        <name>Identity Elevation and Application-Scoped Authority</name>
        <t>
          As network-layer identity became unreliable or ambiguous,
          applications increasingly bound identity and authority at
          higher semantic layers.
          Authentication tokens, application-level identifiers, and
          service-specific namespaces replaced implicit trust in source
          addresses.
        </t>
        <t>
          This shift aligned authority with mechanisms that applications
          could control, but further decoupled application semantics
          from network topology.
          Endpoints were no longer defined primarily by where they were
          located, but by what credentials or context they presented.
        </t>
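        <t>
          A minimal sketch of this elevation, illustrative rather than
          normative: authority is bound to a verifiable credential
          rather than to a source address. An HMAC-signed token stands
          in here for whatever credential scheme a deployment actually
          uses; the key and token layout are invented.
        </t>
        <sourcecode type="python"><![CDATA[
# Identity-elevation sketch (illustrative only). The endpoint is
# whoever presents a valid credential; no source address is
# consulted anywhere in the verification path.
import hashlib
import hmac
from typing import Optional

SECRET = b"demo-key-not-for-production"  # invented shared secret

def issue_token(user_id):
    sig = hmac.new(SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    return user_id + "." + sig

def verify_token(token) -> Optional[str]:
    """Return the user id if the token verifies, else None."""
    user_id, _, sig = token.rpartition(".")
    expected = hmac.new(SECRET, user_id.encode(),
                        hashlib.sha256).hexdigest()
    return user_id if hmac.compare_digest(sig, expected) else None

if __name__ == "__main__":
    t = issue_token("alice")
    print(verify_token(t))        # alice
    print(verify_token(t + "x"))  # None
]]></sourcecode>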
      </section>

      <section>
        <name>Silent Failure Tolerance and Retry Semantics</name>
        <t>
          As ambient reachability became unreliable, applications
          adapted by treating silence as an expected condition rather
          than as an exceptional failure.
          Packet loss, filtering, middlebox interference, and
          policy-based drops are often indistinguishable from delay or
          congestion at the application layer.
        </t>
        <t>
          Rather than assuming explicit failure signaling, applications
          adopted retry loops, timeouts, exponential backoff, and
          idempotent operations.
          These techniques allow progress in the presence of partial
          failure but shift complexity upward: correctness becomes
          probabilistic and inferred rather than explicit.
        </t>
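        <t>
          The canonical form of this adaptation is a retry loop with
          exponential backoff and jitter around an idempotent
          operation. The following sketch is illustrative rather than
          normative, with invented parameter defaults. Note that the
          caller never learns why an attempt failed, only that a later
          attempt succeeded.
        </t>
        <sourcecode type="python"><![CDATA[
# Retry-semantics sketch (illustrative only). Silence is treated
# as expected; the operation is assumed idempotent; backoff with
# jitter bounds the load imposed on shared infrastructure.
import random
import time

def call_with_retry(op, attempts=5, base=0.5):
    """Run op(); after a failure, wait base * 2**n seconds plus
    jitter before retrying. Loss, filtering, policy drops, and
    congestion are indistinguishable here: the caller can only
    infer availability from eventual success."""
    for n in range(attempts):
        try:
            return op()
        except (OSError, TimeoutError):
            if n == attempts - 1:
                raise
            time.sleep(base * (2 ** n) + random.uniform(0, base))

if __name__ == "__main__":
    outcomes = iter([OSError("drop"), OSError("drop"), "ok"])
    def op():
        item = next(outcomes)
        if isinstance(item, Exception):
            raise item
        return item
    print(call_with_retry(op))  # "ok" after two backoffs
]]></sourcecode>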
        <t>
          This adaptation increases robustness under constrained
          reachability but also obscures failure causes and complicates
          diagnosis. Silent tolerance
          trades semantic clarity for survivability, reinforcing the
          broader trend of compensating at higher layers for withdrawn
          ambient guarantees below.
        </t>
      </section>

      <section>
        <name>Transport-Layer Repair Attempts: SCTP and QUIC</name>
        <t>
          The Stream Control Transmission Protocol (SCTP)
          <xref target="RFC4960"/> represents an early attempt to
          preserve transport-layer semantic clarity in the face of
          eroding endpoint assumptions. Standardized around 2000, SCTP
          introduced multi-homing, association-based identity,
          path-aware failure detection, message framing, and
          multistreaming. Together, these features explicitly rejected
          the assumption that a single IP address uniquely and stably
          identifies a transport endpoint.
        </t>
        <t>
          SCTP distinguished between path failure and peer failure,
          attempted to maintain semantic precision under partial
          failure, and treated transport associations, not addresses,
          as the primary unit of identity. In doing so, SCTP
          anticipated many later concerns about mobility, multihoming,
          and ambiguous silence.
        </t>
        <t>
          However, SCTP assumed that new transport semantics could
          deploy transparently through the network. By the time of its
          standardization, that assumption had already been withdrawn:
          middleboxes, firewalls, and NATs were pervasive, and
          unfamiliar transport protocols were routinely blocked. As a
          result, SCTP's technically sound repairs were largely
          displaced by compensations implemented above the transport
          layer.
        </t>
        <t>
          QUIC <xref target="RFC9000"/>, by contrast, represents a later
          and more successful adaptation. Rather than repairing L4 in
          place, QUIC relocates transport semantics into user space
          and runs over UDP, a substrate already widely permitted. QUIC
          encrypts most transport headers, preventing ossification by
          intermediaries, and treats connection identity, path
          migration, and congestion control as application-visible
          concerns.
        </t>
        <t>
          The contrast between SCTP and QUIC is illustrative. SCTP
          attempted to restore ambient transport semantics that the
          network no longer supported. QUIC accepts mediation as
          structural and adapts by shifting authority upward,
          aligning deployment reality with semantic control. This
          contrast reinforces the broader pattern observed throughout
          this document: when ambient assumptions are withdrawn at a
          given layer, durable solutions tend to emerge by relocating
          responsibility rather than by attempting restoration in place.
        </t>
      </section>

      <section>
        <name>Application-Guided Path Selection and Cost
Signaling</name>
        <t>
          A later and more explicit form of semantic elevation appears
          in the Application-Layer Traffic Optimization (ALTO) protocol
          (RFC 7285) <xref target="RFC7285"/>. ALTO exposed network
          cost, locality, and preference information as an
          application-consumable service, allowing endpoints to make
          informed choices among multiple reachable peers or resources.
        </t>
        <t>
          This represented a qualitative shift in responsibility.
          Traditional routing determines how packets flow once a
          destination is chosen; ALTO assisted applications in deciding
          which destinations should be chosen in the first place. In
          effect, ALTO performed a form of quasi-source routing at L7:
          the network supplied advisory cost information, but the
          application selected targets and thereby shaped traffic
          patterns.
        </t>
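        <t>
          The following sketch, illustrative rather than normative,
          shows the kind of decision ALTO informed: the application,
          not the network, ranks candidate peers by advisory cost
          before opening any connections. The JSON shape is modeled
          loosely on the RFC 7285 cost-map concept, with invented PIDs
          and values; a real client would fetch maps from an ALTO
          server rather than embed them.
        </t>
        <sourcecode type="python"><![CDATA[
# ALTO-style peer ranking sketch (illustrative only). The cost map
# below loosely follows the RFC 7285 cost-map shape; all PIDs and
# costs are invented, and the peer-to-PID grouping is assumed to
# come from elsewhere (e.g., an ALTO network map).
import json

COST_MAP_JSON = """
{
  "meta": {"cost-type": {"cost-mode": "numerical",
                         "cost-metric": "routingcost"}},
  "cost-map": {
    "PID-mine": {"PID-a": 1, "PID-b": 5, "PID-c": 12}
  }
}
"""

def rank_peers(my_pid, candidates):
    """Sort peer names by advisory cost from my_pid; unknown PIDs
    sort last. candidates maps peer name -> PID."""
    costs = json.loads(COST_MAP_JSON)["cost-map"][my_pid]
    return sorted(candidates,
                  key=lambda p: costs.get(candidates[p], float("inf")))

if __name__ == "__main__":
    peers = {"peer1": "PID-b", "peer2": "PID-a", "peer3": "PID-c"}
    print(rank_peers("PID-mine", peers))  # ['peer2', 'peer1', 'peer3']
]]></sourcecode>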
        <t>
          Cost, congestion, policy, and locality, once implicit
          properties of the network fabric, were surfaced explicitly to
          applications. This shift acknowledged that reachability alone
          no longer provided sufficient semantic guidance for efficient
          or stable behavior at scale.
        </t>
        <t>
          ALTO did not replace routing, nor did it alter forwarding
          behavior. Instead, it compensated for the loss of ambient
          semantic information by elevating selected network knowledge
          to a controlled, advisory interface.
        </t>
        <t>
          In practice, however, ALTO saw limited deployment outside a
          small number of research and operator-driven environments.
          Much like SCTP at the transport layer, it represented a
          semantically well-founded architectural repair that failed to
          align with prevailing deployment incentives. Application
          developers largely bypassed ALTO in favor of self-managed
          heuristics, static configuration, or embedding cost and
          locality inference directly into application logic, often
          using widely permitted substrates and measurement-based
          adaptation.
        </t>
        <t>
          As a result, ALTO functions primarily as evidence of
          architectural recognition rather than as a dominant
          operational mechanism: it demonstrates that the need for
          explicit cost and locality signaling was understood, even as
          most implementations chose compensatory approaches that
          avoided new dependencies on network-provided control planes.
        </t>
      </section>
    </section>

    <section>
      <name>Persistence and Normalization of Compensation</name>
      <t>
        Over time, compensatory mechanisms ceased to be exceptional.
        What began as fallback behavior hardened into steady-state
        infrastructure. Relay paths became primary paths, and indirect
        connectivity became the default assumption rather than the
        contingency plan.
      </t>
      <t>
        This persistence had several reinforcing effects. First,
        widespread deployment increased the return on further
        investment in compensatory mechanisms, making them more capable
        and more attractive. Second, their effectiveness reduced the
        frequency of visible failures that might have triggered
        architectural reconsideration.
      </t>
      <t>
        In the presence of more urgent, existential concerns, other
        issues were routinely deferred until they themselves became
        urgent. Because compensatory mechanisms continued to work, the
        cost of revisiting underlying assumptions appeared higher
        than the cost of continued adaptation.
      </t>
      <t>
        As a result, the system accumulated technical and conceptual
        debt without a clear moment at which repayment appeared
        necessary or even desirable.
      </t>
    </section>

    <section>
      <name>Indicators: Structural Load and Constraint</name>
      <t>
        Despite continued operation, the system began to exhibit
        recurrent indicators of underlying load and constraint. These
        indicators were not catastrophic failures, but patterns that
        suggested increasing reliance on compensation and diminishing
        alignment between architectural assumptions and operational
        reality.
      </t>
      <t>
        Such indicators included loss of locality, concentration of load
        onto shared infrastructure, opaque or delayed failure modes,
        and growing difficulty in determining where authority and
        responsibility for communication decisions actually resided.
      </t>
      <t>
        These signals were often diffuse and probabilistic rather than
        binary. They manifested as degraded efficiency, increased
        complexity, or brittleness under stress rather than as
        immediate outages. Because the system continued to function,
        they were tolerated rather than treated as forcing events.
      </t>
      <t>
        The absence of a single, unambiguous failure made it difficult
        to justify a coordinated architectural response.
      </t>
    </section>

    <section>
      <name>Analysis: Compensatory Mechanisms as Evidence</name>
      <t>
        When a system model depicts a viable path that is consistently
        avoided, the discrepancy should be attributed to the model or
        the path, not to the actors responding rationally to observed
        constraints.
      </t>
      <t>
        A familiar example is the formation of pedestrian "desire
        paths." Such paths arise when users repeatedly choose routes
        that better reflect actual needs than those anticipated by the
        original design. Over time, repeated use alters the environment
        itself, and what began as an exception becomes a structural
        feature.
      </t>
      <t>
        ALTO illustrates an attempt to formalize application-visible
        cost signaling after routing and admission authority had
        already moved. Its limited impact is therefore informative: it
        demonstrates both the recognition of the problem and the
        difficulty of addressing it once compensatory mechanisms have
        become structural.
      </t>
      <t>
        In the Internet's case, compensatory connectivity mechanisms
        functioned as desire paths. They revealed a mismatch between
        architectural assumptions about reachability and the
        operational conditions under which the system was actually
        used. Their persistence and success transformed them from
        temporary adaptations into defining characteristics of the
        system.
      </t>
      <t>
        Seen in this light, compensatory mechanisms are not merely
        technical artifacts; they are empirical signals about where
        system models no longer align with reality.
      </t>
      <t>
        A similar interpretive stance appears in human-system design.
        When users repeatedly avoid an architected path, analysis
        treats the avoidance as evidence of misaligned assumptions
        rather than as user error. Norman's discussion of "desire
        paths" frames such behavior as empirical data about real
        constraints and incentives, not as deviation from intent
        <xref target="Design"/>. The persistence and convergence of
        compensatory mechanisms in Internet connectivity can be
        understood in the same way: not as architectural failure, but
        as evidence that certain assumptions no longer held under
        operational conditions.
      </t>
    </section>

    <section>
      <name>Post-Desire Path: Three Signals of an Unresolved
        Architectural Shift</name>
      <t>
        The desire-path argument establishes that persistent operator
        behavior is evidence of a mismatch between the model and the
        environment. The following RFCs are useful precisely because
        they show the Internet recognizing the mismatch while stopping
        short of formally resolving it.
      </t>
      <t>
        The observations in this section are descriptive rather than
        prescriptive: they examine how the mismatch has been
        acknowledged and framed, not how it ought to be resolved.
      </t>

      <section>
        <name>RFC 7288: Firewalls as a Persistent Feature Without Formal
          Architectural Status</name>
        <t>
          RFC 7288 <xref target="RFC7288"/> is notable less for any
          specific proposal than for the careful position it occupies
          within the existing architectural narrative.
        </t>
        <t>
          The document acknowledges the widespread and long-standing
          presence of firewalls, and does so in a pragmatic and
          operationally grounded way. At the same time, it deliberately
          avoids treating firewalls as a permanent structural element
          of the Internet architecture. Instead, they are discussed as
          policy-enforcing devices that exist alongside the
          architecture rather than within its formal core.
        </t>
        <t>
          From a desire-path perspective, this restraint is
          understandable. RFC 7288 operates within an architectural
          framework that continues to value the end-to-end principle as
          a guiding ideal, even as practice has moved away from
          ambient inbound reachability. Rather than declaring that shift
          complete, the document treats firewalls as an external
          constraint that must be accommodated.
        </t>
        <t>
          The consequence of this position is not denial, but deferral.
          Firewalls are assumed to be present in practice, yet their
          ubiquity is not elevated to a baseline architectural
          condition. Subsequent designs are therefore encouraged to
          cope with their existence rather than to integrate them as
          a first-class premise, leading to repeated work on traversal,
          discovery, and rendezvous mechanisms instead of an explicit
          acknowledgement that ambient inbound reachability is no
          longer the norm.
        </t>
        <t>
          In this sense, the desire path is clearly visible, but the
          architectural map remains intentionally conservative about
          redrawing its boundaries.
        </t>
      </section>

      <section>
        <name>RFC 5218: When Widely Deployed Is Not the Same as
          Structurally Sound</name>

        <t>
          RFC 5218 <xref target="RFC5218"/> provides a useful corrective
          by explicitly cautioning against equating deployment success
          with architectural merit.
        </t>
        <t>
          The Internet has repeatedly adopted mechanisms that were
          operationally expedient under pressure, such as address
          sharing, middleboxes, and application-layer workarounds,
          without those mechanisms being clean fits for the original
          architectural model. RFC 5218 recognizes that popularity
          can arise from necessity, inertia, or lack of alternatives,
          rather than from correctness.
        </t>
        <t>
          This distinction matters here because the current connectivity
          equilibrium is often defended on the grounds that it works or
          is widely used. RFC 5218 reminds us that such arguments
          describe outcomes, not structure.
        </t>
        <t>
          The desire-path framework explains why this happens. When the
          environment changes faster than the model, actors will
          choose survivable routes even if they deform the original
          plan. Over time, these routes harden, not because they are
          ideal, but because they are viable.
        </t>
        <t>
          RFC 5218 gives us permission to say plainly that the
          Internet's current shape may be stable without being
          architecturally resolved.
        </t>
      </section>

      <section>
        <name>RFC 7305: The Resulting Migration of Control to
          Layer 7</name>

        <t>
          RFC 7305 <xref target="RFC7305"/> is best read as an
          observation about where meaningful decisions now occur.
        </t>
        <t>
          As lower-layer assumptions about reachability, symmetry, and
          transparency eroded, applications were forced to compensate.
          Authentication, discovery, mobility, policy, and even routing
          intent increasingly moved upward, until application protocols
          became the only layer with sufficient context to function
          reliably.
        </t>
        <t>
          The practical outcome is that many decisions traditionally
          associated with the network or transport layers are now made
          at layer 7, because only the application can see across NATs,
          firewalls, relays, and policy boundaries.
        </t>
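        <t>
          A minimal sketch of such a layer-7 decision appears below. It
          is illustrative only: the peer and relay endpoints are
          hypothetical, and the fallback policy is invented for this
          example rather than drawn from any deployed protocol.
        </t>
        <sourcecode type="python"><![CDATA[
# Illustrative, non-normative sketch: the application, not
# the network, decides how to reach the peer. All endpoint
# names are hypothetical.
import socket

DIRECT = ("peer.example.net", 443)   # hypothetical peer
RELAY = ("relay.example.net", 443)   # hypothetical relay

def connect_with_fallback(timeout: float = 2.0):
    # Only the application knows that a failed direct
    # attempt should be retried through a relay; neither
    # IP nor TCP can express that intent.
    for addr in (DIRECT, RELAY):
        try:
            return socket.create_connection(addr, timeout)
        except OSError:
            continue
    raise ConnectionError("no usable path to peer")
]]></sourcecode>
        <t>
          The routing decision here, direct path versus relay, is made
          entirely above the transport layer, which is precisely the
          migration of control this section describes.
        </t>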
        <t>
          This is not a design choice so much as a consequence of
          earlier non-decisions. By declining to formally acknowledge
          the withdrawal of ambient end-to-end reachability, the
          architecture implicitly delegated responsibility upward.
        </t>
        <t>
          The Internet still speaks in layers, but it now decides almost
          exclusively at the top.
        </t>
      </section>

      <section>
        <name>Synthesis</name>
        <t>
          Taken together, these RFCs describe a system that has adapted
          successfully while avoiding a full architectural reckoning.
        </t>
        <ul spacing="compact">
          <li>RFC 7288 shows a feature treated as temporary long after
            it became permanent.</li>
          <li>RFC 5218 warns against mistaking survival for
            correctness.</li>
          <li>RFC 7305 documents the resulting migration of control into
            application space.</li>
        </ul>
        <t>
          The desire paths are visible, continuous, and rational. What
          remains unresolved is not whether the Internet has adapted,
          but whether its architecture has yet caught up with its own
          behaviour.
        </t>
      </section>
    </section>

    <section>
      <name>Implications of the Present Equilibrium</name>
      <t>
        The reconstruction above yields both an observable system state
        and a set of limits on what can be inferred from that state. The
        following sections address these together: first by
        characterizing the present connectivity equilibrium as it
        exists, and then by clarifying what the reconstruction
        establishes about that equilibrium.
      </t>

      <section>
        <name>Present Equilibrium</name>
        <t>
          The Internet has settled into an equilibrium defined by these
          accumulated adaptations. This equilibrium is stable under
          current constraints and has enabled continued growth,
          innovation, and deployment. It is not characterized by
          collapse or obvious dysfunction.
        </t>
        <t>
          At the same time, this stability depends on the continued
          effectiveness of compensatory mechanisms. The system operates
          by routing around certain assumptions rather than revisiting
          them directly. As a result, architectural questions
          concerning endpoints, authority, and reachability are
          deferred rather than resolved.
        </t>
        <t>
          From a systems perspective, this equilibrium resembles a
          metastable regime: locally stable and resilient to small
          perturbations, yet dependent on sustained compensation and
          lacking strong restoring forces should underlying conditions
          change.
        </t>
      </section>

      <section>
        <name>What This Reconstruction Establishes</name>
        <t>
          This reconstruction suggests that the present connectivity
          model is not the result of a single decision or omission, but
          of sustained rational deferral under pressure. Existential
          concerns demanded immediate action; secondary misalignments
          were tolerated because they admitted effective local
          compensation.
        </t>
        <t>
          The historical record examined here is consistent with this
          pattern. The adaptations that preserved functionality also
          reshaped the system, making certain architectural questions
          harder to see precisely because they were successfully
          avoided.
        </t>
        <t>
          The presence of a stable equilibrium should not be read as an
          endorsement of that equilibrium. Stability here denotes
          persistence under prevailing constraints, not architectural
          optimality or normative correctness.
        </t>
        <t>
          This document does not establish that the present equilibrium
          is unstable, undesirable, or incorrect. It establishes only
          that the conditions which once justified deferring certain
          architectural questions have changed, making those questions
          newly visible.
        </t>
        <t>
          Nor does it propose remedies, evaluate counterfactual
          architectures, or predict future outcomes. Its contribution
          is to clarify how the Internet arrived at its current state.
        </t>
      </section>
    </section>

    <section>
      <name>IANA Considerations</name>
      <t>This document has no IANA actions.</t>
    </section>

    <section>
      <name>Security Considerations</name>
      <t>
        This document is purely descriptive and retrospective. It does
        not propose new protocols, mechanisms, procedures, or
        operational practices, nor does it recommend changes to
        existing ones.
      </t>
      <t>
        As such, it introduces no new security considerations beyond
        those already present in the systems and practices discussed.
        Any security-relevant mechanisms referenced are included solely
        as historical and architectural context.
      </t>
    </section>
  </middle>

  <back>
    <references>
      <name>Informative References</name>

      <reference anchor="Design">
        <front>
          <title>The Design of Everyday Things</title>
          <author initials="D. A." surname="Norman" fullname="Donald A. Norman"/>
          <date year="2013"/>
        </front>
        <seriesInfo name="ISBN" value="978-0465050659"/>
        <refcontent>
          New York, Basic Books; rev. ed.
        </refcontent>
      </reference>

      <xi:include href="https://xml2rfc.tools.ietf.org/public/rfc/bibxml/reference.RFC.8.xml"/>
      <xi:include href="https://xml2rfc.tools.ietf.org/public/rfc/bibxml/reference.RFC.147.xml"/>
      <xi:include href="https://xml2rfc.tools.ietf.org/public/rfc/bibxml/reference.RFC.169.xml"/>
      <xi:include href="https://xml2rfc.tools.ietf.org/public/rfc/bibxml/reference.RFC.263.xml"/>
      <xi:include href="https://xml2rfc.tools.ietf.org/public/rfc/bibxml/reference.RFC.346.xml"/>
      <xi:include href="https://xml2rfc.tools.ietf.org/public/rfc/bibxml/reference.RFC.392.xml"/>
      <xi:include href="https://xml2rfc.tools.ietf.org/public/rfc/bibxml/reference.RFC.425.xml"/>
      <xi:include href="https://xml2rfc.tools.ietf.org/public/rfc/bibxml/reference.RFC.491.xml"/>
      <xi:include href="https://xml2rfc.tools.ietf.org/public/rfc/bibxml/reference.RFC.706.xml"/>
      <xi:include href="https://xml2rfc.tools.ietf.org/public/rfc/bibxml/reference.RFC.898.xml"/>
      <xi:include href="https://xml2rfc.tools.ietf.org/public/rfc/bibxml/reference.RFC.1029.xml"/>
      <xi:include href="https://xml2rfc.tools.ietf.org/public/rfc/bibxml/reference.RFC.1077.xml"/>
      <xi:include href="https://xml2rfc.tools.ietf.org/public/rfc/bibxml/reference.RFC.1093.xml"/>
      <xi:include href="https://xml2rfc.tools.ietf.org/public/rfc/bibxml/reference.RFC.1135.xml"/>
      <xi:include href="https://xml2rfc.tools.ietf.org/public/rfc/bibxml/reference.RFC.1244.xml"/>
      <xi:include href="https://xml2rfc.tools.ietf.org/public/rfc/bibxml/reference.RFC.1287.xml"/>
      <xi:include href="https://xml2rfc.tools.ietf.org/public/rfc/bibxml/reference.RFC.1627.xml"/>
      <xi:include href="https://xml2rfc.tools.ietf.org/public/rfc/bibxml/reference.RFC.4960.xml"/>
      <xi:include href="https://xml2rfc.tools.ietf.org/public/rfc/bibxml/reference.RFC.5218.xml"/>
      <xi:include href="https://xml2rfc.tools.ietf.org/public/rfc/bibxml/reference.RFC.7285.xml"/>
      <xi:include href="https://xml2rfc.tools.ietf.org/public/rfc/bibxml/reference.RFC.7288.xml"/>
      <xi:include href="https://xml2rfc.tools.ietf.org/public/rfc/bibxml/reference.RFC.7305.xml"/>
      <xi:include href="https://xml2rfc.tools.ietf.org/public/rfc/bibxml/reference.RFC.9000.xml"/>
      <xi:include href="https://bib.ietf.org/public/rfc/bibxml-rfcsubseries/reference.BCP.56.xml"/>
    </references>
  </back>
</rfc>
