Network Infrastructure


This article focuses on the service infrastructure for the first generation of WebTV/MSN TV. Information on the ISP itself and on the MSN TV 2 service infrastructure is not covered for the time being.

Overview

At its core, the WebTV/MSN TV network had two sets of machines that made up the service: frontend and backend servers. Frontend servers were essentially protocol servers: the servers a WebTV/MSN TV box connected to directly after dialing into the ISP, with the box and frontend communicating over the WTVP protocol (a minimal sketch of such an exchange follows the list below). Backend servers were data servers storing persistent data for the various WebTV/MSN TV services; it is assumed that any set of frontend servers had corresponding backend servers to complement it. In the original architecture, the following servers also operated for other services (source: http://web.archive.org/web/20190915174055/http://www.owlriver.com/casestudy/msdetails.html#_Toc462642186):

  • "4 Customer databases: 1 central read/write database, 2 load shedding read-only databases (can be fail-over read/write if needed), 1 billing read-only database."
  • "3 Electronic Program Guide servers (Sun E450s with lots of disk)"
  • "10 Mail Notify servers"
  • "8 Mail Gateways: 2 internal transit, 8 incoming, 2 outgoing mail hosts"
  • "7 logging hosts: 1 harvester (aggregates all logs), 2 servers which make the aggregated data available to various tools, 4 machines which run various other monitoring services."
  • "4 administrative hosts: remote console service, network boot server (golden master machine) and general administrative tools."
  • "2 Radius servers used by external ISP/IAPs to authorize customer access" (XXX: Radius as in this?)
  • "3 DNS name servers"
  • "2 Ad servers" (according to the internal Microsoft document, these were "the only NT boxes" in the entire service at the time)
  • "3 FlashROM servers"
  • "2 machines used for running backups"
  • "2 Scriptless servers to configure new machines." ("Scriptless" in this case refers to the pre-registration servers)

Server infrastructure was made up primarily of SPARC machines running Solaris, with BSDI servers and supposedly even Macintosh hardware involved at one point. After Microsoft bought WebTV Networks, it tried to sell the server operators on migrating everything to NT-based solutions. Eventually a compromise was reached: Dell hardware was purchased and used in the infrastructure, but it ran x86 Solaris rather than anything based on Windows or Microsoft tools. As far as is known, this did not replace any SPARC hardware, at least for a time.

The backend for WebTV/MSN TV's newsgroup service ran a customized version of the INN (InterNetNews) news server on Solaris servers. The mail service at one point ran customized sendmail software, also on Solaris. The mail servers were eventually upgraded, but to what exactly isn't clear: the LinkedIn profile of a former Microsoft employee claims the WebTV/MSN TV mail system was upgraded to use Outlook 98, but this claim can't be validated as of now.

Service Groups

Service groups were WebTV/MSN TV's way of clustering service hardware, at least in theory. A service group was a set of frontend and backend servers, with the required inter-dependencies in place, that delivered services to subscribers. While one might assume WebTV/MSN TV ran a separate service group for each individual service, in production frontend servers at least were pooled to reduce costs: more than one service was hosted on each frontend server, and machines were shared whenever isolation between service groups wasn't required (see the sketch after the list below). WNI's rules for clustering were as follows:

  • "Persistent data servers (the back-ends) tend to define how you cluster the front-end machines."
  • "Grouping will be influenced by costs."
  • "Service group size is defined by the exposure you are willing to face when a back-end machine dies."

Load Balancing

Load balancing for the core of the WebTV/MSN TV service was handled in two ways. The first was the headwaiter service, which served each client a round-robin list of servers for every WebTV/MSN TV service: if the first server for a service didn't respond, the client was expected to try the following servers in the list, if present. The second method, used in the original infrastructure, was to have one server for each service act as a "virtual host" through Alteon load-balancing technology. This virtual host would usually be the first server in a service list sent by the headwaiter and would hand the connection over to an available server.
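
The client-side half of the first method reduces to walking the list in order; a minimal sketch, assuming the headwaiter's service list can be represented as ordered (host, port) pairs:

  import socket

  def connect_to_service(server_list):
      """Try each (host, port) entry in order; return the first live connection."""
      last_error = None
      for host, port in server_list:
          try:
              return socket.create_connection((host, port), timeout=30)
          except OSError as err:
              last_error = err          # server down or unreachable: try the next entry
      raise ConnectionError("no server in the service list responded") from last_error

  # e.g., with the Alteon "virtual host" as the first (hypothetical) entry:
  # sock = connect_to_service([("wtv-mail-vip", 1615), ("wtv-mail-2", 1615)])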

When a client connects to the service without any previously stored service entries for the pre-registration or headwaiter servers, it attempts to connect to IP 10.0.0.1 on port 1615 after successfully dialing into the ISP in order to reach a pre-registration server. Assuming the client isn't dialing into a local POP, this IP address is resolved internally by the network's routers, which route the client to an available pre-registration server.
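
The bootstrap rule itself fits in a few lines; a sketch, with the format of the stored entries assumed:

  WELL_KNOWN_PREREG = ("10.0.0.1", 1615)   # hardwired fallback, resolved by the routers

  def pick_initial_server(stored_entries):
      """Return the first stored service entry, or fall back to the hardwired address."""
      return stored_entries[0] if stored_entries else WELL_KNOWN_PREREG

  # A box with nothing saved would then connect via, e.g.,
  # connect_to_service([pick_initial_server([])]) from the sketch above.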

Miscellaneous

To guard against network attacks, WebTV/MSN TV added packet filtering at the network level on its servers. The entire service infrastructure was also designed to require as little direct maintenance as possible: once the hardware was racked and wired up, all maintenance and rollouts were expected to be done remotely.

When it came to administration and monitoring of service traffic, WebTV/MSN TV had various specialized tools. For example, a tool named "NETEXEC" allowed server-side commands to be executed against whatever criteria a query specified, e.g., running a set of commands against a certain kind of frontend server or against certain machines in a particular service group (a hypothetical invocation is sketched at the end of this section).

WebTV/MSN TV also relied heavily on measurement tools to get a clear picture of how its services were functioning, and built visualization tools to better understand the resulting data. One of these tools, dubbed "Cricket", was released to the public in 1999 and open-sourced the following year. It has not received any major updates since early 2004, and its SourceForge site has not been updated since mid-2005. Even so, the project maintained a small but loyal community of users and received patches from contributors well into 2014; in 2015, Jeff R. Allen, the tool's original author, ported its repository to GitHub, citing waning trust in SourceForge and a sense of duty to preserve the tool for as long as possible. It has shown no activity since.
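
Since NETEXEC's actual query syntax and transport are undocumented, the sketch below only illustrates the idea: fan a command out to every server matching a set of attribute criteria. The inventory, attribute names, and use of ssh are all stand-ins.

  import subprocess

  SERVERS = [
      {"host": "fe-mail-1", "role": "frontend", "group": "mail"},
      {"host": "be-mail-1", "role": "backend",  "group": "mail"},
      {"host": "fe-news-1", "role": "frontend", "group": "news"},
  ]

  def netexec(command, **criteria):
      """Run `command` on every server whose attributes match all the criteria."""
      for server in SERVERS:
          if all(server.get(key) == value for key, value in criteria.items()):
              subprocess.run(["ssh", server["host"], command], check=False)

  # e.g., restart a (hypothetical) mail daemon on every mail frontend:
  # netexec("/etc/init.d/wtv-mail restart", role="frontend", group="mail")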