Real-time, self-managing data flows -- Unidata will foster and support the existence of real-time data flows that encompass a broad range of Earth-system phenomena, can be accessed with ease by all constituents, and are self-managing with respect to changing contents and user needs.
--A goal of Unidata 2008: Shaping the Future of Data Use in the Geosciences
Highlights of the 6.2.1 release:
Highlights of the 6.3.0 release:
A special LDM training workshop aimed at Antarctic researchers involved in the creation of the IDD-Antarctic was conducted on February 3-4 in the COMET classroom.
The following briefly describes the setup we have been moving toward for our top-level IDD relay nodes (idd.unidata.ucar.edu and thelma.ucar.edu).
The developers involved in these efforts are:
John Stokes    | cluster design and implementation
Steve Emmerson | LDM-6 development
Mike Schmidt   | cluster design and system administration
Steve Chiswell | IDD design and monitoring
Tom Yoksas     | configuration and stress testing
In addition to atm.geo.nsf.gov, the UPC operates the top-level IDD relay nodes idd.unidata.ucar.edu and thelma.ucar.edu. Rather than being single machines, idd.unidata and thelma.ucar are part of a cluster composed of directors (machines that direct IDD feed requests to other machines) and data servers (machines that receive those requests from the directors and service them). We use the IP Virtual Server (IPVS) facility available in current versions of Linux to forward feed requests from the directors to the data servers.
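As a rough illustration of this forwarding, an IPVS table on a director could be built with ipvsadm along the following lines. The addresses are hypothetical placeholders, and the exact options in our setup may differ:

    # Define a virtual service for LDM traffic (port 388) on the director's
    # public address, scheduling new requests by connection count.
    ipvsadm -A -t 192.0.2.10:388 -s lc

    # Register each data server as a direct-routing ("gatewaying") backend.
    ipvsadm -a -t 192.0.2.10:388 -r 192.0.2.21:388 -g
    ipvsadm -a -t 192.0.2.10:388 -r 192.0.2.22:388 -g
    ipvsadm -a -t 192.0.2.10:388 -r 192.0.2.23:388 -g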
Our cluster currently uses Fedora Core 3 64-bit Linux running on a set of identically configured Sun SunFire V20Z 1U rackmount servers:
Sun SunFire V20Z
We will be replacing the V20Z directors with Dell PowerEdge 2850 rackmount servers in the near future. The 2850s are configured as follows:
Dell PowerEdge 2850
The SunFire V20Z machines have proved to be stellar performers for IDD work when running Fedora Core 3 64-bit Linux. We tested three operating systems side by side before settling on FC3: Fedora Core 3, FreeBSD, and Solaris x86 10.
All three operating systems are 64-bit. In our testing FC3 emerged as the clear winner; FreeBSD was second; and Solaris x86 10 was a distant third. As I understand it, RedHat Enterprise WS 4 is FC3 with full RH support.
Here is a schematic view of what idd.unidata.ucar.edu and thelma.ucar.edu currently look like:
     |<--------------- directors ---------------->|

       idd.unidata                  thelma.ucar
             |   ^                        |   ^
             V   |                        V   |
     +---------------+            +---------------+
     |  LDM  | IPVS  |            |  LDM  | IPVS  |
     +---------------+            +---------------+
             |                            |
             +-------------+--------------+
                           |
         +-----------------+-----------------+
         |                 |                 |
         V                 V                 V
  +--------------+  +--------------+  +--------------+
  | 'uni2'  LDM  |  | 'uni3'  LDM  |  | 'uni4'  LDM  |
  +--------------+  +--------------+  +--------------+

  |<----------------- data servers ---------------->|
The top level of the schematic shows the two director machines, idd.unidata.ucar.edu and thelma.ucar.edu (thelma used to be a SunFire 480R SPARC III box). Both machines run IPVS along with an LDM 6.3.0 configured on a second interface (IP address). The IPVS director software forwards port 388 requests received on one interface, which is configured as idd.unidata.ucar.edu on one machine and as thelma.ucar.edu on the other. At present, the set of data-server backends is the same for both directors.
When an IDD feed request is received by idd.unidata.ucar.edu or thelma.ucar.edu, the IPVS software relays it to one of the data servers. The data servers are also configured internally with the idd.unidata.ucar.edu and thelma.ucar.edu addresses, but they do not ARP for them, so they are not seen by the outside world or its routers. The IPVS software keeps track of how many connections each data server has and load-levels by forwarding new requests based on those connection counts (we will change this metric as we learn more about the setup). The data servers are all configured identically: same RAM, same LDM queue size (currently 8 GB), same ldmd.conf contents, etc.
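For reference, the per-server connection counts that drive this load leveling can be inspected on a director with ipvsadm's listing option (a sketch, using the same hypothetical addresses as above):

    # List the virtual server table with numeric addresses; the output
    # includes each real server's active and inactive connection counts.
    ipvsadm -L -n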
Connections from a downstream machine will always be sent to the same data server as long as that machine's most recent connection ended less than one minute ago. This allows downstream LDMs to send an "are you alive" query to a server from which they have not received data in a while. Once a downstream host has made no IDD request connections for one minute, its next request will be forwarded to the data server that is least loaded.
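This one-minute stickiness maps onto IPVS connection persistence. A minimal sketch, again with hypothetical addresses, of giving the virtual service a 60-second persistence timeout:

    # Edit the existing virtual service so a downstream host stays pinned to
    # the same data server until 60 seconds after its last connection.
    ipvsadm -E -t 192.0.2.10:388 -s lc -p 60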
This design allows us to take down any of the data servers for whatever maintenance is needed (hardware, software, etc.) whenever we need to make changes. When a machine goes down, the IPVS server is informed that the server is no longer available, and all downstream feed requests are sent to the data servers that remain up. On top of that, thelma.ucar.edu and idd.unidata.ucar.edu are on different LANs and may soon be located in different parts of the UCAR campus.
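For example, pulling one data server out of rotation for maintenance and later restoring it could look roughly like this (hypothetical addresses as before):

    # Stop directing new feed requests to the data server being serviced.
    ipvsadm -d -t 192.0.2.10:388 -r 192.0.2.22

    # ...perform the maintenance, then return the machine to the pool.
    ipvsadm -a -t 192.0.2.10:388 -r 192.0.2.22:388 -g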
LDM 6.3.0 was developed to allow running the LDM on a particular interface (IP address). We use this feature to run an LDM on the same box that runs the IPVS director: IPVS listens on one interface while the LDM runs on another. The alternate interface does not have to be a separate Ethernet device; it can be a virtual interface configured in software. The ability to run LDMs on specific interfaces allows us to run them as either data collectors or as additional data servers on the same box as the director. By data collector, I mean that the LDMs on the director machines have multiple ldmd.conf requests that bring data to the cluster (e.g., CONDUIT from atm and/or UIUC, NEXRAD2 from Purdue, HDS from here, IDS|DDPLUS from there, etc.). The data server LDMs request data redundantly from the director LDMs. We currently do not have redundancy for the directors, but we will be adding that in the future.
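To make the collector/data-server split concrete, here is a simplified sketch of the two ldmd.conf flavors. The upstream host names are placeholders and the feed list is only illustrative; the configuration of the alternate interface itself is not shown:

    # ldmd.conf on a director's "data collector" LDM: request the feeds
    # that populate the cluster (hosts shown are placeholders).
    request CONDUIT    ".*"  conduit-upstream.example.edu
    request NEXRAD2    ".*"  nexrad2-upstream.example.edu
    request HDS        ".*"  hds-upstream.example.edu
    request IDS|DDPLUS ".*"  ids-upstream.example.edu

    # ldmd.conf on each data server: request everything redundantly from
    # the collector LDMs running on both directors.
    request ANY        ".*"  collector-a.example.edu
    request ANY        ".*"  collector-b.example.edu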
We are just getting our feet wet with this cluster setup and will be modifying configurations as we learn more about how well the system works. In stress tests run here at the UPC, we were able to demonstrate that one SunFire V20Z could handle 50% more downstream connections than the SunFire 480R thelma.ucar.edu without introducing latency. With three data servers, we believe that we could now field literally every IDD feed request in the world if we had to (the ultimate failover site). If the load on the data servers ever becomes too high, all we need do is add one or more boxes to the mix. The ultimate limiting factor in this setup will be the routers and network bandwidth here at UCAR.
This cluster relays an average of 120 Mbps (~1.2 TB/day) to some 220 downstream connections. Peak rates can exceed 250 Mbps.
The NLDM network successfully relayed seven IDD data feeds to six geographic locations for several months, demonstrating that NNTP and the INN implementation constitute a feasible alternative for data relay. INN's additional features make it an attractive candidate for this purpose. These results were presented at a Unidata seminar in November; the results, along with an in-depth discussion of the benefits and costs of using this technology, are provided in an internal white paper by Anne, Using INN for Data Relay.
As a result of this work, Steve Emmerson and Anne were tasked with exploring the form and features of an ideal data relay system. Although still in draft form, their internal white paper has evolved to survey the possible benefits of other relevant technologies as well, and to make recommendations about where Unidata should be in five years with respect to data delivery. The paper also outlines how the IDD could be transitioned from one protocol to another.
In order to correctly gauge the real-time status of the IDD, it is important for all participating sites to keep their clocks accurate (easily done with Network Time Protocol software and servers).
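As a minimal sketch, an NTP client configuration along the following lines keeps a site's clock synchronized; the pool server names here are generic examples, and sites would typically point at their own institution's time servers instead:

    # /etc/ntp.conf -- keep the system clock accurate for IDD latency statistics
    server 0.pool.ntp.org iburst
    server 1.pool.ntp.org iburst
    server 2.pool.ntp.org iburst
    driftfile /var/lib/ntp/drift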
The NWS transition from the CRAFT distribution network (built by OU) to their own Internet2-based network was completed in early to mid-October.
The EFData-based broadcast is scheduled to be turned off at the end of the day on March 31.