REMINDER: There will be an extended downtime for AuN the week of January 29th. Please read this entire message; it contains information that could affect your research and class schedules, including a planned extended downtime for BlueM.

There are two big HPC news updates to share. As we have discussed in the past, BlueM is being moved from NREL back to campus, and we are in the process of purchasing a new HPC platform. As of yesterday we finally have a tentative schedule for the move of BlueM, and we have received best and final offers from vendors for the new platform. We expect to make a decision on the new machine the first week of January.

The move of BlueM has been in the works for over a year. There was a recent hiccup in the plans: IBM is dropping support for the Blue Gene line. Because of this lack of support and a number of related issues, we have decided to retire Mc2. The major users of Mc2 were notified of this decision some time ago. Since Mc2 and AuN share a file system, scratch data created on Mc2 will remain available even after it is retired. Home directories will also be available for a short time. The new machine will double the compute capability of Mc2.

We want to thank NREL for hosting BlueM. We are moving because they need the space, power, and cooling capacity for a new machine they are purchasing. They are having a machine-room shutdown in February to prepare for the new machine and asked us to be out by that time.

Obviously, the move of BlueM will require an extended shutdown. We had hoped to do this between semesters, but we simply could not get everything orchestrated in that window. We are planning a shutdown and move of BlueM the week of January 29th. This is a much less complicated move than the original move to NREL, so we hope to be back up the following week. Sorry for the inconvenience. As always, Mio is available if you need compute resources for classes; please let us know of your needs.

I am very excited about our new machine. We will share details when we can. What I can tell you now is that it will have a compute capability of 200 Tflops, up to 384 Gbytes of memory per node, a new file system, and it will be water cooled.