IT Service Desk Update

IT Service Desk technicians are on duty. Please connect with us via chat, telephone, or email.

Research Computing

BigGreen features 276 central processing unit (CPU) cores, 552 gigabytes of memory and more than 10 terabytes of storage. Eight NVIDIA Tesla graphics processing units (GPUs) with 448 cores each support massively parallel computation, pushing BigGreen to roughly six teraflops—six trillion floating-point operations per second—of theoretical peak computing power. A variety of scientific software packages are installed and available for use on the cluster, including COMSOL Multiphysics, Mathematica and CLC Genomics Workbench.
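As a rough illustration of where a theoretical-peak figure like "six teraflops" comes from, the sketch below multiplies core counts by clock rate and operations per cycle. The clock rates and FLOPs-per-cycle values are illustrative assumptions, not published BigGreen specifications; only the core and GPU counts come from the description above.

```python
# Back-of-the-envelope theoretical peak for a cluster like BigGreen.
# cpu_ghz, cpu_flops_per_cycle, and gpu_ghz are assumed values for
# illustration only; core counts are taken from the text above.

cpu_cores = 276
cpu_ghz = 2.4              # assumed core clock
cpu_flops_per_cycle = 4    # assumed (e.g., SSE: 2 adds + 2 muls per cycle)

gpus = 8
gpu_cores = 448            # CUDA cores per Tesla GPU (from the text)
gpu_ghz = 1.15             # assumed shader clock
gpu_flops_per_cycle = 1    # assumed conservative scaling

cpu_peak = cpu_cores * cpu_ghz * cpu_flops_per_cycle / 1000   # TFLOPS
gpu_peak = gpus * gpu_cores * gpu_ghz * gpu_flops_per_cycle / 1000
print(f"CPU ~{cpu_peak:.1f} TFLOPS + GPU ~{gpu_peak:.1f} TFLOPS "
      f"= ~{cpu_peak + gpu_peak:.1f} TFLOPS theoretical peak")
```

With these assumed clocks the total lands in the same ballpark as the quoted figure; real peak numbers depend on the exact processor models and whether single- or double-precision operations are counted.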

BigGreen was made possible in part by a National Science Foundation grant that funds “Cyberinfrastructure for Transformational Scientific Discovery in West Virginia and Arkansas (CI-TRAIN),” a partnership among eight higher education institutions in West Virginia and Arkansas.

In 2015, the Campus Bridging team at the Extreme Science and Engineering Discovery Environment (XSEDE) assisted in rebuilding the cluster with the XSEDE-Compatible Basic Cluster (XCBC) toolkit.

The CI-TRAIN project is a partnership of higher education institutions formed to transform the practice of information technology services in support of scientific discovery. Founded by institutions in Arkansas and West Virginia, the partnership builds on common research in nanoscience and the geosciences and leverages complementary expertise.

Institutions of higher education in Arkansas, West Virginia and Oklahoma have partnered to build advanced cyberinfrastructure that is advancing science in several domains and transforming the practice of information technology services.

For a detailed schedule of CI-TRAIN seminars and more information, visit CI-Train.org.

Information Technology staff support faculty, students and researchers across many areas of research computing, including:

  • Grant proposal preparation
  • Data management plans
  • Hardware and software acquisition specifications
  • Vendor communication and pricing
  • Hardware setup and software installation
  • Server and storage hosting
  • Virtual machine hosting
  • Performance tuning and monitoring
  • Linux and Windows systems administration and configuration

Marshall University has joined Internet2®, the advanced networking consortium. Through its membership, the university’s students, faculty and staff will have access to Internet2’s premier, ultrafast nationwide network which connects research and educational institutions in the U.S. and interconnects with international research networks worldwide.

Marshall’s connection to the Internet2 Network is made possible through a partnership with OARnet, Ohio’s statewide research and education network, and Merit, Michigan’s statewide research and education network and an Internet2 Connector organization.

Internet2 is the foremost U.S. advanced networking consortium. Led by the research and education community since 1996, Internet2 promotes the missions of its members by providing both leading-edge network capabilities and unique partnership opportunities that together facilitate the development, deployment and use of revolutionary Internet technologies. By bringing research and academia together with technology leaders from industry, government and the international community, Internet2 promotes collaboration and innovation that has a fundamental impact on the future of the Internet.

For more information about Internet2, please visit the Internet2 website.

Marshall University’s Campus Network (MUnet) is a state-of-the-art 10 Gb Ethernet backbone network linking all buildings on the Huntington Campus, with WAN links to our regional campus, centers, and medical clinics. MUnet supports over 11,000 switched gigabit Ethernet ports and nearly 3,500 WiFi 5 (802.11ac) wireless access points. The Huntington Campus is connected to the South Charleston Campus by a 1 Gb Transparent LAN Service (TLS) circuit provided by Frontier Communications (formerly Verizon) and by a diverse-path 1 Gb Segra MPLS circuit. The Mid Ohio Valley Center campus in Point Pleasant is linked to the Huntington Campus by a 100 Mb Frontier Communications TLS circuit. The Medical Education Building, located on the Spring Valley VA Medical Center campus, is connected by a 1 Gb Frontier Communications TLS circuit. Various smaller learning centers, such as the Larry Joe Harless Center in Gilbert, and clinical facilities are connected via 10 Mb Frontier Communications TLS or Segra MPLS circuits.

The Huntington Campus network is linked by a university-owned metro fiber point-to-point service to the Robert C. Byrd Center for Flexible Manufacturing, the MU Visual Arts Center, and the Department of Dietetics facilities located in downtown Huntington.

An FCC grant-funded project completed in 2011 extended the Huntington metro fiber network to St. Mary’s Medical Center, the St. Mary’s Medical Center Education Center, Cabell Huntington Hospital, the Marshall University Medical Center, the JCESOM Fairfield campus, and the Marshall University Robert C. Byrd Biotechnology Science Center, joining them to MUnet with two redundant 10 Gb Ethernet rings. These metro Ethernet rings provide the bandwidth and redundancy needed to enable next-generation medical and collaboration technologies.

The MUnet campus networks are connected via dual diverse-path 10 Gb connections through our Internet service provider (ISP), OARnet. Marshall University is also a member of Internet2 and is connected to Internet2 with 7 Gb of service. A total of 3 Gb of commodity Internet service is currently provided to MUnet subscribers. This bandwidth and redundancy provide the reliability and services needed to support current campus initiatives.

All MUnet services provide full Quality of Service (QoS) on all network ports, plus multicasting, in support of voice, data, video, and other real-time applications. All services are switched and operate at full wire speed.
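QoS of this kind relies on packets carrying a traffic-class mark that switches can act on. As a minimal sketch, an application can request a class by setting the DSCP bits in the IP header; whether the network honors the mark is entirely a matter of switch and router policy, and the class value below is just the conventional one for voice.

```python
import socket

# Mark a UDP socket's traffic as Expedited Forwarding (EF), the DSCP
# class conventionally used for voice. This only sets IP header bits;
# it does not by itself guarantee priority treatment on the network.
EF_DSCP = 46              # Expedited Forwarding per-hop behavior
tos = EF_DSCP << 2        # DSCP occupies the upper 6 bits of the TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
marked = sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)
print(f"TOS byte now 0x{marked:02X} (DSCP {marked >> 2})")
sock.close()
```

Any packets subsequently sent on the socket carry the mark, which QoS-enabled switch ports can map to a priority queue.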

MUnet supports full Voice over IP (VoIP) telephony services with unified communications and voice mail for over 4,000 extensions, plus a limited number of fax and other analog lines via analog gateways.

MUnet central video conferencing services support full High Definition (HD) conferencing at 720p or 1080p. All HD endpoints are capable of a four-way video call. Support for video calls with more than four concurrent endpoints is provided by a 20-port Multipoint Control Unit supporting full HD.

Web conferencing for virtual classrooms is provided by Microsoft Teams. This service provides a full virtual classroom experience with student breakout rooms, lecture recording and archiving, and polling, questions, and quizzing during on-demand archived sessions. Rooms are also available for campus meetings and other event functions.

Marshall University Information Technology, in collaboration with researchers and scientists, is building an optimized network for scientific research and data transfer, funded by an NSF CC*DNI award.

The objectives of this project are:

    • Improve research data flows and remove data-flow constraints.
      • Aggregate network traffic from other network domains.
      • Improve high-speed access to online scientific applications and data generated at MU, other institutions, and federal agencies.
      • Deploy Cisco 10 G switches in endpoint buildings to provide 10 G access to researchers’ workstations.
      • Deploy Cisco switches with 40 G capability to provide a 40 G backbone supporting the 10 G endpoint switches.
      • Provide a dedicated, independent 1 GE uplink to Internet2 and OARnet.
    • Implement a Science DMZ within the campus network infrastructure to allow trusted data to travel outside the firewall.
      • Find bottlenecks and optimize the network for high-volume transfer of datasets.
      • Utilize perfSONAR for performance measurement.
      • Implement switches supporting software-defined networking (SDN).
        • Support OpenFlow-based architectures, experimentally and in operations.
        • Install a Data Transfer Node.
    • Improve identity management and authentication.
      • Fully implement InCommon over the next twelve to eighteen months.
      • Implement CILogon.
      • Implement eduroam.
    • Provide training for key IT staff members on new CI tools (perfSONAR, InCommon, CILogon, eduroam, etc.).
    • Continue IPv6 implementation across all university computing facilities.
    • Deploy on premises only those services and software that are a good fit for local hosting, while extending other services to other locations (e.g., Big Data analytics locally, REDCap remotely, or local HPC as a staging resource before moving to XSEDE).
    • Promote and demonstrate cyberinfrastructure initiatives such as XSEDE to the campus research community.
    • Develop closer partnerships with other institutions’ IT research computing teams and coordinate hosting meetings and workshops.
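The data-flow objectives above are driven by research dataset sizes outgrowing commodity links. As a back-of-the-envelope illustration (the dataset size and link-utilization figure are assumptions, not measured values), the sketch below estimates how long moving one terabyte takes at the link speeds mentioned on this page:

```python
# Rough transfer-time estimate for research datasets. The 80% average
# utilization is an illustrative assumption; real throughput depends on
# protocol tuning, which is exactly what tools like perfSONAR measure.

def transfer_hours(size_tb, link_gbps, efficiency=0.8):
    """Approximate wall-clock hours to move size_tb terabytes over a
    link running at link_gbps gigabits/s at the given utilization."""
    bits = size_tb * 1e12 * 8
    return bits / (link_gbps * 1e9 * efficiency) / 3600

for gbps in (0.1, 1, 10, 40):   # 100 Mb, 1 Gb, 10 Gb, 40 Gb links
    print(f"{gbps:>5} Gb/s: {transfer_hours(1, gbps):7.2f} h per TB")
```

The spread between the 100 Mb circuits serving outlying facilities and a 40 G backbone is roughly 400x, which is why dedicated high-speed paths and a Science DMZ matter for moving large datasets.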


News


Research Computing Workshop: The Internet of Things (December 2, 2015). The Research Computing Advisory Council is pleased to invite you to a workshop titled The Internet of Things: taxonomy, research opportunities and challenges on Thursday, December 3rd, 2015, from 10:00 a.m. to 12:00 p.m. The presentation is facilitated by Ed Aractingi and ...

Research Computing Advisory Council Workshop (October 29, 2015). The Research Computing Advisory Council is pleased to host a workshop titled Volumetric visualization/segmentation of MRI/CAT scans with Avizo (Part 1) on November 5th, 2015, from 10:00 a.m. to 12:00 p.m. The presentation is facilitated by Dr. Jack Smith and hosted at the ...

Marshall University receives $500,000 to support high-performance networking for research (September 15, 2015). HUNTINGTON, W.Va. – Marshall University has been awarded nearly $500,000 by the National Science Foundation (NSF) to improve campus-wide computer networking in support of research. The collaborative grant was received by a team that includes Dr. Jan I. Fox, senior ...

Research Computing Advisory Committee


Dr. John Maher
Dr. Charles Somerville
Dr. Alfred Cecchetti
Dr. Anthony Szwilski
Dr. Michael W. Prewitt
Dr. Jack Smith
Dr. Wael Zatar