[I2-NEWS] Scientists Reveal At SC11 Conference Advancements in 100 Gbps Networks Needed For Next Generation Research and Discovery
- From: Todd Sedmak <>
- To:
- Subject: [I2-NEWS] Scientists Reveal At SC11 Conference Advancements in 100 Gbps Networks Needed For Next Generation Research and Discovery
- Date: Thu, 17 Nov 2011 11:10:19 -0500 (EST)
Scientists Reveal At SC11 Conference Advancements in 100 Gbps Networks Needed
For Next Generation Research and Discovery
Media contact: Joe Mambretti, (312) 503-0735 or
SEATTLE, Nov. 17, 2011 -- Today at the 24th annual SC Conference (SC11) -- the
foremost international high performance computing conference -- researchers
demonstrated advancements in developing 100 Gbps networks within the U.S. and
internationally that are necessary for the next generation of research and
discovery.
“Next generation science increasingly requires investigations based on
extremely large volumes of data that must be transported across wide
distances with exceptionally high performance,” said Bill Fink, advanced
technology researcher at the NASA Goddard Space Flight Center. “For example,
NASA is developing a next generation network platform to support a wide range
of strategic research projects including in the areas of advanced networking,
climate science, earth science and astrophysics.”
Many new high-performance data intensive research investigations, called
petascale science, will be increasingly applied to discovery domains,
including weather and climate simulation, nuclear simulations, cosmology,
quantum chemistry, lower-level organism brain simulation, and fusion science.
Current networks provisioned for 10 Gbps do not provide sufficient rate, time
and volume performance for many emerging applications. Petascale science
involves not only the creation of massive datasets generated at
supercomputer, instrumentation, and experimental facilities, but also
subsequent analysis of that data by a user community that may be distributed
across many laboratories and universities, across the U.S. and across the
world. Exceptionally time-efficient data flows for petascale science over
wide areas are persistent requirements for many advanced research
disciplines. These projects are developing techniques to optimize WAN file
transfer at 100 Gbps in part by designing data transfer utilities, protocols,
and techniques that enable extremely high sustained end-to-end flows,
including disk-to-disk and memory-to-memory.
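As a rough illustration of the memory-to-memory testing such efforts depend on, the sketch below is a minimal, hypothetical throughput tester, not one of the project's actual utilities; the host names, port numbers, buffer sizes, and transfer sizes are illustrative assumptions. One key sizing consideration for flows like these is that socket buffers must approach the bandwidth-delay product of the path, roughly 100 Gbps x 0.06 s = 750 MB for a cross-country round-trip time of about 60 ms.

#!/usr/bin/env python3
# Minimal memory-to-memory TCP throughput sketch (illustrative only; not one of
# the SC11 demonstration utilities). Run one instance as the receiver and one
# as the sender; each counts bytes and reports the sustained rate in Gbps.
#
# Usage (hypothetical host/port):
#   python3 memtest.py recv 0.0.0.0 5201
#   python3 memtest.py send <receiver-host> 5201 --gbytes 16
import argparse, socket, time

CHUNK = 4 * 1024 * 1024          # 4 MiB buffer reused from memory (no disk I/O)

def report(nbytes, seconds):
    gbps = nbytes * 8 / seconds / 1e9
    print(f"{nbytes} bytes in {seconds:.2f} s = {gbps:.2f} Gbps")

def recv(host, port):
    with socket.create_server((host, port)) as srv:
        conn, _ = srv.accept()
        with conn:
            total, start = 0, time.monotonic()
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                total += len(data)
            report(total, time.monotonic() - start)

def send(host, port, gbytes):
    payload = bytearray(CHUNK)   # in-memory payload; a disk-to-disk test would read files here
    target = int(gbytes * 1e9)
    with socket.create_connection((host, port)) as sock:
        # Request large socket buffers; the OS caps these at its tuning limits,
        # which must themselves be raised toward the bandwidth-delay product.
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 64 * 1024 * 1024)
        total, start = 0, time.monotonic()
        while total < target:
            sock.sendall(payload)
            total += len(payload)
    report(total, time.monotonic() - start)

if __name__ == "__main__":
    p = argparse.ArgumentParser()
    p.add_argument("mode", choices=["recv", "send"])
    p.add_argument("host")
    p.add_argument("port", type=int)
    p.add_argument("--gbytes", type=float, default=8.0)
    args = p.parse_args()
    if args.mode == "recv":
        recv(args.host, args.port)
    else:
        send(args.host, args.port, args.gbytes)

Production data-transfer utilities typically go further, for example by striping a transfer across multiple parallel TCP streams and across disk arrays; the sketch above measures only a single memory-to-memory stream.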
The NASA Center for Climate Simulation is also using high performance
computing to flow hundreds of terabytes of high-resolution climate forecasts
from its Goddard Institute for Space Studies as well as its Global Modeling
and Assimilation Office groups. These high-resolution climate forecasts will
be major contributors to the next United Nations Intergovernmental Panel on
Climate Change Assessment Report, which will be published in 2013.
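To put such volumes in perspective, a back-of-the-envelope calculation (with an assumed, illustrative dataset size) shows why sustained 100 Gbps paths matter for this kind of data movement:

# Rough transfer-time estimate for a bulk climate-data move; 300 TB is an
# assumed, illustrative figure standing in for "hundreds of terabytes".
dataset_tb = 300
for gbps in (10, 100):
    seconds = dataset_tb * 1e12 * 8 / (gbps * 1e9)
    print(f"{dataset_tb} TB at a sustained {gbps} Gbps: {seconds / 3600:.1f} hours")
# Output: about 66.7 hours at 10 Gbps versus about 6.7 hours at 100 Gbps,
# assuming the full line rate can be sustained end to end.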
NASA also has established a partnership with the International Center for
Advanced Internet Research (iCAIR) at Northwestern University and the
Laboratory for Advanced Computing (LAC) at the University of Chicago to
investigate novel architecture, technology (including new protocols), and
techniques for data intensive scientific investigation based on 100 Gbps
capabilities. As part of this research, iCAIR and LAC are conducting
experimental investigations, using novel cloud technology for data intensive
science on a national Open Science Data Cloud testbed, which is supported by
the Open Cloud Consortium.
NASA has had an ongoing collaborative relationship with the MidAtlantic
Crossroads Exchange (MAX) for over 10 years, experimenting with and implementing new
network technologies. The MAX also provides NASA Goddard Space Flight
Center's (GSFC) High End Computer Networking (HECN) group with connectivity
to high-speed research and educational networks.
Background information:
The SC11 network demonstrations were provisioned on a 100 Gbps testbed
extending from the NASA Goddard Space Flight Center in Greenbelt, Maryland,
through the MidAtlantic Crossroads Exchange (MAX) near Washington, DC, to the
StarLight International/National Communications Exchange in Chicago to the
SC11 network (SCinet) at the conference center in Seattle. In Chicago, the
Metropolitan Research and Education Network (MREN) is supporting the
demonstrations with StarWave, a multi-100 Gbps exchange facility. Internet2
supports the 100 Gbps path between MAX and StarLight, and the Department of
Energy's Energy Sciences Network (ESnet) supports the 100 Gbps path between
StarLight and Seattle.
These demonstrations were established in partnership with multiple
corporations, including Ciena, Alcatel, Fujitsu, Brocade, Force10, and
Juniper, which provided the wide range of advanced 100 Gbps technology used
at the SC11 conference to establish the conference’s 100 Gbps network and to
support the national 100 Gbps testbed. The conference demonstrations were
part of the SCinet Research Sandbox, an activity that supports advanced
networking collaborations, including a partnership with the SC Technical
Program.
About NASA Goddard Space Flight Center
NASA's Goddard Space Flight Center is home to the nation's largest
organization of combined scientists, engineers and technologists that build
spacecraft, instruments and new technology to study the Earth, the sun, our
solar system, and the universe. (www.nasa.gov/centers/goddard)
About the International Center for Advanced Internet Research (iCAIR) at
Northwestern University
The International Center for Advanced Internet Research (iCAIR) at
Northwestern University accelerates leading-edge innovation and enhanced
global communications through advanced technologies, in partnership with
numerous international, community, and national partners. iCAIR partners with
EVL at the University of Illinois at Chicago, Argonne National Laboratory, and
Calit2/UCSD, in collaboration with Canada's CANARIE and the Netherlands'
SURFnet, to manage and grow the StarLight optical network exchange.
(www.icair.org)
About StarLight
StarLight is the world's most advanced national and international
communications exchange facility. StarLight provides advanced networking
services and technologies that are optimized for high-performance,
large-scale metro, regional, national and global applications. With funding
from the National Science Foundation (NSF), StarLight was designed and
developed by researchers, for researchers. StarLight is managed by the
Electronic Visualization Laboratory (EVL) at the University of Illinois at
Chicago, the International Center for Advanced Internet Research (iCAIR) at
Northwestern University, the Mathematics and Computer Science Division at
Argonne National Laboratory, and Calit2 at University of California, San
Diego, in partnership with Canada's CANARIE national networking organization
and The Netherlands' SURFnet. (www.startap.net/starlight)
About the MidAtlantic Crossroads Exchange (MAX)
The MidAtlantic Crossroads exchange (MAX) is a regional optical network
consortium founded by Georgetown University, George Washington University,
the University of Maryland, and Virginia Tech. MAX serves Maryland, Virginia,
and the District of Columbia region with a suite of advanced networking
service capabilities, including advanced optical-based networking facilities
in McLean, VA and College Park, MD. MAX is implementing 100G services,
including 100G interfaces to interconnect with major national R&E networks
and the NGIX-E exchange. (www.maxgigapop.net)
About the Metropolitan Research and Education Network (MREN)
The Metropolitan Research and Education Network (MREN), an advanced research
and education (R&E) network, provides services among seven states in the upper
Midwest, including the management of a metro-area optical networking facility
located at the StarLight International/National Communications Exchange
Facility. The MREN facility focuses exclusively on providing service and
infrastructure support for large-scale data-intensive R&E activities.
(www.mren.org)
About the Laboratory for Advanced Computing
The Laboratory for Advanced Computing (LAC) at the University of Chicago
performs research in the analysis of big data, data intensive computing,
cloud computing and high performance networking. (www.labcomputing.org)
About the Open Cloud Consortium (OCC)
The Open Cloud Consortium (OCC) manages cloud computing infrastructure to
support scientific research, such as the Open Science Data Cloud; manages cloud
computing testbeds, such as the Open Cloud Testbed; develops reference
implementations, benchmarks, and standards, such as the MalStone Benchmark;
sponsors workshops and other events related to cloud computing; and provides
support to advanced research projects related to cloud technology. The OCC is
managed by the Center for Computational Science Research, Inc., an
Illinois-based not-for-profit
corporation. (www.opencloudconsortium.org)
About SCinet
During each SC conference, one of the most powerful and advanced networks in
the world, SCinet, is built. Provisioned each year for the duration of
the conference, SCinet brings to life a highly sophisticated, very high
capacity networking infrastructure that supports the revolutionary
applications and network experiments that are the trademark of the SC
Conference. SCinet serves as the platform for exhibitors to demonstrate the
advanced computing resources of their home institutions and elsewhere by
supporting a wide variety of bandwidth-driven applications including
supercomputing and cloud computing. SCinet is created by a unique body of
volunteers, world-class subject-matter experts who have amassed an incredible
breadth and depth of networking and network-construction experience.
About Internet2
Internet2, whose network is owned by U.S. research universities, is one of
the world's most advanced networking consortia for global researchers and
scientists who develop breakthrough Internet technologies and applications
and spark tomorrow's essential innovations. Internet2 consists of more than
350 U.S. universities; corporations; government agencies; laboratories;
institutions of higher learning; other major national, regional, and state
research and education networks; and organizations representing more than 50
countries. Internet2 is a registered trademark. For more information, see
www.internet2.edu.
About ESnet
The Energy Sciences Network (ESnet) is a high-speed network serving thousands
of Department of Energy scientists and collaborators worldwide. A pioneer in
providing high-bandwidth, reliable connections, ESnet enables researchers at
national laboratories, universities, and other institutions to communicate
with each other using the collaborative capabilities needed to address some
of the world's most important scientific challenges. Managed and operated by
the ESnet staff at Lawrence Berkeley National Laboratory, ESnet provides
direct high-bandwidth connections to all major DOE sites, multiple cross
connections with Internet2/Abilene, and connections to Europe via GEANT and
to Japan via SuperSINET, as well as fast interconnections to more than 100
other networks. Funded principally by DOE's Office of Science, ESnet services
allow scientists to make effective use of unique DOE research facilities and
computing resources, independent of time and geographic location. (www.es.net)
About SC11
SC11, sponsored by the IEEE Computer Society and the Association for
Computing Machinery (ACM), offers a complete technical education program and
exhibition to showcase the many ways high performance computing, networking,
storage and analysis lead to advances in scientific discovery, research,
education and commerce. This premier international conference includes a
globally attended technical program, workshops, tutorials, a world-class
exhibit area, demonstrations, and opportunities for hands-on learning.