May 2, 2018

The Computation Institute at the University of Chicago covers GlobusWorld 2018:

Most of us are now comfortable with cloud computing, enough to often take it for granted. Whether it’s saving our photos in cloud storage, accessing our email from multiple devices, or streaming a high-definition video on the bus, moving data to and from a distant computing center has become second nature.

But for science, embracing the cloud is far more complicated. Scientific data is typically larger, more complex, and more sensitive than consumer data, and researchers use it in experimental and advanced ways, applying deep analysis and machine learning methods to extract new discoveries.

For two decades, Globus has taken on this challenge by regularly introducing new services that make it easier for scientists to manage their data. From the early days of grid computing in the 1990s to the cloud computing of today, Globus has reduced the peskiest barriers for data-driven research, including transfer, search, authentication, and analysis.

“A lot of people think of Globus as just file transfer, but there’s a lot more to it,” said co-founder Steve Tuecke at the 2018 edition of GlobusWorld, the group’s annual user meeting in Chicago. “Our mission is to increase the efficiency and effectiveness of researchers engaged in data-driven science and scholarship through sustainable software.”

So while the meeting in late April coincided with an important file transfer milestone -- 400 petabytes of data moved between Globus endpoints since 2010 -- many of the talks, tutorials, and user stories focused instead on what comes after data reaches its destination. Whether it's enabling the discovery of promising new materials, helping coordinate multi-site research projects in neuroscience and molecular biology, or facilitating campus- and country-wide storage networks, Globus is increasingly a critical behind-the-scenes partner in some of today's most exciting science.

Read the full story: