Long, long ago, in a time when professional travel was still possible, Rachana Ananthakrishnan, executive director & head of products for Globus at the University of Chicago, visited the Indiana University (IU) campus in Bloomington to give the annual IU Women in Cybersecurity talk, sponsored by IU’s Center of Excellence for Women & Technology (CeWIT) and the Center for Applied Cybersecurity Research (CACR).
She also stopped by the Science Node offices to share her insights on how cyberinfrastructure providers have been shaped by unique aspects of the research enterprise. Recent disruptive technologies and trends are creating new challenges and opportunities in data management for today’s research community.
Welcome, Rachana! I’ve been hearing a lot recently about data overload in the research community. Can you break down what that really means and why it’s a problem?
"Researchers want to spend more time on their research and less time on data management – it’s not their end goal. Their mission is to get insights out of the data and derive value from it as quickly as possible. These terabytes and even petabytes of data are becoming more distributed, whether they live in an institution’s data center or in the cloud. Researchers need to collaborate in a simple and secure manner, so they are turning to modern tools and frameworks.
In almost all cases, there are standard operations people perform. They have to move data across systems for processing, visualization, and archiving. They have to share with collaborators, and do that securely. There’s metadata to associate with the data for discovery. And finally, there’s automating these processes as much as possible for efficiency – for example, constructing data pipelines that researchers can run easily. Our team at Globus provides constructs for these, with the mission of reducing time to science and discovery."
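The standard operations described above – moving data, describing it with metadata, and automating the steps into a pipeline – can be sketched in a few lines of Python. This is a toy illustration only: the function names and sidecar-metadata convention are hypothetical and are not part of the Globus API, which handles these operations across real endpoints at scale.

```python
import json
import shutil
import tempfile
from pathlib import Path

# Toy versions of the standard operations (hypothetical names, not Globus APIs).

def move_data(src: Path, dest_dir: Path) -> Path:
    """Move a file to another 'system' (here, just another directory)."""
    dest_dir.mkdir(parents=True, exist_ok=True)
    return Path(shutil.copy2(src, dest_dir))

def describe(path: Path, metadata: dict) -> Path:
    """Associate metadata for discovery by writing a sidecar JSON file."""
    sidecar = path.with_suffix(path.suffix + ".meta.json")
    sidecar.write_text(json.dumps(metadata))
    return sidecar

def run_pipeline(src: Path, dest_dir: Path, metadata: dict) -> tuple:
    """Automate the steps end to end, so a researcher runs one call."""
    moved = move_data(src, dest_dir)
    sidecar = describe(moved, metadata)
    return moved, sidecar

# Usage: stage a sample file, then run the whole pipeline in one call.
work = Path(tempfile.mkdtemp())
raw = work / "sample.dat"
raw.write_text("instrument readings")
moved, sidecar = run_pipeline(raw, work / "archive", {"project": "demo"})
print(moved.name)  # the file now lives in the archive directory
```

The point of the sketch is the last step: once the individual operations are wrapped in a single `run_pipeline` call, researchers spend their time on the data, not on the mechanics of moving and describing it.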