
Using Data to Shape a Library’s Direction | Data-Driven Academic Libraries

Access to good data on key metrics such as circulation and student visits always helps make a better case for the important role libraries play on campus. But using data proactively to address emerging trends and challenges is “what it really means to be a data-driven organization,” said Sarah Tudesco, Assessment Librarian at Yale University, during yesterday’s “What Is a Data-Driven Academic Library?” webcast.

The webcast was the first of a free, three-part LJ series developed in partnership with Electronic Resources and Libraries (ER&L) and sponsored by ProQuest, Springer, and Innovative Interfaces. The series is moderated by Bonnie Tijerina, Head of E-Resources and Serials for Harvard Library and founder of ER&L.

Tudesco suggested a five-part process for libraries interested in making data central to strategic decision making: (1) identify questions, (2) develop a plan to collect the necessary data to answer those questions, (3) collect data, (4) analyze the data, and (5) generate actionable recommendations.

Broad questions, such as “is our library meeting the needs of our community now, and will it in five years?” will need to be broken down into more manageable chunks, she added. Those might include questions about who is currently using the collection, whether a library is more geared toward graduate students or undergrads, who is participating in programs, or who is currently using the library spaces. Establishing manageable questions will help anchor efforts going forward, Tudesco said. Otherwise, it’s very easy for a project to get lost, given the volume of usage data.

After establishing the questions, data will need to be compiled from different sources, which Tudesco grouped into three “buckets”: systems, workflow, and patron input. Systems data is the most straightforward. Tools such as Google Analytics can help track website traffic. And data on circulation, financials, collections and acquisitions, e-resource use, gate counts, and resource sharing are tracked internally, although vendor support may be needed to extract information from some systems.
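To make the “systems” bucket concrete, here is a minimal Python sketch of lining up two such exports, a Google Analytics sessions report and an internally tracked gate count, in a single monthly table. The file names and column names are placeholders invented for the example, not anything described in the webcast.

```python
# Hypothetical sketch: combining two "systems" exports -- a Google Analytics
# sessions CSV and an internally tracked gate-count CSV -- into one monthly
# table. File names and column names are assumptions, not from the webcast.
import pandas as pd

# Assumed columns: month, sessions (exported from Google Analytics)
web_traffic = pd.read_csv("ga_sessions_by_month.csv")

# Assumed columns: month, gate_count (exported from the library's own system)
gate_counts = pd.read_csv("gate_counts_by_month.csv")

# Line the two metrics up by month so trends can be compared side by side.
systems_data = web_traffic.merge(gate_counts, on="month", how="outer")
systems_data = systems_data.sort_values("month")

print(systems_data.head())
```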

Many libraries have systems to track workflow, such as staff records of questions answered or time spent helping patrons at reference desks. Technical services departments may keep records of the number of items cataloged per year, and preservation departments might track the number of preservation projects completed annually. Compiling this type of data may prove to be a challenge, however, since there are often many different systems and tracking may be inconsistent.

And patron input—drawn from surveys, focus groups, and social media analytics—is a source of qualitative data that can offer additional perspective on systems and workflow data.

Collection also involves placing the data in a program where it can be manipulated and analyzed, such as a spreadsheet or SQL database. Even ubiquitous programs like Microsoft Excel are powerful tools in the hands of expert users.
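As a rough illustration of that step, the sketch below loads a hypothetical circulation export into a SQLite database and runs one query against it to answer an anchor question of the kind Tudesco described. The file layout and column names are assumptions made for the example.

```python
# A minimal sketch of the "spreadsheet or SQL database" step: load a
# hypothetical circulation export into SQLite and answer one anchor question
# (who is using the collection?). File and column names are assumptions.
import csv
import sqlite3

conn = sqlite3.connect("library_data.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS checkouts "
    "(item_id TEXT, patron_type TEXT, checkout_date TEXT)"
)

# Assumes a CSV export with columns: item_id, patron_type, checkout_date
with open("circulation_export.csv", newline="") as f:
    rows = [(r["item_id"], r["patron_type"], r["checkout_date"])
            for r in csv.DictReader(f)]
conn.executemany("INSERT INTO checkouts VALUES (?, ?, ?)", rows)
conn.commit()

# Checkouts by patron type: a quick read on whether the collection skews
# toward graduate students or undergraduates.
for patron_type, total in conn.execute(
    "SELECT patron_type, COUNT(*) FROM checkouts "
    "GROUP BY patron_type ORDER BY COUNT(*) DESC"
):
    print(patron_type, total)
```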

“I really advocate becoming proficient at Excel,” Tudesco said, describing it as a “core tool in your arsenal” and advising viewers to consider taking a class to deepen their understanding of its capabilities. Similarly, she noted that Google recently produced a MOOC on Google Analytics to help users get the most out of the service. Although courses on spreadsheets and analytics software tend to be targeted at corporate audiences, they still offer plenty of information that a data-driven library can use. Other programs, such as OpenRefine, can help clean up data and make disparate datasets work together, while ATLAS.ti can help with qualitative data analysis.
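For readers curious what that kind of cleanup involves, here is a small, hand-rolled Python example of the reconciliation work tools like OpenRefine automate: mapping inconsistent spellings of the same value to a canonical form so that datasets can be joined. The values and mapping are invented for illustration.

```python
# The sort of cleanup OpenRefine automates, sketched by hand: reconciling
# inconsistent spellings of the same value so datasets can be joined.
# The example values and mapping are illustrative, not from the webcast.

RAW_DEPARTMENTS = ["History", "history ", "HIST", "Hist.", "Chemistry", "CHEM"]

# A manually curated mapping from messy variants to canonical names.
CANONICAL = {
    "history": "History",
    "hist": "History",
    "hist.": "History",
    "chemistry": "Chemistry",
    "chem": "Chemistry",
}

def clean_department(value: str) -> str:
    """Trim, lowercase, and map a raw department string to its canonical form."""
    key = value.strip().lower()
    return CANONICAL.get(key, value.strip())

cleaned = [clean_department(d) for d in RAW_DEPARTMENTS]
print(cleaned)  # ['History', 'History', 'History', 'History', 'Chemistry', 'Chemistry']
```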

Analysis should culminate in actionable recommendations. Tudesco advised viewers to keep in mind that library data can be very specialized, and that one key goal is to translate this information for a provost or other administrators. Data visualization can help, but “it’s not just about developing beautiful charts…. Learn to tell a story,” she said.
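As a loose illustration of that advice, the short sketch below, using invented numbers, builds a chart whose title states the recommendation rather than simply labeling the metric.

```python
# A hedged illustration of "telling a story" with a chart rather than just
# plotting numbers: invented figures, with the takeaway stated in the title.
import matplotlib.pyplot as plt

patron_types = ["Undergraduate", "Graduate", "Faculty"]
checkouts = [4200, 9800, 2100]  # illustrative numbers, not real data

fig, ax = plt.subplots()
ax.bar(patron_types, checkouts)
ax.set_ylabel("Checkouts (academic year)")
# The title carries the recommendation an administrator needs to hear,
# not just the name of the metric.
ax.set_title("Graduate students drive most circulation -- collections should reflect that")
fig.tight_layout()
fig.savefig("circulation_story.png")
```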

The three-part webcast series will continue next Wednesday, December 11, with “The Evolution of Usage and Impact: Analyzing and Benchmarking Use,” featuring presentations by Emily Guhde, Online Services Librarian for NC Live; Jill Morris, Assistant Director for NC Live; and Michael Levine-Clark, Associate Dean for Scholarly Communication and Collections Services for the University of Denver; along with special guests Jason Price, Program Manager for the Statewide California Electronic Library Consortium, and John MacDonald, Associate Dean for Collections for the University of Southern California. The series will conclude on December 18 with “Measuring Impact: Redefining Scholarly Value Through New Data,” featuring speakers Jason Priem, co-founder of ImpactStory; Gregg Gordon, President and CEO of the Social Science Research Network; and Jennifer Lin, Senior Product Manager for the Public Library of Science.

About Matt Enis

Matt Enis (menis@mediasourceinc.com; @matthewenis on Twitter) is Associate Editor, Technology for Library Journal.

Comments

  1. zia abdelkader says:

    Data is important for all librarians’ decisions.

  2. Love the term “data driven.” I’m happy to see anyone suggest we analyze data rather than simply track and record statistics. I’m a big advocate of letting computers do the math and librarians reap the benefit.

    I posted similar stuff on my own blog, if it is of interest:
    http://www.joshinglibrarian.info/2013/05/presentation-putting-value-in.html
    http://www.joshinglibrarian.info/2013/05/mission-statements-promises-we-make.html
    http://www.joshinglibrarian.info/2013/05/measuring-value-delivered-by-library.html
    http://www.joshinglibrarian.info/2013/05/postscript-library-statistics-and.html

    I only wish more public librarians attempted the kinds of rigorous evaluation you’ve written about in this post.