The document discusses collaboration between programmers and archivists at UC San Diego Library. It describes how they use an agile development approach called sprints, where cross-functional teams work in short cycles to develop digital library products and services. This approach promotes information sharing and coordination across projects, but can also lead to role ambiguity and increased coordination costs. The document also provides references to open-source technical tools and platforms used by the digital collections team.
1. Technology & Archives: Exchange Forum
Programmer & Archivist Collaboration
Society of California Archivists
May 9, 2014
Matt Critchlow & Cristela Garcia-Spitz
UC San Diego Library
2. A Brief History…
• Digital Library Steering Committee
• Digital Library Products Group
• Digital Library Operations Group
• Digital Library Reformatting Group
• Metadata Policy Group
• Digital Collections Group
• Digital Library Development Program & Research Data Curation Program
14. Pros & Cons
Pros
• Closer coordination & improved teamwork
• Broader communication & participation
• Promotes info sharing across projects
Cons
• Distributed decision-making & process overload
• Role ambiguity & occasional role conflict
• Additional costs associated with monitoring, controlling, & coordinating groups
1 Ford, Robert C., and W. Alan Randolph. "Cross-Functional Structures: A Review and Integration of Matrix Organization and Project Management." Journal of Management, June 1992.
Thank the group for having us today. Quick summary of what we will be talking about: a focus on our collaborative structure and tools. Intros: Cristela has a background in Archives and transitioned to the Digital Library in this position, acting as a go-between who represents the archivist perspective to IT and, vice versa, IT to the archivists. Matt: his role, and whether he has worked with archivists before?
The Digital Library began very small (a Director, an Analyst, and eventually a Project Manager) and was established on the concept of coordinating the skills, knowledge, and experience of staff distributed across various areas of the library to build digital collections, access systems, and tools. Traditionally, DLDP has played the role of coordinator between metadata specialists, subject specialists, programmers, and others. There is a strong focus on content from Special Collections & Archives, and we also work closely with Information Technology Services, Metadata Services, and now Research Data Curation. We formed groups to address different aspects of digital initiatives: the Steering Committee includes the two AULs for Collections Services and Academic Services; collection curators sit on the Collections group; the Reformatting group consists of audio/video/image technicians; and the Metadata Policy Group is the former Cataloging Committee. Each group meets routinely, some of them biweekly, others monthly or quarterly.
The Products Group oversees DAMS development (this is where Matt and I work most closely). Members: three from the Digital Library, two from ITS, plus Metadata Services and Research Data Curation. The group sometimes charges short-term subgroups to carry out specific tasks, pulling in people from different areas of the library (we leaned heavily on the Metadata Policy Group and Special Collections & Archives for work on collection records, DAMS events, and metadata order). Example: the UI functionality group, an opportunity to involve public services staff (SCA, C&O, DUS). These groups formed over the last two years while we were designing and implementing the latest version of our digital repository, DAMS4. Depending on the charge, a subgroup may meet briefly or over a longer term.
We have been using Confluence and JIRA as our knowledge management and issue tracking tools for about four years now. All of the Digital Library groups have a space in Confluence, so it is used to define the role of each group and to document decisions (meeting minutes, project details, timelines, as well as technical specs, user stories, etc.). That makes it easy to search and link to the work of the different groups (and to spy on the others).
An example of one of the subgroups: see the charge and membership, keep track of the timeline and documentation, and link to previous work.
Click on UI Round 2 Review: as we did the internal assessment, we tracked progress by Priority, Level of Effort, and Status. This is a simple communication tool; you don't need a wiki or anything sophisticated to do this, and it can be replicated in Excel.
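Since the slide notes that this tracking table needs nothing more sophisticated than a spreadsheet, here is a minimal sketch of the same idea as a CSV summary. The column names (Priority, Level of Effort, Status) come from the slide; the sample rows and status values are made up for illustration.

```python
import csv
import io
from collections import Counter

# Hypothetical sample rows mirroring the slide's tracking table:
# Priority, Level of Effort, and Status for each UI review item.
SAMPLE = """\
Item,Priority,Level of Effort,Status
Search facets,High,Medium,Done
Image viewer zoom,High,High,In Progress
Citation export,Low,Low,Not Started
"""

def status_counts(csv_text):
    """Tally items by Status for a quick progress summary."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return Counter(row["Status"] for row in reader)
```

The same tally is one pivot table in Excel; the point is that the tracking data itself is just a flat table.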
Now that our DAMS is in Public Beta, we're gathering requirements not only internally via groups like the one Cristela just showed, but also from external users (contact form, Disqus comments, etc.). We log issues reported by users on this Confluence page; in the table entries we add a date, who reported it, and descriptive information. When a ticket is created to address a reported issue, it shows inline (DHH-450) and its status is displayed automatically as well. So what happens when a ticket is created?
All tickets end up in our DAMS Product Backlog in JIRA. The Backlog is generally sorted in order of importance, but this can change at any time if the Products Group decides otherwise. Tickets that are part of a larger effort are tagged with an Epic, an umbrella topic; Epics are shown on the left with progress bars. For us to take action on tickets, however, they ultimately need to be prioritized highly enough for us to work on them in one of our upcoming Sprints.
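The backlog structure described above (rank-ordered tickets, some tagged with an Epic that shows a progress bar) can be sketched as a small data model. This is an illustrative sketch, not JIRA's actual internals; the field names and sample tickets are assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Ticket:
    key: str                    # e.g. "DHH-450"
    summary: str
    rank: int                   # backlog position; lower = more important
    epic: Optional[str] = None  # umbrella topic, if part of a larger effort
    done: bool = False

def sorted_backlog(tickets):
    """Return the backlog in rank order, most important first."""
    return sorted(tickets, key=lambda t: t.rank)

def epic_progress(tickets):
    """Map each epic name to (done, total), like JIRA's epic progress bars."""
    progress = {}
    for t in tickets:
        if t.epic is None:
            continue
        done, total = progress.get(t.epic, (0, 0))
        progress[t.epic] = (done + int(t.done), total + 1)
    return progress
```

Re-prioritizing by the Products Group is then just rewriting the `rank` values; nothing else about a ticket changes.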
This page shows the team membership as well as a Sprint Schedule. It lists each Sprint, its date range (every two weeks), a link to a Report that I'll talk more about in a minute, and the team participation for that Sprint. We try to project team participation as far out as possible so we can plan accordingly. Prior to the start of a new Sprint, we hold a Product Backlog meeting where we review the Backlog shown on the previous slide and prioritize for the upcoming Sprint.
Once work has been designated for a Sprint, we have a view in JIRA that shows the tickets in the various stages of the workflow. On the left is work designated for the Sprint that has yet to be started; the middle In Progress column holds the active tickets; finally, when a developer believes they have completed work on a ticket, it is resolved and placed in the Done queue. At the end of the Sprint, our Product Owner, Gabriela Montoya, reviews each ticket and either accepts or rejects it. If all is successful, we have a new release of the product.
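The board workflow just described is effectively a small state machine. A minimal sketch follows; the "To Do", "In Progress", and "Done" states come from the slide, while "Accepted" and "Reopened" are assumed names for the outcomes of the Product Owner's review.

```python
# Allowed status transitions for a ticket moving across the Sprint board.
# "Reopened" (rejected ticket going back to work) is an assumed state name.
TRANSITIONS = {
    "To Do": {"In Progress"},
    "In Progress": {"Done"},
    "Done": {"Accepted", "Reopened"},   # Product Owner review
    "Reopened": {"In Progress"},
    "Accepted": set(),                  # terminal: ships with the release
}

def move(status, new_status):
    """Advance a ticket, rejecting moves the board does not allow."""
    if new_status not in TRANSITIONS[status]:
        raise ValueError(f"cannot move {status!r} -> {new_status!r}")
    return new_status
```

Making the transitions explicit is what lets the review step work: a ticket cannot jump from To Do straight into the release; it must pass through Done and be accepted.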
Now that work has been completed for the Sprint, we want to communicate that out. We start with a Sprint Report, which I alluded to earlier on the Schedule page. It contains a summary and a list of the applications that were released with their corresponding version numbers. The summary is sent out as part of a weekly email from our Internal Communications Director, alerting all Library staff that a new DAMS release was created.
Further down we get more granular. We add graphs that show what kind of work was accomplished: New Features vs. Bug Fixes, for example. Then we mention highlights from the release that we think will be particularly important to stakeholders, and finally we list all the tickets that were completed, grouped by project. This could certainly be done in Excel, Word, Google Docs, etc. What's nice about this format, for us, is that almost 100% of the report is generated automatically from data in JIRA. While we do send the summary out to all staff, and the Sprint Reports are available for all staff to review, we know that not everyone looks at them.
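The "generated automatically from data in JIRA" step boils down to tallying completed tickets by issue type and by project. A minimal sketch of that tally, run against a payload shaped like a JIRA REST search result (the nesting `issues -> fields -> issuetype/project` follows JIRA's search response; the ticket keys besides DHH-450 and the counts are made up):

```python
from collections import Counter

# Hypothetical payload in the shape of a JIRA REST search result.
SEARCH_RESULT = {
    "issues": [
        {"key": "DHH-450", "fields": {"issuetype": {"name": "Bug"},
                                      "project": {"key": "DHH"}}},
        {"key": "DHH-451", "fields": {"issuetype": {"name": "New Feature"},
                                      "project": {"key": "DHH"}}},
        {"key": "DAMS-12", "fields": {"issuetype": {"name": "Bug"},
                                      "project": {"key": "DAMS"}}},
    ]
}

def count_by(result, path):
    """Tally issues by a nested field, e.g. ("issuetype", "name")."""
    outer, inner = path
    return Counter(issue["fields"][outer][inner] for issue in result["issues"])
```

The New Features vs. Bug Fixes graph is `count_by(..., ("issuetype", "name"))` fed to a chart; the per-project ticket list is the same grouping on `("project", "key")`.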
So at the end of it all, if Cristela (who reported the first two tickets shown) doesn't want to keep up with the process I just described, she can come back here, see that her tickets were indeed closed, and get the latest status. This is part of our strategy to bridge the gap with the technical day-to-day and to keep our stakeholders in the loop, confident that their voices were indeed heard.
While our workflow and communication strategy is continually improving, there are certainly pros and cons to the hierarchy of our organization. We have increased our communication channels in recent years: from DLDP (with the hierarchy of groups Cristela showed in an earlier slide) <-> ITS, to DLDP, the Research Data Curation Program, UC-wide initiatives, and community collaborators (the Hydra and Blacklight projects). It's great having this extended reach to new stakeholders, as well as including some (like public services) who were on the fringe of the product in the past. But it comes with increased overhead and makes prioritization in the short and long term more complex. There is also confusion at times about who can really speak authoritatively to the importance of particular feature requests, content prioritization, etc. And while we have a more transparent, integrated, collaborative communication platform, not all of the folks involved in the product feel involved and heard. We still believe: "If you want to go fast, go alone. If you want to go far, go together." But going far together isn't always easy. In addition to the workflow and process we just described, we've tried taking some other steps to address the divide; Cristela is going to talk about a recent effort that we felt was successful.
Implementing a matrix organization is a complex process, requiring organizational, structural, and cultural changes over time. New tools such as Confluence and JIRA facilitate this process. We are also exploring new ways of community interaction: we recently held a one-day internal unconference (about 30 people from different areas of the Library attended). It was an opportunity for those who work on digital initiatives to meet each other, talk openly and creatively about common issues, and voice ideas for future directions.
See the content at library.ucsd.edu/dc. Go to the Digital Library Development site for info on the DAMS (links to the data model and technical architecture on GitHub, info on Hydra), metadata, etc. Go to the Research Data Curation site for info on the campus-wide Research Cyberinfrastructure Initiative, data management plan tools, etc. Many individuals have contributed to the development of the UC San Diego Library DAMS. Acknowledgement and thanks, especially to those here today!