
BHL staff fall meeting 2011 notes

Back to 2011 BHL Staff Meeting Agenda | Parking Lot
Jump to BHL Workflow notes

9:15 [30 min] New BHL MOU | Lead: Martin Kalfatovic Notes: Grace Costantino

Governance: There is a newly approved memorandum of understanding with a three-level organizational structure:

1) Institutional Council - anyone who signs the MOU saying they want to be part of the team and want to participate in some way

2) Steering Committee
These members can participate in the governance of BHL and pay a $10,000 per year support fee to support BHL development. The Steering Committee will decide how the money will be spent. If you are not a Steering Committee member, you can always upgrade later.

3) Executive Group
This group does the day-to-day management of BHL activities. Cathy Norton is the current chair. Nancy will be the chair in March.
A new election at the March meeting will fill the position behind Nancy (Connie may continue for another year as secretary).

All steering committee members can join in on weekly Executive Group calls.
We do need to come up with a list of contacts for each Steering Committee institution.
Martin will set up discussion list for the contacts from the Steering Committee.

Global Governance:

key things:


Trish: How do we share info on our side with global partners, collaborate, and ask questions?
Martin: We might have international worker bee groups like our worker-bee groups in the US to allow us to collaborate and communicate more.

Suzanne: We want it to be ground-up, but we need someone to help us make the connection. From the top down, someone needs to make something happen.
Martin: We will follow up on that. There are staff issues and language issues that make it complicated but we'll work on it.

Susan: BHL has a set of bylaws. Does the change in governance affect these?
Martin: Good question. The Steering Committee will review them and see if we need to make adjustments to those.

Bianca: Within the US/UK framework, are we trying to structure communication between governance and worker bees more for the future? Will top levels talk to lower levels more?
Martin: We want a more efficient way to get things happening at the governance level, and a better way to communicate those happenings to the group. We might use the listserv to flow information out to everyone more efficiently.

Suzanne: There is a black hole for getting information from higher levels. Will there be yet another listserv?
Martin: We will not commit to anything now but there will be a more active communication stream and I will talk to all of you to see what we want that stream to be. I am responsible for communications in general from the Executive Committee, so let me know what you want from me. SIL posts notes of meetings now, so that might be a good guide for communicating from the executive group.

If there are any questions, email Martin and cc Bianca.

Martin: SIL will be hiring a project manager. I will take over for Tom. The project manager will provide lots of help with communications, spreadsheet work, etc. This person will make communication more active and efficient.

Suzanne: How many BHL staff people are there everywhere? (full-time BHL)
Martin: I don't know what's happening at other institutions. SIL will have Martin, Bianca, the project manager, Gilbert, 1/4 Erin, and Stefaan; Missouri has Michelle and the scanners, Chris, Trish, Mike, and David (full-time Citebank). One other thing we did in 2010 was a survey of all participants, with categories of staff indicating what they do and what percentage of FTE was devoted to BHL. We came up then with an in-kind estimate of BHL for these staff members. It would probably be good to do this again for newcomers.

Mary from Cornell: Where is everyone getting their money from?
Martin: Institutional funds, outside grant funds, and there were others included in the document discussed above. It also included estimates for cost of scanning at each institution (full load cost of scanning with work). I will take this report on to get it out to everyone.

Trish: The project manager will be for US/UK. William was hired as global project manager, but he's taken on the role of tech team project manager. Are the roles of each clearly defined?
Martin: No. We'll have to work on defining that. William will handle global management with Bianca's help. The tech team is self-organizing.

Bianca: I sit in every Monday on tech team weekly calls to see what information I can glean and bring to staff. That's an opportunity for communication but we do need to define roles.

Martin: BHL-E has a good organizational structure. We will try to copy that for US/UK.

Suzanne: bhl classic didn't lose any partners but they just went to different places in the governance structure. but we did gett cornell and usgs. does anyone know what's going on in Philadelphia since they were absorbed by Drexel.

Becky: Their library still exists. It's now the Academy of Natural Sciences of Drexel University. We don't know more beyond that.

Suzanne: Does anyone know the library structure at Drexel?
Martin: Very hierarchical.

Martin: Since Cornell is a large-scale digital library shop, we need to get together with them and USGS to see what we need to do to integrate them, so we can understand what's happening with them and with us. We've ingested lots of content from IA, but we need to know what more there is.

Suzanne: We need to collaborate better amongst ourselves too.

Martin: It will be an exciting new year as we transition.

=================END DISCUSSION========================================

9:45 [30 min] BHL Domains of Tasks | Lead: Bianca Crowley Notes: Matt Person


A discussion of the various domains of tasks needed to run the BHL. A goal of the discussion will be to assign task domain leads who can serve as the organizers and points of contact for these tasks. What tasks are you involved in/want to get involved in? What task domains are not listed? One of the key tasks is BHL documentation, i.e. the BHL Cookbook, and there is a need to improve this going forward.

  1. Bianca began by speaking about the concept of domains of tasks, referring to Tom: organically working together, no demands…but the need to get jobs done.
  2. General Points:
  3. Bianca: how do we get ideas through the tech group?
  4. Bianca pointed out buckets of tasks on posters on the wall; purpose of next block of time is to assign points of contact people:
  5. Social Media: Point people: Gilbert, Bianca. Gilbert is the go-to guy for Flickr images.
    1. Grace will speak more about social media at lunch
  6. Reference and scanning requests – point people: Grace and Bianca
  7. Gemini – point people: Grace and Bianca. Grace explained Gemini, issue creation, and the feedback button
  8. Portal editing – point person: Grace
    1. pagination: point people: Gilbert, Chris Cardon, JJ, Bianca, Martin
    2. can others take on pagination tasks?
    3. Bianca: maybe have pagination interns in the future
    4. JJ: pagination = 20-30 minutes per book
    5. prioritizing books based on Flickr, etc.
    6. Cornell (please fill in your name) said: they have a tool, will share…
    7. Keri, Trish, Bianca: maybe a pagination task force, to discuss and for training
  9. Citebank – point person: Trish
  10. Collection development – Bianca and collections committee
  11. Internal wiki process – everyone has access, everyone contributes (mentioning the Cookbook). Martin: anyone can join the wiki; please make clear who you are when you request membership
  12. Grace – Cookbook: a place for everyone to put best practices, organized into buckets and sections. Discussion of priorities and a possible June 1 deadline. Comments raised: private wiki, identifiers and DOIs, open access, our policy. See the "BHL Cookbook outline and assignments" page: all sections and those assigned to them (in blue, or in purple if it makes sense for you). Action item: look at the page and get questions to Grace (sign-up sheet).
  13. Public wiki dev
  14. Scanning workflow – point people: Bianca ("bird's-eye view"), Grace ("on-the-ground nuts and bolts")
  15. Technical dev – point people: Grace (general)
    1. More technical people: Bianca, Martin, Chris
    2. Trish mentioned William had been taking global and tech group notes; Martin points out MacArthur grant requirement changes coming up
    3. Martin: today’s agenda and outcomes will inform tech development
  16. Permissions – point person: Bianca
  17. Global coordination – point person: William
Below: not discussed, future follow up
  1. Reporting and communication
  2. Public relations and marketing – on hold
  3. Public wiki development
  4. Need to think about the above three in the future
Bianca ended session.

10:30 [60 min] BHL Workflows | Lead: Keri Thompson Notes: Grace Costantino


handout - mental visualization of the BHL workflows

7 workflows for getting content into BHL - here, BHL means the portal or Citebank (with Citebank, it's content brought in from other contributors)

Above handout groups things into different sections:

IA scribe scanning:
Books are shipped to IA facilities, after which they get into BHL through IA - an easy workflow. The Scribe is the machine that does the scanning; individual operators work each machine. IA creates all derivative files and does rough pagination.

Botanicus Workflow:
Content goes exclusively through MOBOT. This workflow is how all MOBOT content gets into BHL. To date, MOBOT and IA content have sat separately, but now we're bringing MOBOT content into IA to synchronize. This is primarily for global replication.

Send to MOBOT Workflow:
NYBG and MCZ do this. It's not the most sustainable workflow but does exist.
Joe: This is mostly for things that cannot be scanned at IA because of size (size of books or foldouts), and for special collections and fragile items scanned in-house. Harvard has an internal digitizing facility to which they can send content of the above type for scanning. They send about 40 items at a time after reviewing the materials. These items are checked out in the online system, enabling them to create a report from the ILS with the metadata that needs to go over with the books for imaging. Imaging Services then scans the books. This usually takes a few months. They create a lot of metadata with the scans and send all the files, metadata, and per-page OCR back to Harvard on a transportable hard drive. They send METS files for items and MARCXML for each file. All of this is sent to MOBOT. The file format used is JP2 images. Up until now that's what's been sent to MOBOT, but recent discussions reveal that there need to be tweaks to make the process easier for MOBOT. MCZ is now going to create regular JPGs instead of JP2s and will adjust a few other things as well. They will send a few items scanned with the new workflow to MOBOT to see if that makes it more efficient.
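Loosely illustrating the kind of delivery described above (not MCZ's or Imaging Services' actual layout), here is a minimal Python sketch that checks a delivered item directory for the expected pieces: JP2 page images, per-page OCR text, a METS file, and a MARCXML record. The directory layout and file naming are assumptions.

```python
from pathlib import Path

# Hypothetical layout: one directory per item containing *.jp2 page images,
# matching *.txt OCR files, an item-level METS file, and a MARCXML record.
def check_item_package(item_dir: Path):
    problems = []
    jp2s = sorted(item_dir.glob("*.jp2"))
    if not jp2s:
        problems.append("no JP2 page images found")
    if not list(item_dir.glob("*mets*.xml")):
        problems.append("missing METS file")
    if not list(item_dir.glob("*marc*.xml")):
        problems.append("missing MARCXML record")
    for jp2 in jp2s:
        # Expect one OCR text file per page image
        if not (item_dir / (jp2.stem + ".txt")).exists():
            problems.append(f"missing OCR text for {jp2.name}")
    return problems

if __name__ == "__main__":
    import sys
    for item in Path(sys.argv[1]).iterdir():
        if item.is_dir():
            issues = check_item_package(item)
            print(item.name, "OK" if not issues else "; ".join(issues))
```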

NYBG: Similar workflow to MCZ for the material sent to MOBOT, but they don't send as robust a metadata package. They're doing in-house scanning through grants that they have and send the scans to MOBOT on an external hard drive for ingestion into BHL. They're interested in fine-tuning the workflow.

MOBOT: Michelle: recently talked to Mike Bloomberg. She didn't know Joe had been sending MARCXML files, which she needs and had been creating from scratch. They are going to see if they can manipulate the MARCXML files so Michelle doesn't have to create the MARC from scratch.

Joe: The MARCXML files are cryptically named and it wasn't obvious that they were there.

Michelle: We need to talk more with Joe because the changes and the new insights will save a lot of time and make things more efficient.

Bianca: We all know how much work it is on everyone's end for the MOBOT workflow. This is not efficient for large chunks of stuff. MOBOT staff work is very intensive for this process. They're sending content through the Botanicus workflow. They're not using the OCR MCZ is creating. They're using their own software to make derivative files and OCR. It's not something we can open up as a free-for-all for everyone to send their content.

Keri: The MOBOT workflow is also the least-efficient so it makes the most sense to send content through IA whenever possible.

Chris: I'm also involved in the JRS discussions (African colleagues talking about digitizing content in Africa). Imagine the communication problems we have just within our own institutions, and it's that much harder for the African colleagues. There will be a report from the meeting that will be very eye-opening.

Bianca: We want to focus away from these workflows and focus more on the magical spreadsheets and Macaw, which are alternate ways to get content into IA. Another point: we'll go into detail on this later, but there is also additional content in Citebank that we want to get into BHL. These are different kinds of files and formats than our standard type, and we need new workflows to get them in. Some societies have only article PDFs, which we can't put in the portal now, so we put them into Citebank. Trish and the team at MOBOT manage this workflow to get things into Citebank. We're thinking about having people contribute things to Citebank in the future, but that's for a later discussion.

Magical Spreadsheets:

Keri: What are the important pieces of the puzzle? Both the magical spreadsheets and Macaw help you create the metadata files that IA expects. Specifically, I'm not sure how the magical spreadsheets work, but the important files that you need are: MARC XML, item-level data, page-level data, and image scans. IA expects these as XML and JP2s in a zip or tar file. From experience we know that if you submit them to IA, it is good to super-compress the JP2s. Most institutions have MARC that they can get into XML from their catalogs. The difficult data is the page- and item-level data. Everyone will have a different method for retrieving item-level data from their catalog. The big pain is the page-level data. Doing it by hand doesn't scale. IA is expecting more than just page number and image number; they also expect pixel dimensions and such.
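Because page-level data is named as the big pain point, here is a hedged sketch of how some of it could be generated automatically rather than by hand. The column set is illustrative only, not IA's actual schema, and reading JP2 dimensions with Pillow assumes a Pillow build with OpenJPEG support.

```python
import csv
from pathlib import Path
from PIL import Image  # Pillow; JP2 support requires OpenJPEG

# Hypothetical helper: build a page-level metadata table for one scanned item
# from its directory of JP2 page images. Columns are illustrative, not IA's schema.
def build_page_table(item_dir: Path, out_csv: Path) -> None:
    with out_csv.open("w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["leaf", "page_number", "width_px", "height_px", "image_file"])
        for leaf, jp2 in enumerate(sorted(item_dir.glob("*.jp2"))):
            with Image.open(jp2) as im:
                width, height = im.size
            # Page numbering usually needs human review; start with a guess.
            writer.writerow([leaf, leaf + 1, width, height, jp2.name])

# Example call with placeholder paths
build_page_table(Path("scans/mybook_001"), Path("scans/mybook_001_pages.csv"))
```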

Macaw:

Keri: For SIL in-house scanning (mostly folios and some rare items), Joel Richard developed Macaw, a metadata collection workflow system that allows us to do pagination before the content is sent to the portal. It creates all derivatives, processes and packages the files, and pipes them up to IA at night. Macaw runs on a web server; it's written in PHP and will be released as open source on Google Code. It might be an option for some institutions with some technical ability to help get scans into IA. It will not work for everyone because you need some technical expertise and the ability to set up a small web server. We will work on setting up alternate options like a centralized Macaw system, but we're not yet sure how or when that would work. Joel is the contact for Macaw.

John: If there's a centralized installation, would I still work on files locally and then send them all to SIL? Then it's still a similar process to the current MOBOT process and it defeats the whole purpose.

Keri: It's not as tedious a process to get the content into IA as the MOBOT workflow.

John: The bottleneck is whether small institutions have the infrastructure to set this up. But it looks like it's actually a simple install.

Martin: Let's table the details for now and we can discuss a centralized system later. For people that already have robust scanning systems in place, we can discuss how to use Macaw to transfer data to IA.

Keri: We have documented well which files you need and how to send them to IA. That documentation is on the public wiki. Joel, with Mike, wrote up in detail how to get files up. The first step is talking to IA and getting an account with them.

Simon Sherrin: In Australia, we've grabbed the code and prepped it and are working collaboratively to set it up. We're setting up a workflow at Museum Victoria and will have an additional scanner that will go around to Australian institutions so others can scan material as well. We're almost at the point of getting things up to IA. The problem is having a central site that people can send content to for scanning. We want individual institutions to paginate their files, and that's the complicated process. The upload process is also not perfect, and it often fails. If we could have one central place where we could deal with upload issues to IA, that would help. I will report back later on the process and will be in constant contact with Joel.

Marty: How is material in IA identified to be harvested for BHL?

Keri: It's put in a particular collection, which is part of the item-level metadata. That's part of working with IA. You have to get rights to upload to your particular collection. Mike is the "magic" guy that harvests the files.

Marty: We already have some material in IA, not all of which is appropriate for BHL.

Keri: You can put things in up to three collections.

Mary: So, to get it into BHL, all we have to do is put the tag in to make it part of the BHL collection?

Keri: Yes.
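To make the collection tagging concrete: below is a minimal sketch of an upload via IA's S3-like API, where collection membership is just item-level metadata sent as headers. The access keys, item identifier, file name, and second collection are placeholders, and "biodiversity" is offered only as an example of a harvestable collection id; the actual setup for BHL harvesting is whatever is agreed with IA and Mike.

```python
import requests

# Placeholder credentials and identifiers; replace with values issued by IA.
IA_ACCESS, IA_SECRET = "YOUR_ACCESS_KEY", "YOUR_SECRET_KEY"
item_id = "exampleScannedVolume1900"

headers = {
    "authorization": f"LOW {IA_ACCESS}:{IA_SECRET}",
    "x-amz-auto-make-bucket": "1",
    # Up to three collections can be listed; the harvest keys off the collection tag.
    "x-archive-meta01-collection": "biodiversity",
    "x-archive-meta02-collection": "exampleLibraryCollection",
    "x-archive-meta-mediatype": "texts",
    "x-archive-meta-title": "Example scanned volume (1900)",
}

# Stream a packaged file (e.g., zipped JP2s) into the new item.
with open("exampleScannedVolume1900_images.zip", "rb") as fh:
    r = requests.put(
        f"https://s3.us.archive.org/{item_id}/{item_id}_images.zip",
        headers=headers,
        data=fh,
    )
r.raise_for_status()
```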

John: Is wonderfetch dead?

Keri: No. People scanning with IA still use wonderfetch.

Becky: IMLS Connecting Content project: the biggest obstacle is figuring out how to get the levels of metadata needed for boutique scanning of field books and getting them into IA. This is all a work in progress.

Suzanne: Macaw might be a solution for Connecting Content?

Becky: Chris Freeland suggested: because the field books are small and a manageable chunk of scans, it might be easier to use the magic spreadsheet and then use the IA book uploader, since it's a manageable collection of content. The metadata problem is the bigger problem.

Caroline: There are two metadata problems: how do you get the item- and page-level metadata, and what kinds of metadata do you actually need in order for these items to be found by people?

Christine (from the Field Book Project): We also need to know how we massage information that's designed for published material for use in BHL.

Martin: We keep saying we're putting field notebook stuff into BHL, but the two projects' interpretations of what that means differ, and it worries me. We were thinking about different things when we said we'd accept field notebooks. We're not expecting field notebooks to show up in BHL and sit alongside existing traditional books. We're working now on a Macaw module to assist with the scanning workflow process that will handle field notebook scanning.

PARKING LOT: field notebooks

Christine (from the Field Book Project): We're really interested in Macaw because, though the magical spreadsheets are a good fit for now for the grant, with the deadlines we have, Macaw would be an interesting option. If it just requires some technical workflow, it might be an option. It might work better if someone else could host it for small institutions that don't have the technical infrastructure to set up the application themselves.

Bianca: (introduces Caroline) Caroline, can you give a brief overview of the Field Book Project?

Caroline: I'm the project manager for the Field Book Project. It's based at SI. It's a three-year project for improving access to field notes, journals, photo albums, and other material that helps document specimen collecting. We're developing the system and moving it into Islandora. We've been talking to BHL from the beginning of the project to make sure that our metadata is very similar to BHL metadata. We're using MODS and collection-level records. We want to move forward and have page-level delivery, but we're not there yet. Since we're on deadlines to present the content, there's been discussion of using BHL.

Bianca: Now it's time to explain what the magical spreadsheets are and do a Macaw demo.

Joel will do a five-minute demo. See macaw.joelrichard.com for more information.

Magic Spreadsheets:
Mike: In most cases, if Macaw can be set up as a centralized system, it will be a better system and will supersede the use of the spreadsheet. The spreadsheet was originally set up in response to the Darwin's Library project, when I needed something to use immediately. It worked and allowed us to get 250 items up to IA. It's an Excel workbook with spreadsheets to collect title- and item-level metadata. There's an app that takes the spreadsheets, a macro file, and images, creates the correct files, and uploads to IA. It's like Macaw but more labor-intensive. For field notebooks, it might be a good solution, but in most cases Macaw would be the way to go.

Bianca: For the spreadsheets, people have to manually input all the information. It's good for small projects but does not scale.

Mike: There is a line of metadata for each page, but I created some auto-fill mechanisms so I didn't have to manually input all the data for each page.
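As an illustration of that kind of auto-fill (not Mike's actual Excel macros), the sketch below expands a compact pagination spec into one metadata row per page, so only the ranges have to be typed by hand.

```python
# Hypothetical auto-fill: turn (count, label function) ranges into per-page rows.
def expand_pagination(spec):
    rows, leaf = [], 0
    for count, label_fn in spec:
        for i in range(count):
            rows.append({"leaf": leaf, "page_label": label_fn(i)})
            leaf += 1
    return rows

# Example: 4 unnumbered leaves, 8 roman-numeral pages, then 120 arabic-numbered pages.
ROMAN = ["i", "ii", "iii", "iv", "v", "vi", "vii", "viii", "ix", "x"]
rows = expand_pagination([
    (4, lambda i: ""),            # blank labels for unnumbered leaves
    (8, lambda i: ROMAN[i]),      # front matter
    (120, lambda i: str(i + 1)),  # body pages
])
print(rows[:3], rows[-1])
```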

Bianca: If someone wants to work with them, what do they do?

Mike: We have a template for the spreadsheet. They fill it out and decide whether to use the utility or send the files up to IA. But they also have to send the image files to me, so it's a workflow for me as well.

Trish: Is the magic spreadsheet up anywhere?

Mike: Not sure. It's in the Dropbox account, but I'm not sure if it's on the wiki. It would be worthwhile to put it up there.

Bianca: Are we saying it's okay to pursue the magical spreadsheet method?

Becky: Because of time limits, the Field Book Project will probably use the magical spreadsheets, but what we need is a workflow plan and a metadata plan to give to the people scanning field notebooks.

Bianca: Is the magical spreadsheet a viable method for other institutions? Are there certain requirements so you know which to use?

Keri: We should handle it one on one with each institution to decide what meets their needs. Depending on how much stuff they have, they could use the manual book uploader.

Martin: We need to revisit what we did years ago and survey what institutions have content-wise that's not yet in the portal but needs to get there. Then we can decide which buckets to put them in for ingest.

Bianca: That is where technology, collections, and workflows meet.

Martin: Collections is the place to start. We need to do a quick survey of people.

Kevin: Being able to upload directly from Macaw is the best solution. Not having to copy files to a drive and send them to MOBOT to upload would be great.

=========END DISCUSSION=========================================

1:00 [60 min] Round Robin | Lead: Bianca Crowley Notes: Michelle Abeln


Round Robin Notes:

Questions put together by Collections Group.

Wiki worksheet was filled out ahead of time, and has great information.

NHM [Allison]: scanning but not a huge amount, dependent on money flow. Have their own Scribe machine; 15-20 items a fortnight. IA does all QA. Would like to have in-house scans ingested into BHL.

Kew [David]: have been in talks with Allison at NHM on how to begin. Still at the starting point, working out whether they can scan in-house or ship to NHM's scanning machines. Some staff don't want rare material shipped off premises. Have not done any catalogue linkages; interested in what other institutions are doing. [Bianca]: do you have a sense of timing on when you want to interact with the new members group and start scanning? [David]: as soon as possible. [Bianca]: are you able to do gap filling for botanical collections? [David]: yes, that's our aim, to help fulfill that role. Kew holdings are currently not in OCLC.

MBL [Matt P.]: introduces new library director Holly Miller; Cathy Norton is no longer director. Are actively scanning, but basically doing Gemini requests. Sending out small shipments every month or so. In the past, have done more rigorous QA; now do minimal check-in of journals when returned. [Bianca]: does your minimal check-in conform to QA standards? [Matt P.]: we really don't have time for even that amount of QA; have purchased a scanner within the past year, but have lost funding to staff the scanner. Thinking of more creative ways to deal with that (e.g., volunteers with skills to help with scanning). Have been linking the Voyager catalogue to MBL scanned content from the beginning; the systems librarian runs a script to help with that. Yes, have Gemini and other backlogs; tries to keep up with important priority materials. Did some pagination a year ago. Have scanned 300-500 volumes in the post-1923 to 1960 category. Diane Reinlinger went through the Stanford database to find materials out of copyright. Only one instance of being contacted by an outside institution to remove post-1923 content (New York Academy of Sciences).

MCZ [Joe]: head of technical services. We are actively scanning with IA, slowly shipping to the IA facility in Boston once every 2 months or so, about 200 volumes. But more and more of it is moving towards in-house scanning. For the remaining funding (which runs out at the end of the fiscal year, end of June 2012), wants to concentrate on rare and special collections, all of which is done in-house and sent to MOBOT. We manually link BHL holdings of our own content to the catalogue; haven't found a way to automate the process. Harvard automatically harvests open access journals and creates catalogue records for these journals; working with the system to also pull in BHL holdings was taking a long time, so did all links manually. Manageable backlog; Gemini requests are underway or in the queue. In-house scanning takes a long time, so there is a long list of Gemini requests waiting to be done, but it will just take a while due to the length of the scanning process. Haven't done any work on post-1923 content, but are very interested; up in the air whether it can be done at any time in the future (depends on the Harvard legal department).

NYBG [Kevin]: manager of technical projects at NYBG; currently not doing any BHL scanning, not sending out any shipments; do Gemini fill-ins when we can; shipping a non-BHL shipment this fall on a Metropolitan NY Library grant. Have an in-house scanning process and a digital library through CONTENTdm, funded through a Mellon grant; that material is currently not in BHL, but would like to get it there eventually. Does manual catalogue linking to all BHL content. Do have a scanning backlog, mostly Gemini requests for gap-fills, including a rescan of Prodromos that has quality issues. Possibly interested in scanning post-1923 content, but the decision needs to be made higher up. [Bianca]: Who works on Gemini requests? [Kevin]: myself and Don Wheeler.

AMNH [Bianca speaking for Matthew Bolen, who could not attend]: Matthew had a concern that institutions aren't doing retrospective QA; he got interns to help. He said they've found a lot of problems, are now out of scanning funds, and their scanning center has moved so they cannot send shipments out. But they are sending small shipments to SMI via the BHL FedEx account. He's not sure how to address the QA issues now that the scanning facility is closed. [Kevin]: has talked with Matthew; feels Matthew has found more errors. Kevin says they've been contacted by users about QA errors and work with Mike L. and MOBOT to get fixed pages inserted into books. It's a "fix-around" that bypasses IA; the errors are still present in IA. [Suzanne]: parking lot!

SMI [Grace]: still sending shipments; Gilbert pulls books; scanning is almost completely Gemini requests. In-house scanning of folios is done through Macaw. QA is being done. Gilbert and Grace do pagination as needed. [Bess]: have only added links once – did a global addition of 856 tags to MARC records in the ILS that point out to BHL, only for SMI-scanned monographs. Hoping to do another addition of monographs and serials this winter, with links for holdings, whether for SMI-scanned content or not. [Keri]: gets a report out of IA that matches bib numbers and barcodes. [Grace]: small Gemini backlog, but a long queue of rare material through Macaw. Interested in post-1923 content. [Martin]: investigating doing post-1923 material; interested in taking more risk in scanning more recent content.
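As a concrete illustration of the 856 linking Bess describes (not SIL's actual batch job), here is a short pymarc sketch that adds an 856 field pointing to BHL for records matched against a bib-number-to-URL mapping, such as one built from the IA report of bib numbers and barcodes. The field holding the bib number (907$a here) and the mapping itself are hypothetical and vary by ILS.

```python
from pymarc import MARCReader, MARCWriter, Field  # pymarc 4.x-style subfields below

# Hypothetical mapping from ILS bib numbers to BHL title URLs.
bhl_links = {"b1234567": "https://www.biodiversitylibrary.org/bibliography/12345"}

with open("records.mrc", "rb") as infile, open("records_with_856.mrc", "wb") as outfile:
    writer = MARCWriter(outfile)
    for record in MARCReader(infile):
        fields = record.get_fields("907")          # bib number location varies by ILS
        bib = fields[0]["a"].lstrip(".") if fields else None
        url = bhl_links.get(bib)
        if url:
            record.add_field(Field(
                tag="856",
                indicators=["4", "0"],
                subfields=["u", url, "z", "View in the Biodiversity Heritage Library"],
            ))
        writer.write(record)
    writer.close()
```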

CORNELL: have not done any IA scanning, but have done other scanning in-house as participants in the Microsoft scanning project (now defunct); need to figure out how to get that material into BHL. Have several other scanning projects, some of which are BHL relevant; contracted with Tribonux (sp?) in Montreal to send out scanning shipments. Have some scanning done in-house; some goes into existing collections, some goes into the local repository (eCommons). Have a lot of Google-scanned materials if appropriate to ingest. [Doug]: wouldn't some of the previously scanned IA material be in BHL? [Martin]: only found 7 titles. [Bianca]: explains ingest criteria. [Cornell]: perhaps incomplete metadata. [Suzanne]: parking lot! [Cornell]: interested in adding links to all scanned material; have been talking to several people about getting better material from Serials Solutions. Currently don't have any IA backlog. Are currently involved in the lawsuit against HathiTrust dealing with orphaned works. Interested in that type of content, but it carries risks.

FIELD [Christine]: we originally partnered with Urbana-Champaign, and have recently received funds from the African Council to scan African materials; shipment sent to Ft. Wayne. Trying to get rare materials involved. Works with Grace to ship materials to SMI to fulfill Gemini requests and gap fills. No in-house scanning done at the moment. Does manual catalogue linking, basically whatever she finds, to all held materials. Adds links when shipments are returned, as part of the QA process. Backlog of Gemini requests, most requiring special handling; doesn't think many will get done because of the African Council funding. Interested in post-1923 material, but no time to really get involved. QA is done by volunteers or students. Limited staff to help.

MOBOT [Doug and Michelle]: All of our scanning is done in-house through our Botanicus workflow. We have five in-house scanners who share time between scanning books and herbarium specimens; due to how funding and grants are working at the moment, more time will be spent on specimens for the foreseeable future, so book scanning may slow. QA/pagination is done in-house. Linkages are made to all held BHL materials, whether we've scanned them or not.

CAL ACAD [Becky]: IA scanning workflow: shipping to the IA station in San Francisco; don't have to pay for shipping, can just drive shipments to the facility. Haven't started doing QA; have just recently been getting shipments back, hope to start soon. Not doing much in-house scanning yet; all in-house scanning is done in relation to the Field Book project. Don't have facilities for oversize materials. Not yet doing regular linkages, but doing some randomly, when requested; very interested in getting links in. Currently processing IA returns; fairly up to date on Gemini requests. Would love to scan post-1923 content, but not sure if the legal department would be willing to greenlight it. Can consult with an on-site paralegal, but would need a 'fact sheet' on how/why we are scanning what we are scanning; thinks it would help persuade the legal department. [Martin]: interested in a work group to draft this type of document; Doug and Becky interested in being a part of this group, Joe also interested. Martin volunteers the SMI legal team.

BHL-E [Henning]: currently have 27 libraries or museums scanning content for BHL-E; an ongoing process, everyone scanning differently, most of them not at mass scale. Will continue into next year. Have 4-5 million pages across these institutions.

BHL-AU [Simon]: did a passable American accent; just starting the scanning process, in the middle of setting up the workflow, working with Joel R. to upload to IA; at the moment, using volunteers from the museum to do scanning work. Doing Australian journals from Australian scientific institutions, going directly to the institutions to get up-to-the-minute content.

=========END DISCUSSION=========================================

2:00 [30 min] BHL Tech updates | Lead: Mike Lichtenberg Notes: JJ Ford


Mike Lichtenberg provides a brief update on BHL technical development projects, especially regarding the proposed changes to the BHL user interface (UI)

I. MIKE: Tech Team's Recent Additions to the BHL Portal


II. MIKE: Tech Team is currently working on:

III. BIANCA: Q&A