The Difference Between AACR, LOC, DDC, FRBR, CCO, MARC and RDA, a blog post by Suzie Pocatligo at her blog How to Catalog a Hiccup. (“How to Catalogue a Hiccough”?) It does what it says on the tin.
James Weinheimer’s post Tim Berners-Lee on the Semantic Web kicked off a very interesting discussion on the ngc4lib mailing list about the FRBR user tasks, the Semantic Web, the RDA vocabulary that will let people use RDA on the Semantic Web, and more. Check out the archives to read it and don’t miss where Shawne Miksa starts a new thread on User Tasks–Outdated? Why?
Jonathan Rochkind, Karen Coyle, Diane Hillmann, Eric Lease Morgan are all there, so you know it’s going to be informed, opinionated, lively, intelligent, and just the kind of thing where you wish everyone was sitting together over a drink.
If you’re not on ngc4lib, consider joining.
Yeah. I know.
Bibliographic Control Alphabet Soup: AACR to RDA and Evolution of MARC is a 90-minute “webinar” (I hate that word) on Wednesday 14 October 2009. It starts at 1:30 pm Eastern (17:30 UTC). I thought it was free, but it costs about $100, so there you go. If you can’t make it Wednesday you can still pay to watch it later.
Barbara Tillett is talking about AACR2, RDA, VIAF, and linked data; William Moen is talking about his research into MARC usage; and Diane Hillmann is talking about RDA elements and vocabularies and gets a bit FRBRy:
As the lynchpin of hugely successful efforts by libraries to provide information on their holdings to both local and remote users, MARC has had an illustrious presence. However, the format is beginning to fail libraries as many of our partners and potential partners in a greatly enriched information ecosystem do not “speak” MARC but handle their data in very different ways. RDA elements and vocabularies represent the distillation of library descriptive knowledge, optimized for use within an environment that speaks XML, RDF, and linked data, and which seek to express that knowledge in an FRBR-aware manner. This webinar will provide a brief overview of RDA elements and vocabularies.
There are two things that continue to make OpenLibrary less than useful for me: 1. lack of links to source materials outside of Internet Archive (e.g. Google) and 2. the tremendous amount of unreconciled duplicate data in the OL archive.
Given the home-grown database management system used by OpenLibrary, it seems to me that the best way to solve problem #2 will probably be to FRBRize the existing data. It’s been a long time since I’ve seen any update on this effort; how’s it going?
There was a short thread following with some back and forth. Karen Coyle replied:
Lee, FRBR-ization is in test at the moment. The hardest part is figuring out a good user experience when only a few items have multiple editions. But that is in progress.
Yes, there are duplicates. Those will be removed by re-running the duplicate detection on the database, and that will be more efficient once the Works are gathered together because that pinpoints a lot of the duplicates. The algorithm will only take us so far, however, so there are plans to provide support for merging of items and authors by users. It all has to be coordinated with re-directing from previously used IDs so that no linking is broken.
I’m not sure about your #1 — there are links to Google on the Edition pages when a Google item is detected, and the link states whether it is a snippet, full view, or no view. http://openlibrary.org/b/OL2873790M/Raintree-County [with ISBN] http://openlibrary.org/b/OL6026352M/Raintree-County [without ISBN]
Is this not what you are looking for?
So, can you give us any more details, or is this something you will simply present to us when it’s completed to your satisfaction? For me, personally, the web interface is irrelevant; I’m much more interested in what the data will look like when it is retrieved via an API, and that can be exposed long before you have figured out how to present a “good user experience.”
Sorry, I have no idea how it will work in the APIs, and don’t know if that’s been worked out yet. I’ll try to get an answer for that, but it’s possible that it isn’t known yet. We’re still trying to figure out which data elements will be resident on the Work record/template and which on the edition (Manifestation) template. Just to give you an idea of the progress. And in terms of merging manifestations into works, we’re using author, title and uniform title when it is available. The first pass will have errors, and there will need to be a way to allow users to merge and unmerge as a way to correct those.
The database will consist of Works and Manifestations (called ‘editions’) — it will not be just display, but two linked bibliographic ‘levels’. There isn’t enough information in the bibliographic data to provide accurate expressions, but it should be possible to at least provide a separate view of different languages. (Not to mention that there isn’t a lot of agreement in the bibliographic world about where to divide expressions and works… but that’s a whole different conversation.)
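To make the clustering Coyle describes concrete, here is a toy sketch of grouping manifestation (“edition”) records into Work clusters on a normalized author-plus-title key, preferring a uniform title when one exists. This is only an illustration of the general technique; the record structure and field names here are hypothetical, not OpenLibrary’s actual schema or code.

```python
import re
from collections import defaultdict

def normalize(s):
    """Lowercase, strip punctuation, and collapse whitespace."""
    return re.sub(r"\s+", " ", re.sub(r"[^\w\s]", "", s.lower())).strip()

def cluster_editions(editions):
    """Group edition dicts into work clusters keyed on normalized
    (author, title), preferring a uniform title when present --
    the author/title/uniform-title matching described above."""
    works = defaultdict(list)
    for ed in editions:
        title = ed.get("uniform_title") or ed["title"]
        key = (normalize(ed["author"]), normalize(title))
        works[key].append(ed)
    return works

editions = [
    {"author": "Ross Lockridge", "title": "Raintree County"},
    {"author": "Ross Lockridge", "title": "Raintree county."},
    {"author": "Herman Melville", "title": "The Whale",
     "uniform_title": "Moby-Dick"},
]
clusters = cluster_editions(editions)
# The two Raintree County records land in one cluster, which is why
# gathering Works first "pinpoints a lot of the duplicates" -- and also
# why a first pass like this makes errors that users would need to
# correct by merging and unmerging.
```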
Browse the thread to get the full exchange, though it all wrapped up a month ago.
Passing this on from Kelley McGrath (co-author of Identifying FRBR Work-Level Data in MARC Bibliographic Records for Manifestations of Moving Images in Code4Lib 5). Get in touch with her if you’re interested in this project.
OLAC (Online Audiovisual Catalogers) has been investigating ideas for improving access to moving image works for some time (see http://www.olacinc.org/drupal/?q=node/27). I am hoping to apply for a grant for a demonstration project. I have put together an overview of what I hope can be done at http://ilocker.bsu.edu/users/kmcgrath/world_shared.
The basic goals I see for this experiment at this point are:
1. Import and convert current MARC manifestation-level bibliographic records into FRBR-based records, including both work/primary expression records (similar to an IMDb record) and records with expression- and manifestation-level limiters linked to items in libraries or archives to help users find the particular item(s) that meet their needs.
2. Create an end-user interface for searching, browsing, and obtaining moving image materials that leverages facets based on structured data to improve browsing and navigation.
3. Develop a back-end maintenance module that supports adding new records, editing and merging existing records, and deleting records in single record and batch mode.
I am looking for people who would be interested in brainstorming and fleshing out the details of a grant proposal and possibly being involved in an eventual grant.
If you would like to participate in this discussion, please send me an email at firstname.lastname@example.org by Monday, October 12, stating why you’re interested in this project and what you think you could contribute. We need all sorts of perspectives, including cataloging/metadata, database design, web design, user needs assessment, user interface construction, budgeting, etc., etc. Or even if you’re just interested in taking part in the discussion.
Cataloging and Metadata Services Librarian (A/V)
Ball State University
Muncie, IN 47306
“Twisty Little Passages Not So Much Alike: Applying the FRBR Model to a Classic Computer Game” was presented by Matthew Kirschenbaum, Doug Reside, Neil Fraistat, Jerome McDonough, and Dennis Jerz at Digital Humanities 2009 in June. (The classic computer game is Adventure.)
The conference program is only available as a humongous 52 MB PDF and isn’t on the readable web, so to read the full abstract of the paper you’ll have to download it and look on page A22. I can’t even easily copy and paste a sample paragraph, I’m afraid, so you’re on your own.
(Thanks to Kevin Hawkins for telling me about this.)