Monday, August 10, 2009

Upgrade Notes

During my recent work on moving the Papyrological Navigator from Columbia to NYU, I ran into some issues that bear noting. It's a bit hard to know whether these are generalizable, but they seem to me to be good examples of the kinds of things that can happen when you're upgrading a complex system, and I don't want to forget about them.

Issue #1
Search results in the PN are supposed to return with KWIC snippets, highlighting the search terms. As part of the move, I upgraded Lucene to the latest release (2.4.1). The Lucene in the PN was 2.3.x, but the developer at Columbia had worked hard to eke as much indexing speed out of it as possible, and had imported code from the 2.4 branch, with some modifications. Since this code was really close to 2.4, I had reason to hope the upgrade would be smooth, and it mostly was. Highlighting wasn't working for Greek, though, even though the search itself was...

Debugging this was really hard because, as it turned out, there was no failure in any of the running code. It just wasn't running the right code. A couple of the slightly modified Lucene classes in the PN codebase were being stepped on by the new Lucene: instead of a jar named "ddbdp.jar", the new PN jars were named after the project in which they resided (so, "pn-ddbdp-indexers.jar"), and they were getting loaded after Lucene instead of before. Not the first time I'd seen this kind of problem, but it's always a bit baffling. In the end I moved the PN Lucene classes out of the way by changing their names and how they were called.

Issue #2

This one was utterly baffling as well. Lemmatized search (that is, searching for dictionary headwords and getting hits on all the forms of the word—very useful for inflected languages, like Greek) was working at Columbia, and not at NYU. Bizarre. I hadn't done anything to the code. Of course, it was my fault. It almost always is the programmer's fault.

A few months before, in response to a bug report (and before I started working for NYU), I had updated the transcoder software (which converts between various encodings for Ancient Greek) to conform to the recommended practice for choosing which precomposed (letter + accent) character to use when the same one (e.g. alpha + acute accent) occurs in both the Greek (Modern) and Greek Extended (Ancient) blocks in Unicode. Best practice is to choose the character from the Greek block, so \u03AC instead of \u1F71 for ά. Transcoder used to use the Greek Extended character, but since late 2008 it has followed the new recommendation and used characters from the Greek block, where available.

Unfortunately this change happened after transcoder had been used to build the lemma database that the PN uses to expand lemmatized queries. So it had the wrong characters in it, and a search for any lemma containing an acute accent would fail. Again, all the code was executing perfectly; some of the data was bad. It didn't help that when I pasted lemmas into Oxygen, it normalized the encoding, or I might have realized sooner that there were differences.
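To make the distinction concrete, here's a minimal sketch in Ruby (just for illustration; it assumes a Ruby recent enough to have String#unicode_normalize). The two code points render identically, but they only compare equal after normalization:

# Two code points for what looks like the same accented alpha:
greek          = "\u03AC" # GREEK SMALL LETTER ALPHA WITH TONOS (Greek block)
greek_extended = "\u1F71" # GREEK SMALL LETTER ALPHA WITH OXIA (Greek Extended block)

puts greek == greek_extended                         # => false: different code points
puts greek == greek_extended.unicode_normalize(:nfc) # => true: NFC maps U+1F71 to U+03AC

That second line is presumably the sort of normalization Oxygen was quietly doing when I pasted lemmas into it.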

Issue #3

Last, but not least, was a bug which manifested as a failure in certain types of search. "A followed by B within n places" searches worked, but "A and B (not in order) within n places" and "A but not B within n places" both failed. Again, no apparent errors in the PN code. The NullPointerException that was being thrown came from within the Lucene code! After a lot of messing about, I was able to determine that the failure was due to a Lucene API change that the PN code wasn't accounting for. Once I'd found that, all it took to fix it was to override a method from the Lucene code. This was actually a Lucene bug (https://issues.apache.org/jira/browse/LUCENE-1748), which I reported. In trying to maintain backward compatibility, they had kept compile-time compatibility with pre-2.4 code, but broken it at runtime. I have to say, I was really impressed with how fast the Lucene team, particularly Mark Miller, responded. The bug is already fixed.

So, lessons learned:


  1. Tests are good. I didn't have any runnable for the project that contained all of the bugs listed here. Tests exist (though coverage is spotty), but they have dependencies that are tricky to resolve, and I had decided to defer getting them to work in favor of getting the PN online. Not having tests ate into the time I'd saved by deferring them.

  2. In both cases #1 and #3, I had to find the problem by reading the code and stepping through it in my head. Practice this basic skill.

  3. Look for ways your architecture may have changed during the upgrade. Anything may be significant, including filenames.

  4. Greek character encoding is the Devil (but I already knew that).

  5. It's probably your fault, but it might not be. Look closely at API changes in libraries you upgrade. Go look at the source if anything looks fishy. I didn't expect to find anything wrong with something as robust as Lucene, but I did.

Friday, January 23, 2009

Endings and Beginnings

It's been that sort of a week. Great beginning with the inauguration on Tuesday and the start of a new Obama presidency. My wife was in tears. Growing up in a small southern town, she never imagined she'd see a black president, and now our youngest daughter will never know a world in which there hasn't been one. Sometimes things do change for the better.

On a personal note, I gave my notice to UNC on Tuesday. My position was partially funded with soft money, and one-time money is one of the primary ways they're trying to address the budget crisis, in order not to lay off permanent employees (as is right and proper). I'm rather sad about leaving, but I will be starting a job with the NYU digital library team in February, working on digital papyrology. This has the look of a job where I can unite both the Classics geek and the tech geek sides of my personality. I may become unbearable.

Wednesday, December 31, 2008

OpenLayers and Djatoka

For the last few weeks, I've been playing around with the new JPEG2000 image server released by the Los Alamos National Labs (http://african.lanl.gov/aDORe/projects/djatoka/). I never could get the image viewer released along with it to work, so I immediately thought of OpenLayers (http://openlayers.org/), a javascript API for embedding maps. OpenLayers is like Google Maps in many ways, but Free. It isn't limited to maps, either: it works very well with any large image, and many of the tools it provides, though developed for mapping, are just as useful for displaying and working with other kinds of images. I wanted to use OpenLayers' support for tiled images in conjunction with Djatoka's ability to render arbitrary sections of an image at a number of zoom levels (the number of levels available depends on how the image was compressed).
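For reference, Djatoka serves images through OpenURL requests against its resolver servlet; a request for a single region at a given zoom level looks roughly like the one below. I'm reproducing the parameters from memory, so treat the names (and especially the region syntax) as approximate and check the Djatoka documentation for the real thing; the image URL is a made-up example.

http://localhost:8080/adore-djatoka/resolver?url_ver=Z39.88-2004
    &rft_id=http://example.org/images/sample.jp2
    &svc_id=info:lanl-repo/svc/getRegion
    &svc_val_fmt=info:ofi/fmt:kev:mtx:jpeg2000
    &svc.format=image/jpeg
    &svc.level=3
    &svc.region=0,0,256,256

The job of an OpenLayers layer in front of this is essentially to map tile coordinates and zoom levels onto svc.region and svc.level values, and let OpenLayers handle the panning and zooming.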

After a lot of messing around and some false starts, I've developed a Javascript class that supports Djatoka's OpenURL API. I've been testing it on JPEG2000 images created with ContentDM in the UNC Library's digital collections, with a good deal of success. The results are not yet available online, because I don't have a public-facing server I can host it on, but the source code is up on github here.

Instructions:

  1. Install Djatoka. Incidentally, in order to get this in the queue for installation on our systems, I had to make Djatoka work on Tomcat 6. The binary doesn't work out of the box, but when I rebuilt it on my system (RHEL 5), it worked fine.

  2. Copy the adore-djatoka WAR into your Tomcat webapps directory. Follow the instructions on the Djatoka site to start the webapp.

  3. Grab a copy of OpenLayers. Put the OpenURL.js file in lib/OpenLayers/Layer/ and run the build.py script.

  4. To just run the demo, copy djatoka.html, the OpenLayers.js you just built, and the .css files from OpenLayers/theme/ and from the examples/ directory, as well as the OpenLayers control images from OpenLayers/img, into the adore-djatoka directory in webapps. You should then be able to access the djatoka.html file and see the demo.

This all comes with no guarantees, of course. It seems to work quite well with the JPEG2000 images I've tested, and the tiling means that each request to Djatoka consumes about the same amount of resources. I've run into OutOfMemoryErrors when requesting full-size images, but this method loads them without any problem.

Update (2009-01-05 14:37): I've posted a fix to the OpenURL.js script for a bug pointed out to me by John Fereira on the djatoka-devel list. If you grabbed a copy before now, you should update.

Update: screenshots --

Wednesday, October 29, 2008

Thoughts on crosswalking

For the second Integrating Digital Papyrology project, we need to develop a method for crosswalking between EpiDoc (which is a dialect of TEI) and various database formats. We've thought about this quite a bit in the past, and we think that we don't just want to write a one-off conversion, because (a) there will be more than one such conversion and (b) we want to be able to document the mappings between data sources in a stable format that isn't just code (script, XSLT, etc.).

Some of the requirements for this notional tool are:


  • It should document mappings between data formats in a declarative fashion.

  • Certain fields will require complex transformations. For example, the document text will likely be encoded in some variant of Leiden in the database, and will need to be converted to EpiDoc XML. This is currently accomplished by a fairly complex Python script, so it should be possible to define categories of transformation which would signal a call to an external process.

  • Some mappings will involve the combination of database fields into a single EpiDoc element, and others, the division of a single field into multiple EpiDoc elements.

  • Context-specific information (not included in the database) will need to be inserted into the EpiDoc document, so some sort of templating mechanism should be supported.

  • The mapping should be bidirectional. We aren't just talking about exporting from a database to EpiDoc, but also about importing from EpiDoc, which is envisioned as an interchange format as well as a publication format. This is why a single mapping document, rather than a set of instructions on how to get from one format to the other, would be nice.


So far, my questions to various lists have turned up favorable responses (i.e. "yes, that would be a good thing") but no existing standards....
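To make the idea a little more concrete, here's a purely hypothetical sketch of what such a mapping might look like if it were written down as a Ruby data structure rather than as a purpose-built format. The field names, XPaths, and script names are all invented for illustration; the point is just that the mapping is data, readable in either direction, not code.

# Hypothetical sketch only: each entry maps a database field to a location in
# the EpiDoc document. Because the mapping is data rather than code, the same
# document can drive both export and import, and can be documented on its own.
MAPPINGS = [
  { "db_field" => "inventory_no", "epidoc" => "//idno[@type='invNo']" },
  { "db_field" => "provenance",   "epidoc" => "//origPlace" },
  # complex transformations delegate to an external process, one per direction
  { "db_field" => "text_leiden",  "epidoc" => "//div[@type='edition']",
    "to_epidoc" => "leiden2epidoc.py", "from_epidoc" => "epidoc2leiden.py" }
]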

Monday, October 20, 2008

On Bamboo the 2nd

I spent Thursday through Saturday last week at the second Bamboo workshop in San Francisco. So, some reactions:

1) The organizers are well-intentioned and are sincerely trying to wrestle with the problem of cyberinfrastructure for Digital Humanities.

2) That said, it isn't clear that the Bamboo approach is workable. The team is very IT focused, and while they seem to have a solid grasp of large-scale software architecture, the ways in which that might be applied to the Humanities with any success aren't obvious. There was a lot of misdirected effort between B1 and B2 by some very smart people, who I must say had the good grace to admit it was a nonstarter. Their attempt to factor the practices of scholars into implementable activities resulted in something that lacked enough context and specificity to be useful. A refocusing on context and on the processes that contain and help define the activities happened at the workshop and seems likely to go forward.

3) The workshops themselves seem to have been quite useful. I wasn't at any of the round one workshops, and I doubt I'll be at any of the others (I represented the UNC Library because the usual candidates weren't available), but everyone I talked to was very engaged (if often skeptical). The connections and discussion that seem to have emerged so far probably make the investment worthwhile, even if "Bamboo" as conceived doesn't work.

4) The best idea I heard came (not surprisingly) from Martin Mueller, who suggested Bamboo become a way to focus Mellon funding on projects that conform to certain criteria (such as reusable components and standards) for a defined period (say five years). The actual outcome of the current Bamboo would be the criteria for the RFP. Simple, encourages institutions to think along the right lines, might actually do some good, and might allow participation by smaller groups as well.

5) There was a lot of talk about the people who are both researchers and technologists (guilty). These were variously defined as "hybrids," "translators," and, most offensively, "the white stuff inside the Oreo." None of this was meant to be offensive, but in the end, it is. People who can operate comfortably in both the worlds of scholarship and IT can certainly be useful go-betweens for those who can't, but that is not our sole raison d'être. Until recently there haven't been many jobs for us, but that seems to be changing, and I hope it continues to. See Lisa Spiro's excellent recent post on Digital Humanities Jobs, and Sean Gillies, who, without having been there, manages to capture some of the reservations I feel about the current enterprise and pick up on the educational aspect. One possible useful future for Bamboo would be simply to foster the development of more "hybrids."

6) The Bamboo folks have set themselves a truly difficult task. They are making a real effort to tackle it in an open way, and should be commended for it. But it is a very hard problem, and one for which there is still not a clear definition. The software engineer part of my hybrid brain wants problems defined before it will even consider solutions. The classicist part believes some things are just hard, and you can't expect technology to make them easy for you.

Sunday, September 28, 2008

Go Zotero!

The Thomson Reuters lawsuit against the developers of Zotero is getting a lot of notice, which is good.

I've noticed that in the library world, when people mention getting sued, it's with fear and the implication that this represents the end of the world. It's an interesting contrast coming from working for a startup (albeit a pretty well-funded one), where lawsuits == (a) publicity, and are not to be feared (perhaps even to be provoked), and/or (b) a signal that you've scared your competitors enough to make them go running to Daddy, thus unequivocally validating your business model.

This is an act of sheer desperation on the part of Thomson Reuters. They're hoping GMU will crumble and shut the project down. I do hope Dan has contacted the EFF (donate!) and that the GMU administration will take this for what it is: fantastic publicity for one of their most important departments and an indicator that they are doing something truly great.

Friday, August 15, 2008

Back from Balisage

I never made it to Extreme, Balisage's predecessor, despite wanting to very badly, so I'm very glad I did go to its new incarnation. I'm still processing the week's very rich diet of information, but it was very, very cool.

Simon St. Laurent, who wrote one of the first XML books I bought back in 1999, Inside XML DTDs, has a photo of one of the slides from my presentation in his Balisage roundup post. This is the kind of κλέος I can appreciate!

Thursday, August 14, 2008

Balisage Presentation online

I just rsynced up my presentation on linking manuscript images to transcriptions using SVG, which I gave at Balisage this morning. It's at http://www.unc.edu/~hcayless/img2xml/presentation.html. The image viewer embedded into the presentation is at http://www.unc.edu/~hcayless/img2xml/viewer.html. Text paths are still busted at the highest resolution, as you'll see if you zoom all the way in, but apart from that it seems to work.

Balisage has been a really great conference so far. I highly recommend it.

Saturday, May 31, 2008

New TransCoder release

This is something I've been meaning to wrap up and write up for a while now: thanks to the Duke Integrating Digital Papyrology grant from the Andrew W. Mellon Foundation, I've been able to make a bunch of updates to the Transcoder, a piece of software I originally wrote for the EpiDoc project. Transcoder is a Java program that handles switching the encodings of Greek text, for example from Beta Code to Unicode (or back again). It's used in initiatives like Perseus and Demos. I've been modifying it to work with Duke Databank of Documentary Papyri XML files (which are TEI based). Besides a variety of bug fixes, Transcoder now also includes a fully functional SAX ContentHandler that allows the Greek text in XML files to be transcoded as the files are processed.

There are a lot of complex edge cases in this sort of work. For example, Beta Code (or at least the DDbDP's Beta) doesn't distinguish between medial (σ) and final (ς) sigmas. That's an easy conversion in the abstract (just look for 's' at the end of a word, and it's final), but when your text is embedded in XML, and there may be an expansion (<expan>) tag in the middle of a word, for example, it becomes a lot harder. You can't just convert the contents of a particular element--you have to be able to look ahead. The problem with SAX, of course, is that it's stream-based, so no lookahead is possible unless you do some buffering. In the end what I did was buffer SAX events when an element (say a paragraph) marked as Greek begins, and keep track of all the text therein. That let me do the lookahead I needed to do, since I have a buffer containing the whole textual content of the <p> tag. When the end of the element comes, I then flush the buffer, and all the queued-up SAX events fire, with the transcoded text in them.
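Just to show how trivial the rule is when nothing interrupts the word, here's a toy version of the sigma logic in Ruby, operating on plain text only (the real Transcoder does this in Java, inside the SAX handler described above):

# Pretend every sigma has already been transcoded as medial (σ); any σ that
# isn't followed by another letter must really be final (ς).
def fix_final_sigmas(text)
  text.gsub(/σ(?![[:alpha:]])/, "ς")
end

puts fix_final_sigmas("σοφοσ ανθρωποσ") # => "σοφος ανθρωπος"

The whole difficulty in the XML case is that "the end of the word" may be several SAX events away.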

That's a lot of work for one letter, but I'm happy to say that it functions well now, and is being used to process the whole DDbDP. Another edge case that I chose not to solve in the Transcoder program is the problem of apparatus entries and their contents in TEI. An <app> element can contain a <lem> (lemma) and one or more <rdg> (readings). The problem is that the lemma and readings are conceptually parallel in the text. For example:

The quick brown <lem>fox</lem> jumped over the lazy dog.
                <rdg>cat</rdg>


The TEI would be:

The quick brown <app><lem>fox</lem><rdg>cat</rdg></app> jumped over the lazy dog

So "cat" follows immediately after "fox" in the text stream, but both words occupy the same space as far as the markup is concerned. In other words, I couldn't rely only on my fancy new lookahead scheme, because it broke down in edge cases like this. The solution I went with is dumb, but effective: format the apparatus so that there is a newline after the lemma (and the reading, if there are multiple readings). That way my code will still be able to figure out what's going on. The whitespace so introduced really needs to be flagged as significant, so that it doesn't get clobbered by other XML processes though. That has already happened to us once. It caused a bug for me too, because I wasn't buffering ignorable whitespace.

All that trouble over one little letter. Lunate sigmas would have made life so much easier...

Sunday, March 16, 2008

D·M·S· Allen Ross Scaife 1960-2008

On Saturday afternoon, March 15th, I learned that my friend Ross had died that morning after a long and hard-fought struggle with cancer. He was at his home in Lexington, Kentucky, surrounded by his family.

Ross was one of the giants of the Digital Classics community. He was the guiding force behind the Stoa, and the founder of many of its projects. Ross was always generous with his time and resources and has been responsible for incubating many fledgling Digital Humanities initiatives. His loss leaves a gap that will be impossible to fill.

Ross was also a good friend, easy to talk to, and always ready to encourage me to experiment with new ideas. I miss him very much.

What he began will continue without him, and though we cannot ever replace Ross, we can honour his memory by carrying on his good work.

update (March 21, 21:04)

Dot posted a lovely obituary of Ross at the Stoa. Tom and several others have posted nice memorials as well.

On a happier note: my daughter, Caroline Emma Ross Cayless was born at 11:52 pm, March 19th.

Wednesday, January 23, 2008

Catching up

My New Year's resolution was to write more, and specifically to blog more, but so far all of my writing has been internally focussed at my job.  So I shall have another go...

Speaking of New Year's, I spent a chunk of New Year's Eve getting The Colonial and State Records of North Carolina off the ground.  It's driven by the eXist XML database, of which I've grown rather fond.  XQuery has a lot of promise as a tool for digital humanists with large collections of XML.

Monday, October 22, 2007

I've been at the Chicago Colloquium on Digital Humanities and Computer Science since yesterday, presenting on the Colonial and State Records project (available soon at http://docsouth.unc.edu).

Interesting themes that have emerged:
  • The importance of Not Reading, i.e. how to use computational tools to investigate textual spaces when there is more text than you can digest by reading cover-to-cover.
  • Going beyond search: Discovery is an important task, but it's one we do quite well now; how do we go beyond just finding stuff and start to explore the data spaces that digital methods make available? Visualization tools are going to be an important component of this exploration. Digitization and search haven't changed the nature of research. They have improved the speed with which research is done (nobody spends years producing concordances anymore), but they haven't changed the questions we ask.
  • The dawn of Eurasian scholarship (this from Lewis Lancaster's talk): the divide between Occidental and Oriental scholarship no longer makes any sense (well, it never really did) and is probably over.

Monday, May 21, 2007

Note to job seekers

When applying for a programming job, listing Dreamweaver as a skill is an automatic 50 demerits.

Thursday, March 01, 2007

I'm going to be a digital librarian!

As of March 15th, I will be working for the UNC Library as a digital library programmer. I'm going to miss Lulu a lot. It's been a wonderful environment to work in, with people I'm going to find hard to leave. But working with collections like Documenting the American South is a text geek's Nirvana, so it was far too good an opportunity to pass up...

Tuesday, February 06, 2007

How has Ruby blown your mind?

...asks Pat Eyler

I had the opportunity to learn Ruby as part of a work project last year and was immediately impressed by its object-orientation, its use of blocks, the straightforward way it handles multiple inheritance with modules, and just the elegance and speed with which I could work in it. The moment that really changed the way I saw the language came when I had to generate previews of Word and OpenDocument (ODT) documents uploaded to the site I was working on. Converting Word to ODT seemed like the way to go, since ODT has a zipped XML format, and can therefore be transformed to XHTML. I have a lot of experience using XSLT to transform XML from one vocabulary to another, so this seemed like well explored territory to me, even if it would take a fair amount of work to accomplish. As usual, I did some web-trolling to see who had dealt with this issue before me, in case the problem was already solved. Google pointed me at J. David Eisenberg's ruby_odt_to_xhtml, which looked like a good start. It didn't do everything I wanted (in particular, it didn't handle footnotes adequately), but I didn't expect it would be too hard to modify. The surprises came when I looked at the code...

The first surprise was the utter lack of XSLT. Not a huge surprise, perhaps. I'd already gathered that Rubyists viewed XML with a somewhat jaundiced eye. Tim Bray has lamented the state of XML support in Ruby as well. Tim is quite right about the relative weakness of XML support in Ruby, even though I absolutely agree with the practice of avoiding XML configuration files. There is a perfectly good Ruby frontend to libxslt, however, so its use is not out of the question. But there it was: for whatever reason, the author had decided not to use the technology I was familiar with...why would he do that, and could I still use his tool?

The mind expansion came about when I started figuring out how to extend odt_to_xhtml to handle notes, which it was basically ignoring. I wanted to turn ODT footnotes into endnotes with named anchors at the bottom of the page, links in the text to the corresponding anchor, and backlinks from the note to its link in the text. Before describing what I found, I should give a little background on XSLT:

At its most basic, XSLT expects input in the form of an XML document, and produces either XML or text output. In XSLT, the functions are called templates. Templates respond either to calls (as do functions in most languages) or, more often, to matches on the input XML document. So a template like


<xsl:template match="text:p">
  <p><xsl:apply-templates/></p>
</xsl:template>


would be triggered every time a paragraph element in an OpenDocument content.xml is encountered and would output a <p> tag, yield to any other matching templates, and then close the <p> tag.

As I looked at JDE's code, I saw lots of methods like this:


def process_text_list_item( element, output_node )
  style_name = register_style( element )
  item = emit_element( output_node, "li", {"class" => style_name} )
  process_children( element, item )
end


emit_element does what it sounds like it does: it adds a child element, with a hash of attribute name/value pairs, to the output element passed into the method. It's process_children that really interests me:


# Process an element's children
# node: the context node
# output_node: the node to which to add the children
# xpath_expr: which children to process (default is all)
#
# Algorithm:
# If the node is a text node, output to the destination.
# If it's an element, munge its name into
# <tt>process_prefix_elementname</tt>. If that
# method exists, call it to handle the element. Otherwise,
# process this node's children recursively.
#
def process_children( node, output_node, xpath_expr="node()" )
  REXML::XPath.each( node, xpath_expr ) do |item|
    if (item.kind_of?(REXML::Element)) then
      str = "process_" + @namespace_urn[item.namespace] + "_" + item.name.tr_s(":-", "__")
      if ODT_to_XHTML.method_defined?( str ) then
        self.send( str, item, output_node )
      else
        process_children(item, output_node)
      end
    elsif (item.kind_of?(REXML::Text) && !item.value.match(/^\s*$/))
      output_node.add_text(item.value)
    end
  end
  #
  # If it's empty, add a null string to force a begin and end
  # tag to be generated
  if (!output_node.has_elements? && !output_node.has_text?) then
    output_node.add_text("")
  end
end


Mind expansion ensued. This Ruby class was doing exactly the same thing that I'd expect an XSLT stylesheet to do, with the help of a few lines of code to keep it going! process_text_list_item is a template! Coming from Java and then PHP, I'd have no hesitation switching to XSLT to accomplish a bit of XML processing like this, but in Ruby, there really wasn't any need. I could write XSLT-like code perfectly naturally without ever leaving Ruby!

Now, I still like XSLT, and I'd still use it in many cases like this, because it's portable across different languages and platforms. But here, where there are other considerations, it's wonderful that I'm not forced to step outside the language I'm working in to accomplish what I want. In order to extend the code to handle notes, I just added some new template-like methods to match on notes and note-citations, e.g.:


def process_text_note( element, output_node )
  process_children(element, output_node, "#{@text_ns}:note-citation")
end


In OpenDocument, notes are inline structures. The note is embedded within the text at the point where the citation occurs, so to create endnotes, you need to split the note into a citation link and a note that is placed at the end of the output document. To add the endnotes, I borrowed a trick from XSLT: modes. If an XSL template has a mode="something" attribute, then that template will not match on an input node unless it was dispatched with an <xsl:apply-templates mode="something"/>. So I did the same thing, e.g.:


def process_text_note_mode_endnote( element, output_node )
  p = emit_element(output_node, "p", {"class" => "footnote"})
  process_children(element, p, "#{@text_ns}:note-citation", "endnote")
  process_text_s(element, p)
  process_children(element, p, "#{@text_ns}:note-body/#{@text_ns}:p[1]/node()")
  process_children(element, p, "#{@text_ns}:note-body/#{@text_ns}:p[1]/following-sibling::*")
end


The method that controls the processing flow in JDE's code is called analyze_content_xml. I just added a call to my moded methods in analyze_content_xml and modified process_children to take a mode parameter.


def process_children( node, output_node, xpath_expr="node()", mode=nil )
  if xpath_expr.nil?
    xpath_expr = "node()"
  end
  REXML::XPath.each( node, xpath_expr ) do |item|
    if (item.kind_of?(REXML::Element)) then
      str = "process_" + @namespace_urn[item.namespace] + "_" + item.name.tr_s(":-", "__")
      if mode
        str += "_mode_#{mode}"
      end
      if ODT_to_XHTML.method_defined?( str ) then
        self.send( str, item, output_node )
      else
        process_children(item, output_node)
      end
    elsif (item.kind_of?(REXML::Text) && !item.value.match(/^\s*$/))
      output_node.add_text(item.value)
    end
  end
  #
  # If it's empty, add a null string to force a begin and end
  # tag to be generated
  if (!output_node.has_elements? && !output_node.has_text?) then
    output_node.add_text("")
  end
end
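For what it's worth, the call added to analyze_content_xml amounts to something like the following; the variable names are approximations rather than a quotation from the real code:

# after the body text has been processed, sweep the document for notes and
# emit them again at the bottom, this time in "endnote" mode
endnotes = emit_element( body, "div", {"class" => "endnotes"} )
process_children( content_root, endnotes, "//#{@text_ns}:note", "endnote" )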

Done. Easy. Blew my mind.

Saturday, January 20, 2007

Prototype grows up

http://prototypejs.org is the new site for Prototype 1.5. As the Ajaxian blog noted: Now with Documentation! Of course, Prototype always had some documentation; quite good documentation at that, even though there were substantial pieces missing and you had to go digging sometimes.

Prototype played a big part in reawakening my interest in Javascript as a programming language. I was rather anti-Javascript for a while, having fought many bloody battles with cross-browser incompatibilities in the early 2000's (UNC Chapel Hill, my then employer, had standardized, somewhat foolishly, on Netscape 4.7, but of course we had to support IE too -- nightmare). I got back into it seriously when I started to notice all of the AJAXy and Web 2.0-ish stuff going on. I've learned a lot from digging around in the Prototype source code, so the spotty documentation actually did me some good.

Kudos to the Prototype development team and the contributors to the documentation effort. You've done us all a great service! I look forward to using 1.5...

Thursday, August 24, 2006

XSL-FO 2.0 Workshop 2006

International Workshop on the future of the Extensible Stylesheet Language (XSL-FO) Version 2.0

I have two suggestions:
  • use CSS instead of weird attribute-based style declarations.

  • for God's sake, have a reference implementation.

Wednesday, August 02, 2006

Boycott Blackboard!

I knew Blackboard had a patent application for their LMS, but apparently it has been granted, and their first act was to file a lawsuit against one of their competitors. This is terrible on many levels, not least that such a stupid patent should never have been granted. Of course, the USPTO would probably let me patent my nose hair.

I certainly won't be using Blackboard for my XML class in the Fall, and I'd encourage other instructors to drop it too.

Friday, October 21, 2005

Google Library

I just read John Battelle's post on the AAP's lawsuit. The comments are particularly interesting, with a couple of very strident ones criticising Google. I have a theory about how Google plans to justify their actions:
  1. Libraries are allowed, under copyright law, to make a single copy of any work in their possession. This is called the Library Exemption. There is a nice outline of the terms here. The libraries themselves can't get in trouble for contracting with Google to do this for them, because they are receiving no commercial advantage from it. Google clearly is receiving a competitive advantage from it, BUT:
  2. They may be able to make a good case for Fair Use, depending on the nature of what they keep from the book. There are four aspects to be weighed in any Fair Use defense (see Wikipedia):
    1. the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;
    2. the nature of the copyrighted work;
    3. the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and
    4. the effect of the use upon the potential market for or value of the copyrighted work.
Clearly Google hopes for commercial advantage from the use of the scanned books, so they might fail the first test. The second doesn't really apply: these are clearly books subject fully to copyright law. It's the third and fourth aspects that I think are the center of Google's defense. A copy is a copy, but a searchable index created from a scanned copy is arguably a transformative use of the book. A human being can neither read the index, nor reconstruct the original from it, so Google may be able to successfully defend themselves on aspect #3. Their main weakness is the existence of page images from the original scan. These may or may not be stored and accessible in such a way that a whole copy of the original could be reconstructed and read. Aspect #4 is another winner for Google. The clear effect of this system will be to sell more copies of the publishers' books. The only (theoretical) commercial harm caused to the publishers is that they are effectively prevented from rolling a Google Print of their own, which might bring them in more money than simply selling their books. So Google wins on at least two of the four counts, and the act of copying itself is protected under the Library Exemption.

I suspect the AAP would have an uphill battle in winning this one. I wouldn't be surprised if they wanted Google to license their books for their index at some fairly exorbitant rate, and Google refused to pay because they're doing the publishers a favor. That would make the lawsuit a negotiating tactic.

Thursday, April 28, 2005

When SEOs Attack

Search Engine Foo: iUniverse Book Publishing: Book Publisher for Self Publishing and Print on Demand. Care to guess what terms they're optimizing for? It does seem to work. They show up #1 for a Google search on "self publishing." So clearly this sort of spamming works. But it leads to pretty hilarious prose:

iUniverse, the leading online book publisher, offers the most comprehensive book publishing services in the self-publishing industry—awarded the Editor's Choice award by PC Magazine and chosen by thousands of satisfied authors as the leading print-on-demand book publisher.

We help authors to prepare a manuscript, design and self-publish a book of professional quality, publicize and market their book, and print copies of their book for sale online and in bookstores around the world.

As an innovative book publisher, we also offer exclusive services such as our acclaimed Editorial Review and our revolutionary Star Program, designed to discover and nurture exceptional new talent within our growing author community.

Don't wait any longer to get that manuscript off your desk and into the marketplace. With iUniverse as your book publisher, you can become a published author in a matter of weeks. Why not get started today?


Yes, indeed. Publish your book with a publishing publisher and be published. Ouch. Not sure I'd pay them an exorbitant fee to edit my book.