
DROID 6.2

The UK National Archives quietly released DROID 6.2 this month. I noticed only because of some mentions on Twitter. The file dates indicate the update was released on February 16. Here’s the new portion of the changelog:

File identification tools, part 10: Siegfried

“Do we really need another PRONOM-based file format identification tool?” That’s what Richard Lehane asked last year on the Open Preservation Foundation blog. The question was rhetorical, of course, since he’d already gone ahead and done just that with a new tool called Siegfried. Siegfried recently turned up in some tweets by Ross Spencer, so it’s worth a mention here.

File identification tools, part 9: JHOVE2

The story of JHOVE2 is a rather sad one, but I need to include it in this series. As the name suggests, it was supposed to be the next generation of JHOVE. Stephen Abrams, the creator of JHOVE (I only implemented the code), was still at Harvard, and so was I. I would have enjoyed working on it, getting things right that the first version got wrong. However, Stephen accepted a position with the California Digital Library (CDL), and that put an end to Harvard’s participation in the project. I thought about applying for a position in California but decided I didn’t want to move west. I was on the advisory board but didn’t really do much, and I had no involvement in the programming. I’m not saying I could have written JHOVE2 better, just explaining my relationship to the project.

The institutions that did work on it were CDL, Portico, and Stanford University. There were two problems with the project. The big one was insufficient funding; the money ran out before JHOVE2 could boast a set of modules comparable to JHOVE. A secondary problem was usability. It’s complex and difficult to work with. I think if I’d been working on the project, I could have helped to mitigate this. I did, after all, add a GUI to JHOVE when Stephen wasn’t looking.

JHOVE has some problems that needed fixing. It quits its analysis on the first error. It’s unforgiving on identification; a TIFF file with a validation error simply isn’t a TIFF file, as far as it’s concerned. Its architecture doesn’t readily accommodate multi-file documents. It deals with embedded formats only on a special-case basis (e.g., Exif metadata in non-TIFF files). Its profile identification is an afterthought. JHOVE2 provided better ways to deal with these issues. The developers wrote it from scratch, and it didn’t aim for any kind of compatibility with JHOVE.

File identification tools, part 7: Apache Tika

Apache Tika is a Java-based open source toolkit for identifying files and extracting metadata and text content. I don’t have much personal experience with it, apart from having used it with FITS. The Apache Software Foundation actively maintains it, and version 1.9 just came out on June 23, 2015. It can identify a wide range of formats and report metadata from a smaller but still impressive set. You can use Tika as a command line utility, a GUI application, or a Java library. You can find its source code on GitHub, or you can get its many components from the Maven Repository.

Tika isn’t designed to validate files. If it encounters a broken file, it won’t tell you much about how it violates the format’s expectations.

Originally it was a subproject of Lucene; it became a standalone project in 2010. It builds on existing parser libraries for various formats where possible. For some formats it uses its own libraries because nothing suitable was available. In most cases it relies on signatures or “magic numbers” to identify formats. While it identifies lots of formats, it doesn’t distinguish variants in as much detail as some other tools, such as DROID. Andy Jackson has written a document that sheds light on the comparative strengths of Tika and DROID. Developers can add their own plugins for unsupported formats. Solr and Lucene have built-in Tika integration.

Prior to version 1.9, Tika didn’t have support for batch processing. Version 1.9 has a tika-batch module, which is described in the change notes as “experimental.”

The book Tika in Action is available as an e-book (apparently DRM free, though it doesn’t specifically say so) or in a print edition. Anyone interested in using its API or building it should look at the detailed tutorial on tutorialspoint.com. The Tika facade serves basic uses of the API; more adventurous programmers can use the lower-level classes.

Next: NLNZ Metadata Extraction Tool. To read this series from the beginning, start here.

File identification tools, part 6: FITS

FITS is the File Information Tool Set, a “Swiss army knife” aggregating results from several file identification tools. The Harvard University Libraries created it, and though it was technically open-source from the beginning, it wasn’t very convenient for anyone outside Harvard at first. Other institutions showed interest, its code base moved from Google Code to GitHub, and now it’s used by a number of digital repositories to identify and validate ingested documents. Don’t confuse it with the FITS (Flexible Image Transport System) data format.

It’s a Java-based application requiring Java 7 or higher. Documentation is found on Harvard’s website. It wraps Apache Tika, DROID, ExifTool, FFIdent, JHOVE, the National Library of New Zealand Metadata Extractor, and four Harvard native tools. Work is currently under way to add the MediaInfo tool to enhance video file support. It’s released as open source software under the GNU LGPL license. The release dates show there’s been a burst of activity lately, so make sure you have the latest and best version.

FITS is tailored for ingesting files into a repository. In its normal mode of operation, it processes whole directories, including all nested subdirectories, and produces a single XML output file, either in the FITS schema or in standard schemas such as MIX. You can run it as a standalone application or as a library. It’s possible to add your own tools to FITS.
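To give a flavor of what consuming that XML looks like, here’s a short Python sketch that pulls format identifications out of a FITS-style document. The XML fragment is schematic and simplified; the real FITS output is namespaced and much richer, and the tool names and versions here are invented for illustration.

```python
import xml.etree.ElementTree as ET

# A schematic fragment in the style of FITS output. The real schema
# uses an XML namespace and carries far more detail than this.
sample = """<fits>
  <identification>
    <identity format="Portable Document Format" mimetype="application/pdf">
      <tool toolname="Jhove" toolversion="1.5"/>
      <tool toolname="Droid" toolversion="3.0"/>
    </identity>
  </identification>
</fits>"""

def summarize(xml_text):
    """List each identified format and which wrapped tools agreed on it."""
    root = ET.fromstring(xml_text)
    out = []
    for identity in root.iter("identity"):
        tools = [t.get("toolname") for t in identity.iter("tool")]
        out.append((identity.get("format"), identity.get("mimetype"), tools))
    return out
```

Since FITS aggregates several tools, the interesting part of its output is exactly this kind of per-identity tool agreement (or disagreement).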

File identification tools, part 5: JHOVE

In 2004, the Harvard University Libraries engaged me as a contractor to write the code for JHOVE under Stephen Abrams’ direction. I stayed around as an employee for eight more years. I mention this because I might be biased about JHOVE: I know about its bugs, how hard it is to install, what design decisions could have been better, and how spotty my support for it has been. Still, people keep downloading it, using it, and saying good things about it, so I must have done something right. Do any programmers trust the code they wrote ten years ago?

The current home of JHOVE is on GitHub under the Open Preservation Foundation, which has taken over maintenance of it from me. Documentation is on the OPF website. I urge people not to download it from SourceForge; it’s out of date there, and there have been reports of questionable practices by SourceForge’s current management. The latest version as of this writing is 1.11.

JHOVE stands for “JSTOR/Harvard Object Validation Environment,” though neither JSTOR nor Harvard is directly involved with it any longer. It identifies and validates files in a small set of formats, so it’s not a general-purpose identification tool, but it does a fairly thorough job. The formats it validates are AIFF, GIF, HTML, JPEG, JPEG 2000, PDF, TIFF, WAV, XML, ASCII, and UTF-8. If it doesn’t recognize a file as any of those formats, it will call it a “Bytestream.” You can use JHOVE as a GUI or command line application, or as a Java library. If you’re going to use the library or otherwise do complicated things, I recommend downloading my payment-optional e-book, JHOVE Tips for Developers. Installation and configuration are tricky, so follow the instructions carefully and take your time.

JHOVE shouldn’t be confused with JHOVE2, which has similar aims to JHOVE but has a completely different code base, API, and user interface. It didn’t get as much funding as its creators hoped, so it doesn’t cover all the formats that JHOVE does.

Key concepts in JHOVE are “well-formed” and “valid.” When allowed to run all modules, it will always report a file is a valid instance of something; it’s a valid bytestream if it’s not anything else. This has confused some people; a valid bytestream is nothing more than a sequence of zero or more bytes. Everything is a valid bytestream.

The concept of well-formed and valid files comes from XML. A well-formed XML file obeys the syntactic rules; a valid one conforms to a schema or DTD. JHOVE applies this concept to other formats, but it’s generally not as good a fit. Roughly, a file which is “well-formed but not valid” has errors, but not ones that should prevent rendering.
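Well-formedness, at least, is mechanically checkable. Here’s a minimal sketch using Python’s standard-library XML parser; note it only tests the syntactic rules, since validity against a DTD or schema needs a validating parser, which the standard library doesn’t provide.

```python
import xml.etree.ElementTree as ET

def is_well_formed(xml_text):
    """True if the text obeys XML's syntactic rules.

    Says nothing about validity: a well-formed document may still
    fail to conform to its schema or DTD.
    """
    try:
        ET.fromstring(xml_text)
        return True
    except ET.ParseError:
        return False
```

So `is_well_formed("<a><b/></a>")` is true, while `is_well_formed("<a><b></a>")` is false because the tags aren’t properly nested. JHOVE’s other modules apply an analogous two-level judgment to formats where the distinction is much blurrier.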

JHOVE doesn’t examine all aspects of a file. It doesn’t examine data streams within files or deal with encryption. It focuses on the semantics of a file rather than its content. However, it’s very aggressive in what it does examine, so that sometimes it will declare a file not valid when nearly all rendering software will process it correctly. If there’s a conflict between the format specification and generally accepted practice, it usually goes by the specification.

It checks for profiles within a format, such as PDF/A and TIFF/IT. It reports only full conformance to a profile; if a file is intended to be PDF/A but fails any test for the profile, JHOVE simply won’t list PDF/A as a profile. It won’t tell you why the file fell short.

The PDF module has been the biggest adventure; PDF is really complicated, and its complexity has increased with each release. Bugs continue to turn up, and it covers PDF only through version 1.6. It needs to be updated for 1.7, which is equivalent to ISO 32000.

Sorry, I warned you that I’m JHOVE’s toughest critic. But I wouldn’t mind a chance to improve it a bit, through the funding mechanism I mentioned earlier in the blog.

Next: FITS. To read this series from the beginning, start here.

File identification tools, part 3: DROID and PRONOM

The last installment in this series looked at file, a simple command line tool available with Linux and Unix systems for determining file types. This one looks at DROID (Digital Record Object IDentification), a Java-based tool from the UK National Archives, focused on identifying and verifying files for the digital repositories of libraries and archives. It’s available as open source software under the New BSD License. Java 7 or 8 is needed for the current release (6.1.5). It relies on PRONOM, the National Archives’ registry of file format information.

Like file, DROID depends on files that describe distinctive data values for each format. It’s designed to process large batches of files and compiles reports in a much more useful way than file’s output. Reports can include total file counts and sizes by various criteria.

To install DROID, you have to download and expand the ZIP file for the latest version. On Windows, you run droid.bat; on sensible operating systems, run droid.sh. You may first have to make it executable:

chmod +x droid.sh
./droid.sh

Running droid.sh with no arguments launches the GUI application. If there are any command line arguments, it runs as a command line tool. You can type

./droid.sh --help

to see all the options.

The first time you run it as a GUI application, it may ask if you want to download some signature file updates from PRONOM. Let it do that.

It’s also possible to use DROID as a Java library in another application. FITS, for example, does this. There isn’t much documentation to help you, but if you’re really determined to try, look at the FITS source code for an example.

DROID will report file types by extension if it can’t find a matching signature. This isn’t a very reliable way to identify a file, and you should examine any files matched only by extension to see what they really are and whether they’re broken. It may report more than one matching signature; this is very common with files that match more than one version of a format.
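The fallback logic is worth internalizing, because it determines how much you should trust a match. This toy Python sketch mirrors the idea (prefer signatures, fall back to extension, allow multiple signature hits); it is not DROID’s code, and the signature and extension tables are invented.

```python
def match_file(filename, head, signatures):
    """Return (method, matches) in DROID's spirit: prefer signature
    matches; fall back to the extension only when no signature fits.

    signatures: list of (magic_bytes, format_name) pairs.
    head: the leading bytes of the file.
    """
    sig_hits = [fmt for magic, fmt in signatures if head.startswith(magic)]
    if sig_hits:
        # Several versions of a format can share leading bytes,
        # so more than one signature may legitimately match.
        return ("signature", sig_hits)
    ext = filename.rsplit(".", 1)[-1].lower()
    ext_table = {"pdf": ["PDF (by extension)"], "tif": ["TIFF (by extension)"]}
    return ("extension", ext_table.get(ext, []))
```

A `("extension", ...)` result is exactly the case the paragraph above warns about: the bytes didn’t match anything, so the name is all you’re going on.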

It isn’t possible to cover DROID in any depth in a blog post. The document Droid: How to use it and how to interpret your results is a useful guide to the software. It’s dated 2011, so some things may have changed.

Next: ExifTool. To read this series from the beginning, start here.

File identification tools, part 2: file

A widely available file identification tool is simply called file. It comes with nearly all Linux and Unix systems, including Macintosh computers running OS X. Detailed “man page” documentation is available. It requires using the command line shell, but its basic usage is simple:

file [filename]

file starts by checking for some special cases, such as directories, empty files, and “special files” that aren’t really files but ways of referring to devices. Next, it checks for “magic numbers,” identifiers near the beginning of the file that are (hopefully) unique to the format. If it finds no magic match, it tests whether the file looks like a text file in any of a variety of character encodings, including the ancient and obscure EBCDIC. Finally, if the file does look like text, file attempts to determine whether it’s in a known computer language (such as Java) or a natural language (such as English). The identification of file types is generally good, but the language identification is very erratic.
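That order of tests is easy to sketch. The Python below is a deliberately tiny illustration of the idea, not file’s actual implementation: a real magic database holds thousands of entries with offsets and masks, where this table has just two well-known signatures.

```python
import os

# A toy signature table: leading bytes -> format name.
# Real magic files are far larger and support offsets and masks.
SIGNATURES = {
    b"%PDF-": "PDF document",
    b"\x89PNG\r\n\x1a\n": "PNG image",
}

def identify(path):
    """Mimic file's order of tests: special cases, magic, then text."""
    if os.path.isdir(path):
        return "directory"
    if os.path.getsize(path) == 0:
        return "empty"
    with open(path, "rb") as f:
        head = f.read(8)
    for magic, name in SIGNATURES.items():
        if head.startswith(magic):
            return name
    try:
        head.decode("ascii")
        return "ASCII text"
    except UnicodeDecodeError:
        return "data"
```

The “data” fallback at the end corresponds to file’s own last resort: when nothing else matches, that’s all it can say.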

The identification of magic numbers uses a set of magic files, and these vary among installations, so running the same version of file on different computers may produce different results. You can specify a custom set of magic files with the -m flag. If you want a file’s MIME type, you can specify --mime, --mime-type, or --mime-encoding. For example:

file --mime xyz.pdf

will tell you the MIME type of xyz.pdf. If it really is a PDF file, the output will be something like

xyz.pdf: application/pdf; charset=binary

If instead you enter

file --mime-type xyz.pdf

you’ll get

xyz.pdf: application/pdf

If some tests aren’t working reliably on your files, you can use the -e option to suppress them. If you don’t trust the magic files, you can enter

file -e soft xyz.pdf

But then you’ll get the uninformative

xyz.pdf: data

The -k option tells file not to stop with the first match but to apply additional tests. I haven’t found any cases where this is useful, but it might help to identify some weird files. It can slow down processing if you’re running it on a large number of files.

As with many other shell commands, you can type file --help to see all the options.

file can easily be fooled and won’t tell you if a file is defective, but it’s a very convenient quick way to query the type of a file.

Windows has a roughly similar command line tool called FTYPE, but its syntax is completely different.

Next: DROID and PRONOM. To read this series from the beginning, start here.