File identification tools, part 2: file

A widely available file identification tool is simply called file. It comes with nearly all Linux and Unix systems, including Macintosh computers running OS X. Detailed “man page” documentation is available. It requires using the command line shell, but its basic usage is simple:

file [filename]

file starts by checking for some special cases, such as directories, empty files, and “special files” that aren’t really files but ways of referring to devices. Next it checks for “magic numbers,” identifiers near the beginning of the file that are (hopefully) unique to the format. If it doesn’t find a “magic” match, it tests whether the file looks like a text file, trying a variety of character encodings, including the ancient and obscure EBCDIC. Finally, if the file does look like text, file attempts to determine whether it’s in a known computer language (such as Java) or a natural language (such as English). The identification of file types is generally good, but the language identification is very erratic.
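
You can look at a magic number yourself with a hex dump. For example, assuming xyz.pdf really is a PDF file (the name is just a placeholder), this shows its first eight bytes:

head -c 8 xyz.pdf | xxd

The dump should begin with 25 50 44 46, the ASCII codes for “%PDF”, which is the signature the magic files look for in PDF.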

The identification of magic numbers uses a set of magic files, and these vary among installations, so running the same version of file on different computers may produce different results. You can specify a custom set of magic files with the -m flag. If you want MIME information, you can specify --mime (which reports both the type and the character encoding), --mime-type, or --mime-encoding. For example:

file --mime xyz.pdf

will tell you the MIME type of xyz.pdf. If it really is a PDF file, the output will be something like

xyz.pdf: application/pdf; charset=binary

If instead you enter

file --mime-type xyz.pdf

you’ll get

xyz.pdf: application/pdf

If some tests aren’t working reliably on your files, you can use the -e option to suppress them. If you don’t trust the magic files, you can enter

file -e soft xyz.pdf

But then you’ll get the uninformative

xyz.pdf: data

The -k option tells file not to stop with the first match but to apply additional tests. I haven’t found any cases where this is useful, but it might help to identify some weird files. It can slow down processing if you’re running it on a large number of files.
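
For example, applying it to the same hypothetical PDF:

file -k xyz.pdf

With -k, file prints every description that matches rather than stopping at the first, which can reveal when a file plausibly fits more than one format.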

As with many other shell commands, you can type file --help to see all the options.

file can easily be fooled and won’t tell you whether a file is defective, but it’s a quick and convenient way to query a file’s type.

Windows has a command line tool called FTYPE, but it’s only loosely comparable: it reports and sets file type associations based on extensions rather than examining a file’s contents, and its syntax is completely different.

Next: DROID and PRONOM. To read this series from the beginning, start here.

File identification tools, part 1

This is the start of a series on software for file identification. I’ll be exploring as broad a range as I reasonably can within the blog format, covering a variety of uses. I’m most familiar with the tools for preservation and archiving, but I’ll also look at tools for the end user and at digital forensics (in the proper sense of the word, the resolution of controversies).

We have to start with what constitutes “identifying” a file. For our purposes here, it means at least identifying its type. It can also include determining its subtype and telling you whether it’s a valid instance of the type. You can choose from many options. The simplest approach is to look at the file’s extension and hope it isn’t a lie. A little better is to use software that looks for a “magic number.” This gives a better clue but doesn’t tell you if the file is actually usable. Many tools are available that will look more rigorously at the file. Generally the more thorough a tool is, the narrower the range of files it can identify.
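
A quick experiment shows why extensions alone can’t be trusted; the file names here are hypothetical, and file stands in for any magic-number-based tool:

cp report.pdf report.jpg
file report.jpg

Renaming the file doesn’t change its content, so file still reports a PDF document, while anything that trusts the .jpg extension is fooled.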

Identification software can be too lax or too strict. If it’s too lax, it can give broken files, perhaps even malicious ones, its stamp of approval. If it’s too severe, it can reject files that deviate from the spec in harmless and commonly accepted ways. Some specifications are ambiguous, and an excessively strict checker might rely on an interpretation which others don’t follow. A format can have “dialects” which aren’t part of the official definition but are widely used. TIFF, to name one example, is open to all of these problems.

Some files can be ambiguous, corresponding to more than one format. Here’s a video with some head-exploding examples. It’s long but worth watching if you’re a format junkie.

The examples in the video may seem far-fetched, but there’s at least one commonly used format that has a dual identity: Adobe Illustrator files. Illustrator knows how to open a .ai file and get the application-specific data, but most non-Adobe applications will see it as a PDF file. Ambiguity can be a real problem when file readers are intentionally lax and try to “repair” a file. Different applications may read entirely different file types and content from the same file, or the same file may have different content on the screen and when printed. So even if an identification tool tells you correctly what the format is, that may not be the whole story. I don’t know of any tool that tries to identify multiple formats for the same file.

Knowing the version and subtype of a file can be important. When an application reads a file in a newer version than it was written for, it may fail unpredictably, and it’s likely to lose some information. Some applications limit their backward compatibility and may be unable to read old versions of a format. Subtypes can indicate a file’s suitability for purposes such as archiving and prepress.

I’ll use the tag “fident” for all posts in this series, to make it easy to grab them together.

Next: The shell file command line tool.

Dataliths vs. the digital dark age

Digital technology has allowed us to store more information at less cost than ever before. At the same time, it’s made this information very fragile in the long term. A book can sit in an abandoned building for centuries and still be readable. Writing carved in stone can last for thousands of years. The chances that your computer’s disk will be readable in a hundred years are poor, though. You’ll have to go to a museum for hardware and software to read it. Once you have all that, it probably won’t even spin up. If it does, the bits may be ruined. In five hundred years, its chance of readability will be essentially zero.

Archivists are aware of this, of course, and they emphasize the need for continual migration. Every couple of decades, at least, stored documents need to be moved to new media and perhaps updated to a new format. Digital copies, if made with reasonable precautions, are perfect. This approach means that documents can be preserved forever, provided the chain never breaks.

Fortunately, there doesn’t have to be just one chain. The LOCKSS (lots of copies keep stuff safe) principle means that the same document can be stored in archives all over the world. As long as just one of them keeps propagating it, the document will survive.

Does this make us really safe from the prospect of a digital dark age? Will a substantial body of today’s knowledge and literature survive until humans evolve into something so different that it doesn’t matter any more? Not necessarily. To be really safe, information needs to be stored in a form that can survive long periods of neglect. We need dataliths.

Several scenarios could lead to neglect of electronic records for a generation or more. A global nuclear war could destroy major institutions, wreck electronic devices with EMPs, and force people to focus on staying alive. An asteroid hit or a supervolcano eruption could have a similar effect. Humanity might survive these things but take a century or more to return to a working technological society.

Less spectacularly, periods of intense international fear or attempts to manage the world economy might create an unfriendly climate for preserving records of the past. The world might go through a period of severe censorship. Lately religious barbarians have been sacking cities and destroying historical records that don’t fit with their doctrines. Barbarians generally burn themselves out quickly, but “enlightened” authorities can also decide that all “unenlightened” ideas should be banished for the good of us all. Prestigious institutions can be especially vulnerable to censorship because of their visibility and dependence on broad support. Even without legal prohibition, archival culture may shift to decide that some ideas aren’t worth preserving. Either way, it won’t be called censorship; it will be called “fair speech,” “fighting oppression,” “the right to be forgotten,” or some other euphemism that hasn’t yet lost credibility.

How great is the risk of these scenarios? Who can say? To calculate odds, you need repeatable causes, and the technological future will be a lot different from the comparatively low-tech past. But if we’re thinking on a span of thousands of years, we can’t dismiss it as negligible. Whatever may happen, the documents of the past are too valuable to be maintained only by their official guardians.

Hard copy will continue to be important. It’s also subject to most of the forms of loss I’ve mentioned, but some of it can survive for many years with no attention. As long as someone can understand the language it’s written in, or as long as its pictures remain recognizable, it has value. However, we can’t back away from digital storage and print everything we want to preserve. The advantages of bits are clear: easy reproduction and high storage density. This isn’t to say that archivists should abandon the strategy of storing documents with the best technology and migrating them regularly. In good times, that’s the most effective approach. But the bigger strategy should include insurance against the bad times, a form of storage that can survive neglect. Ideally it shouldn’t be in the hands of Harvard or the Library of Congress, but of many “guerilla archivists” acting on their own.

This strategy requires a storage medium which is highly durable and relatively simple to read. It doesn’t have to push the highest edges of storage density. It should be the modern equivalent of the stone tablet, a datalith.

There are devices which tend in this direction. Milleniata claims to offer “forever storage” in its M-Disc. Allegedly it has been “proven to last 1,000 years,” though I wonder how they managed to start testing in the Middle Ages. A DVD uses a complicated format, though, so it may not be readable even if it physically lasts that long. Hitachi has been working on quartz glass data storage that could last for millions of years and be read with an optical microscope. This would be the real datalith. As long as some people still know today’s languages, pulling out ASCII data should be a very simple cryptographic task. Unfortunately, the medium isn’t commercially available yet. Others have worked on similar ideas, such as the Superman memory crystal. Ironically, that article, which proclaims “the first document which will likely survive the human race,” has a broken link to its primary source less than two years after its publication.

Hopefully datalith writers will be available before too long, and after a few years they won’t be outrageously expensive. The records they create will be an important part of the long-term preservation of knowledge.


New open-source file validation project

The VeraPDF Consortium has announced that it has begun the prototyping phase for a new open-source validator of PDF/A. This is a piece of the PREFORMA (PREservation FORMAts) project; other branches will cover TIFF and audio-visual formats. Participants in VeraPDF are the Open Preservation Foundation, the PDF Association, the Digital Preservation Coalition, Dual Lab, and Keep Solutions.

Documents are available, including a functional and technical specification. The validator aims to be the “definitive” tool for determining whether a PDF document conforms to the ISO 19005 requirements. It will separate the PDF parser from the higher-level validation, so a different parser can be plugged in.

Validating PDF is tough. In JHOVE, I designed PDF/A validation as an afterthought to the PDF module. PDF/A requirements affect every level of the implementation, so that approach led to problems that never entirely went away. Making PDF/A validation a primary goal should help greatly, but having it sit on top of and independent from the PDF parser may introduce another form of the same problem.

PDF files can include components which are outside the spec, and PDF/A-3 permits their inclusion. This means that really validating PDF/A-3 is an open-ended task. Even in the earlier versions of PDF/A, not everything that can be put into a file is covered by the PDF specification per se. The VeraPDF specification addresses this by providing for extensibility; add-ons can cover these aspects as desired. In particular, the core validator won’t attempt thorough validation of fonts.

A Metadata Fixer will not just check documents for conformance, but in some cases will perform the necessary fixes to make a file PDF/A compliant.

JHOVE ignores the content streams, focusing only on the structure, so it could report a thoroughly broken file as well-formed and valid. JHOVE2 doesn’t list PDF in its modules. Analyzing the content stream data is a big task. In general, the project looks hugely ambitious, and not every ambitious digital preservation project has reached a successful end. If this one does, it will be a wonderful accomplishment.


Update on the JHOVE handover

There’s a brief piece by Becky McGuinness in D-Lib Magazine on the handover of JHOVE to the Open Preservation Foundation. It describes upcoming plans:

During March the OPF will be working with Portico and other members to complete the transfer of JHOVE to its new home. The latest code base will move to the OPF GitHub organisation page. All documentation, source code files, and full change history will be publicly available, alongside other OPF supported software projects, including JHOVE2, Fido, jpylyzer, and the SCAPE project tools.

Once the initial transfer is complete the next step will be to set up a continuous integration (CI) build on Travis, an online CI service that’s integrated with GitHub. This will ensure that all new code submissions are built and tested publicly and automatically, including all external pull requests. This will establish a firm foundation for future changes based on agile software development best practises.

With this foundation in place OPF will test and incorporate JHOVE fixes from the community into the new project. Several OPF members have already developed fixes based on their own automated processes, which they will be releasing to the community. Working as a group these fixes will be examined and tested methodically. At the same time the OPF’s priority will be to produce a Debian package that can be downloaded and installed from its apt repository.

Following the transfer OPF will gather requirements from its members and the wider digital preservation community. The OPF aims to establish and oversee a self-sustaining community around JHOVE that will take these requirements forward, carrying out roadmapping exercises for future development and maintenance. The OPF will also assess the need for specific training and support material for JHOVE such as documentation and online or virtual machine demonstrators.

It’s great to know that JHOVE still has a future a decade after its birth, but what boggles my mind is the next sentence:

The transfer of JHOVE is supported by its creators and developers: Harvard Library, Portico, the California Digital Library, and Gary McGath.

I never expected to see my name in a list like that!


Honda MP3 player defect

Recently Eyal Mozes hired me to determine why the sound system in his new Honda Civic wouldn’t play some MP3 files. This was a chance to do some interesting investigative work, and I’ve found what I think is a previously unidentified product defect.

He sent me twenty MP3 files, ten of which would play on his system and ten of which wouldn’t. First I ran some preliminary tests, establishing that iTunes, QuickTime Player, Audacity, and even my older Honda stereo had no trouble with the files. Then I ran Exiftool on them and looked at the output to see what the difference was.

The first thing I looked for was variable bitrate encoding, which is the most common cause of failure to play MP3 files. None of the files used a variable bitrate. Looking more closely, I saw that all the files had both ID3 V1 and V2 metadata, which is legitimate. In every file he’d indicated as non-playable, though, the length of the ID3 V2 segment was zero, while all the playable ones had ID3 V2 segments with some data fields. I verified with a hex dump that each non-playable file started with a ten-byte empty ID3 V2.3 segment.
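
For reference, an empty ID3 V2.3 segment is just the ten-byte header with all four size bytes set to zero. A hex dump of the first ten bytes of such a file (the file name here is hypothetical) looks something like this:

xxd -l 10 nonplayable.mp3
00000000: 4944 3303 0000 0000 0000  ID3.......

The first three bytes spell “ID3”, the next two give the version (2.3), and the rest are a flags byte and four size bytes, all zero.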

I continued looking for any other systematic differences, but that was the only one I found. It’s highly likely that the MP3 software in Eyal’s car — and, therefore, in many Hondas and maybe even other makes — has a bug that makes a file fail to play if there’s a zero-length ID3 V2 segment. (Update: Just to be clear, this is a legitimate if unusual case, not a violation of the format.)

Eyal had gone to arbitration to get his vehicle returned under the warranty; Honda’s response was unimpressive. Initially, he told me, Honda claimed that the non-playing files were under DRM. This is nonsense; there’s no such thing as DRM on MP3 files. They withdrew this claim but then asserted that “compatibility issues” related to encoding were the problem, without giving any specifics. The ID3 header in an MP3 file is unrelated to the encoding of the file, and I didn’t see any systematic differences in encoding parameters between the playable and non-playable files. Honda claimed to be unable to tell how the files were encoded. They may not have been able to tell what software was used, but the only “how” that’s relevant is the encoding parameters.

This problem won’t make your brakes fail or your wheels fall off, but Honda should still treat it as a product defect, come up with a fix, and offer it to customers for free. If they can upgrade the firmware, that’s great; if not, they’ll have to issue replacement units. The bug sounds like one that’s easy to fix once the programmers are aware of it. The testing just wasn’t thorough enough to catch this case.

If anyone wants to hire me for more file format forensic work, let me know. This was fun to investigate.

A new home for JHOVE

Over a decade ago, the Harvard University Libraries took me on as a contractor to start work on JHOVE. Later I became an employee, and JHOVE formed an important part of my work. When I left Harvard, I asked for continued “custody” of JHOVE so I could keep maintaining it, and got it. Over time it became less of a priority for me; there’s only so much time you can devote to something when no one’s paying you to do it.

After a long period of discussion, the Open Preservation Foundation (formerly the Open Planets Foundation) has taken up support of JHOVE. In addition to picking up the open source software, it has resolved copyright issues with Harvard over the documentation, really over boilerplate that no one intended to enforce, but still an issue that had to be cleared.

Stephen Abrams, who was the real father of JHOVE, said, “We’re very pleased to see this transfer of stewardship responsibility for JHOVE to the OPF. It will ensure the continuity of maintenance, enhancement, and availability between the original JHOVE system and its successor JHOVE2, both key infrastructural components in wide use throughout the digital library community.”

JHOVE2 was originally supposed to be the successor to JHOVE, but it didn’t get enough funding to cover all the formats that JHOVE covers, so both are used, and the confusion of names is unfortunate. OPF has both in its portfolio. It doesn’t appear to have forked JHOVE to its Github repository yet, but I’m sure that’s coming soon.

My own Github repository for JHOVE should now be considered archival. Go forth and prosper, JHOVE.


Pono’s file format

I’ve been seeing weirdly intense hostility to the Pono music player and service. A Business Insider article implies that it’s a scheme by Apple to make you buy your music all over again at higher prices. Another article complains that it will hold “only” 1,872 tracks and protests that “the Average person” (their capitalization) doesn’t hear any improvement. I wonder if some of these people are outraged because they’re confusing Pono with Bono and thinking this is the new copy-proof file format which he said Apple is working on.

In fact, Pono isn’t using any new format and isn’t introducing DRM. Its files are in the well-known FLAC format. FLAC stands for “Free Lossless Audio Codec.” The term technically refers only to the codec, not the container, but it’s usually delivered in a “Native FLAC” container. It can also be delivered in an Ogg container, which provides better metadata support at the cost of slightly larger files.

The “lossless” part of the name refers to FLAC’s compression. MP3 uses lossy compression, which removes some information, sacrificing a little audio quality to make the file smaller. FLAC keeps all the information, giving better quality at the cost of a larger file for the same sampling rate and bit resolution. According to CNET, “Pono’s recordings will range from CD-quality 16-bit/44.1kHz to 24-bit/192kHz ‘ultra-high resolution.’” A 192 kHz sampling rate can capture frequencies up to 96 kilohertz (half the sampling rate, per the Nyquist theorem), which is far beyond the threshold of human hearing, so it’s understandable that people are skeptical about whether it offers any benefit over a lower sampling rate. Frequencies that high are normally filtered out.
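
If you’re curious what a FLAC file actually contains, metaflac, a command-line tool that ships with the FLAC reference implementation, can report the parameters. Assuming a hypothetical file named track.flac:

metaflac --show-sample-rate --show-bps --show-channels track.flac

It prints one value per line: the sampling rate in hertz, the bits per sample, and the number of channels.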

FLAC is non-proprietary and DRM-free, and it has an open source reference implementation. Someone could put FLAC into a DRM container, but then why not use a proprietary encoding? Using FLAC is a step forward from the patent-encumbered MP3, with license requirements that effectively lock out free software.

iTunes doesn’t support FLAC files, so the Business Insider claim that Pono is Apple’s way of making you buy music over again is idiotic. It’s like saying Windows 8 is an Apple scheme to make you buy new software.

As the number of gigabytes you can stick in your pocket keeps growing, the need for compression decreases. For many people, the amount of music they can store takes priority over improved sound quality, but some will pay for a high-end player that gives them the best possible sound. I don’t get why this infuriates so many critics. At any rate, the file format shouldn’t scare anyone.

For more discussion of FLAC as it relates to Pono, see “What is FLAC? The high-def MP3 explained” on CNET’s site; the headline is totally wrong, but the article itself is good.


Article on PDF/A validation with JHOVE

An article by Yvonne Friese does a good job of explaining the limitations of JHOVE in validating PDF/A. At the time that I wrote JHOVE, I wasn’t aware how few people had managed to write a PDF validator independent of Adobe’s code base; if I’d known, I might have been more intimidated. It’s a complex job, and adding PDF/A validation as an afterthought added to the problems. JHOVE validates only the file structure, not the content streams, so it can miss errors that make a file unusable. Finally, I’ve never updated JHOVE to PDF 1.7, so it doesn’t address PDF/A-2 or 3.

I do find the article flattering; it’s nice to know that even after all these years, “many memory institutions use JHOVE’s PDF module on a daily basis for digital long term archiving.” The Open Preservation Foundation is picking up JHOVE, and perhaps it will provide some badly needed updates.
