Domain scientists with an interest in the archiving and re-use of phylogenetic data have called for a reporting standard designated "Minimum Information About a Phylogenetic Analysis", or MIAPA (Leebens-Mack et al. 2006). Ideally, the research community would develop, and adhere to, a standard that imposes a minimal reporting burden yet ensures that the reported data can be interpreted and re-used. Such a standard might be adopted by
- pipeline projects that generate phylogenetic data sets for downloading and re-use (e.g., TreeBASE, PANDIT)
- repositories and databases designed to archive published data (e.g., TreeBASE, Dryad)
- journals that publish supplementary material for phylogenetic studies (e.g., MBE, Systematic Biology)
- granting organizations that support phylogenetic studies (e.g., NSF)
- organizations that develop taxonomic nomenclature based on phylogenetic results
Currently MIAPA is aspirational, i.e., no standard has been developed. As a starting point, Leebens-Mack et al. suggest that a study should report objectives, sequences, taxa, alignment method, alignment, phylogeny inference method, and phylogeny.
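To make the seven suggested reporting categories concrete, they could be captured as fields of a minimal record, with a trivial completeness check. This is only an illustrative sketch, not part of any adopted standard; all field names and the sample values (including the accession numbers) are hypothetical:

```python
# Hypothetical sketch of a minimal MIAPA-style record covering the seven
# reporting categories suggested by Leebens-Mack et al. Field names and
# sample values are illustrative, not part of any adopted standard.

MIAPA_CATEGORIES = [
    "objectives", "sequences", "taxa", "alignment_method",
    "alignment", "inference_method", "phylogeny",
]

def missing_categories(record):
    """Return the reporting categories absent or empty in a record."""
    return [c for c in MIAPA_CATEGORIES if not record.get(c)]

report = {
    "objectives": "Resolve relationships among sampled grass genera",
    "sequences": ["GenBank:AY000001", "GenBank:AY000002"],  # hypothetical accessions
    "taxa": ["Oryza sativa", "Zea mays"],
    "alignment_method": "MUSCLE, default parameters",
    "alignment": "alignment.nex",
    "inference_method": "maximum likelihood, GTR+G",
    "phylogeny": "tree.nwk",
}

print(missing_categories(report))  # an empty list means all categories are reported
```

A conformance policy could then be phrased in terms of such a check: a submission is incomplete exactly when `missing_categories` is non-empty.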
The MIAPA concept clearly aligns with the interoperability mandate of the NESCent evolutionary informatics working group, e.g., data re-use (the primary goal of MIAPA) is a desideratum of interoperability. Development of a MIAPA standard could synergize with ongoing projects and long-term goals of the working group. To achieve re-use through compliance with reporting standards, we need technology that makes compliance easy, e.g., a GUI that guides users through constructing a MIAPA-compliant submission. To support re-use through data-mining or reasoning (on MIAPA-compliant reports), we need a controlled vocabulary, ideally an ontology. Developing such an ontology would not only jump-start the MIAPA project; it would also contribute to our efforts to develop a language to describe Transition Models, and it would be a step toward our long-term goal of a domain-specific language for phylogenetic analysis.
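To make the ontology idea concrete, a controlled vocabulary of method terms might, in a minimal in-memory sketch, be a set of terms linked by is_a (hyponym) relationships, over which a reasoner can walk. All terms and links below are hypothetical placeholders, not drawn from CDAO or any real ontology:

```python
# Minimal sketch of a controlled vocabulary of metadata terms linked by
# is_a (hyponym) relationships. Terms and links are illustrative
# placeholders, not a real ontology.

IS_A = {
    "neighbor_joining": "distance_method",
    "distance_method": "inference_method",
    "maximum_likelihood": "inference_method",
    "inference_method": "method",
    "alignment_method": "method",
}

def ancestors(term):
    """Walk is_a links from a term toward the root, returning the path."""
    path = []
    while term in IS_A:
        term = IS_A[term]
        path.append(term)
    return path

print(ancestors("neighbor_joining"))
# ['distance_method', 'inference_method', 'method']
```

Even this toy structure supports the kind of reasoning MIAPA compliance would enable, e.g., a query for all studies using an "inference_method" would match records annotated with "neighbor_joining".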
Some thoughts on developing MIAPA
Leebens-Mack et al. called for further work, hoping to attract attention to the idea and stimulate effort; so far, however, no further effort to develop MIAPA has materialized. The NESCent evolutionary informatics working group invited Dr. Leebens-Mack to speak at our recent meeting, and there was general agreement on the value of developing a MIAPA standard, and on the importance of ensuring that the interoperability artefacts developed by the group -- nexml and CDAO -- provide a means of MIAPA compliance.
As the working group meeting came to a close, some members began to discuss what the further development of MIAPA would entail (below), and how we could jump-start the project with a knowledge capture exercise (next section).
- What it might mean to have an effective MIAPA standard:
- an explicit (possibly formal) description of the standard, specifying types of data and metadata
- an explicit conformance policy
- a controlled vocabulary for data and metadata
- a file format for MIAPA documents
- a repository to store MIAPA-compliant entries
- What software support might entail
- interactive software to facilitate creation of MIAPA-compliant documents
- a relational mapping of the MIAPA standard to be used in repositories
- a formal taxonomy or ontology of metadata terms
- What logistics might be involved in developing and promulgating the standard
- a working group with external funding
- a consortium with representatives from data resources, publishers, researchers, and programmers
- user testing at scientific conferences
- collaboration with ontology experts at NCBO
- multiple rounds of revision
- workshops (to train users) and hackathons (to develop implementations)
- What would ease the burden on scientists (the goal behind the "minimal" in MIAPA)?
- fewer categories of metadata
- fewer arbitrary restrictions on format
- familiarity of metadata concepts
- flexibility in representation
- software support for annotation
- What makes data reusable?
- standard formats
- capacity for validation
- provenance, ideally traceable automatically via external references
- description of methods sufficient to reproduce results from data
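The validation and provenance items above can be made concrete with a small sketch: a hypothetical check that a record's references point to external resources in a machine-resolvable "Database:identifier" form, so provenance can be traced automatically. The database names, pattern, and sample references are all illustrative assumptions:

```python
import re

# Hypothetical provenance check: references should point to external
# resources in a machine-resolvable "Database:identifier" form. The set of
# databases, the pattern, and the sample references are illustrative.

RESOLVABLE = re.compile(r"^(GenBank|DOI|TreeBASE):\S+$")

def untraceable(references):
    """Return references that do not match the Database:identifier form."""
    return [r for r in references if not RESOLVABLE.match(r)]

refs = ["GenBank:AY000001", "sequences.fasta", "DOI:10.1000/example"]
print(untraceable(refs))  # the bare file name cannot be traced externally
```

A check of this kind is what "capacity for validation" amounts to in practice: a repository can reject or flag submissions whose provenance cannot be followed by machine.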
Knowledge Capture Exercise
We imagine a knowledge-capture-and-user-testing exercise along the lines of the experiment described in the abstract of "Fast, Cheap and Out of Control: A Zero Curation Model for Ontology Development" (Good et al. 2006; File:Good.pdf):
During two days at a conference focused on circulatory and respiratory health, 68 volunteers untrained in knowledge engineering participated in an experimental knowledge capture exercise. These volunteers created a shared vocabulary of 661 terms, linking these terms to each other and to a pre-existing upper ontology by adding 245 hyponym relationships and 340 synonym relationships. While ontology-building has proved to be an expensive and labor-intensive process using most existing methodologies, the rudimentary ontology constructed in this study was composed in only two days at a cost of only 3 t-shirts, 4 coffee mugs, and one chocolate moose. The protocol used to create and evaluate this ontology involved a targeted, web-based interface. The design and implementation of this protocol is discussed along with quantitative and qualitative assessments of the constructed ontology.
Our plan would be to use a conference (ideally the upcoming 2008 Evolution meeting, though it's a bit soon) to gather data and to begin developing infrastructure for a MIAPA standard:
- develop an initial ontology framework
- develop a quick-and-dirty ontology for the MIAPA data and metadata categories
- identify other artefacts (ontologies, taxonomies) that can provide needed terms
- add (to the MIAPA ontology) a larger list of domain-specific terms
- develop an interactive graphical tool for constructing MIAPA annotations
- use an existing framework such as Phenote
- load vocabulary terms from the ontologies identified above
- provide term-completion based on the loaded vocabularies
- provide slots for specific types of MIAPA annotations
- carry out a preliminary round of in-house testing and revision
- identify a target group of potential users, e.g.,
- those who have published a paper with the term "phylogeny"
- those attending a scientific meeting on phylogenetics
- those who use a particular archive or piece of software
- engage the users as participants in testing and knowledge capture
- request users to generate MIAPA-compliant annotations for actual or hypothetical data sets
- provide incentives
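The term-completion step in the plan above could be sketched as a simple prefix match over the loaded vocabularies. The terms here are stand-ins for those a real ontology identified in the first step might supply:

```python
import bisect

# Sketch of term completion over a loaded controlled vocabulary. The terms
# are placeholders for those a real ontology might supply; a sorted list
# plus bisect gives cheap prefix lookup.

VOCAB = sorted([
    "maximum likelihood", "maximum parsimony", "neighbor joining",
    "bayesian inference", "bootstrap", "branch length",
])

def complete(prefix):
    """Return all vocabulary terms starting with the given prefix."""
    prefix = prefix.lower()
    i = bisect.bisect_left(VOCAB, prefix)
    matches = []
    while i < len(VOCAB) and VOCAB[i].startswith(prefix):
        matches.append(VOCAB[i])
        i += 1
    return matches

print(complete("maximum"))  # ['maximum likelihood', 'maximum parsimony']
```

In an interactive annotation tool (whether built on Phenote or otherwise), this is the piece that nudges users toward shared vocabulary terms instead of free text, which is what makes the resulting annotations minable.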