NSF Interop Proposal
- 1 Overview and talking points
- 2 Planning documents
- 3 Initial draft of NSF Proposal
Overview and talking points
Through the work of NESCent's informatics staff, the Evolutionary Informatics working group, and the participants in the recent data interop hackathon, we are well positioned to apply for an NSF INTEROP grant.
This funding program provides $250K per year to support interoperability projects that are multidisciplinary and that combine a community component with a technology component. The next deadline (possibly the last for this program) is July 23.
What makes us competitive:
- our past success in developing the interop technologies NeXML, CDAO, and PhyloWS
- the three-part interop formula of data syntax (NeXML), semantics (CDAO), and services (PhyloWS)
- our past success in actual demonstration projects that show off interop technology
- our demonstrated commitment to including diverse projects
- our connections with a network of researchers, programmers, and data providers
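To make the three-part formula concrete, here is a schematic sketch (purely illustrative, not part of the proposal) of how the layers fit together: a NeXML-style XML document for syntax, a CDAO-style annotation for semantics, and a PhyloWS-style URL for services. The element names, the `cdao:has_type` property, and the URL pattern are simplified placeholders, not taken from the actual schemas or specifications.

```python
# Illustrative sketch of the syntax/semantics/services layers.
# All names here are simplified stand-ins, not validated against
# the real NeXML schema, CDAO ontology, or PhyloWS spec.
import xml.etree.ElementTree as ET

# Syntax layer: a minimal NeXML-like document holding one tree.
root = ET.Element("nexml", {"version": "0.9"})
trees = ET.SubElement(root, "trees")
tree = ET.SubElement(trees, "tree", {"id": "tree1"})

# Semantics layer: annotate the tree with a CDAO-style term so that
# software can interpret it without format-specific knowledge.
# ("cdao:has_type" and "RootedTree" are placeholder names.)
ET.SubElement(tree, "meta", {
    "property": "cdao:has_type",
    "content": "RootedTree",
})

# Services layer: a PhyloWS-style URL pattern for retrieving the record.
# (Hypothetical base URI; real deployments define their own.)
service_url = "http://example.org/phylows/tree/tree1?format=nexml"

document = ET.tostring(root, encoding="unicode")
print(document)
print(service_url)
```

The point of the sketch is that each layer is independent: the same annotated record could be served through any PhyloWS-conformant endpoint, and the same endpoint could serve other NeXML documents.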
Key aspects of the INTEROP program
The first priorities are to work out the scope of the project and major aims that are consistent with
- who we are, what we've done and what we want to do, and
- what is required for a successful NSF interop grant, and what kind of support the program provides
I think we are familiar with the first item, so let's get to the second: what makes a successful INTEROP proposal? Here are some key distinctions to keep in mind for INTEROP:
- community involvement AND enabling technologies. A successful proposal needs both. We need to show that we are ready to respond to a community's needs, and that we have the technical expertise to support standards or conventions that arise in response to those needs. If the community needs a web services standard, we need to be able to develop one. To do this, we need to build a community, using workshops, websites, mailing lists, and so on. We have been doing a lot of that, but it needs to be opened up even more. I think we are on solid ground here.
- cross-cutting. A successful proposal needs to address more than one disciplinary area. This may be a challenge for us. We are diverse in ranging from molecular evolution to species diversity, but this is all within the discipline of life sciences. We have a computer scientist, but we might need more. What other disciplines could be involved (e.g., earth sciences, physics, behavior)? The program also looks for diversity in the types of data involved. So, addressing phylogenies, taxonomic classes, and comparative data is much broader than focusing on trees alone.
- community engagement. We need to do more than just involve a community; we need to be responsive. "Proposals for activities not based on significant community engagement and consensus-building activities are not responsive to this solicitation and will be returned without review." We have developed NeXML, CDAO, and PhyloWS with the aim of serving community interop needs. However, so far these tools are limited in their use. Let's imagine some future point where these are full-fledged community resources, widely supported in the phylogenetics community (like BioPerl is now), with
- many people involved in development (i.e., many "eyes on code")
- documentation and training resources readily available for anyone who wants to learn
- many people trained to use the tools
- many research projects willing to contribute to maintaining and improving these tools
- symposia and satellite conferences at major meetings
The NSF INTEROP program will provide support for meetings and workshops, along with a modest amount of support for technical staff. The staff support could be used to pay programmers to develop the tools that support NeXML, CDAO, and PhyloWS. We could focus this technical support on either
- one or a few integrative projects that we would implement in order to showcase the technologies
- generalized support for many projects carried out individually by members of the collaborative
There are a few advantages to the latter approach. First, we would be more focused on the standards and less on a final product, which will help distinguish us from projects like the iPlant Tree of Life. Second, working with many projects will ensure that our solutions are generalized, rather than biased by the choice of a few data types/providers for a showcase project.
In addition to workshops and meetings aimed at development of standards and tools, we should also aim for some training workshops:
- for data providers (how do I share my data?)
- for data users (what data is available and how do I find/obtain it?). This could be an independent workshop, or integrated into an existing program (Woods Hole Molecular Evolution, Bodega Bay Phylogenetics, Computational Phyloinformatics at NESCent, etc.).
People and institutions
PI, Co-PIs, and senior project personnel
This proposal needs a single PI from the coordinating institution. However, there is no limit on the number of Co-PIs.
Here are the people interested so far:
- Karen Cranston, EOL and Field Museum of Natural History (I would be coming at this from the perspective of both a provider (PhyLoTA) and user (EOL, Treeviz working group) of phylogenetic data. I am officially working with the EOL, and also have a connection to the iPlant Tree of Life group, both of which are going to need these tools.)
- Enrico Pontelli, New Mexico State University, Computer Science
- Rutger Vos, University of British Columbia, Zoology
- Arlin Stoltzfus, University of Maryland Biotechnology Institute
Initial draft of NSF Proposal
Suggested titles (must begin with "INTEROP: "):
- INTEROP: Integration and re-use of phylogenetic and comparative data by an expanding research community
- INTEROP: As phyloGood as it phyloGets
The project summary has three parts:
- Title, PI, Co-PIs, and senior project personnel
- "a succinct summary of intellectual merit" including scope of activities (communities, data types, technologies), networking activities and mechanisms for participation, and ways of providing technical expertise
- "a description of broader impacts" including interop, participation, education & training
Results from past research
- meeting costs
- staffing costs
- software design and implementation
- use case testing