Blog archive for August, 2008

FasterCures follows up on ‘Ten to Watch in 2008’

August 26th, 2008 by dwentworth

FasterCures President Greg Simon has published a follow-up to the organization’s announcement in January of the FasterCures Ten to Watch in 2008 list, on which we were honored to appear. The post provides updates on the developments, trends and organizations FasterCures identified in the original list, including “Science 2.0,” highlighting innovators in the “use of online platforms for scientific collaboration.” Among them: CollabRx, a company that creates “virtual biotechs” to put drug development in patients’ hands. CollabRx is one of our partners in the Health Commons, a project to make it easier for anyone to pull together the resources for accelerating drug discovery: research, data, materials and services.

You can read the post, Ten to Watch Mid-Year Review, over at the FasterCures blog.

Access to knowledge in science

August 25th, 2008 by dwentworth

The August/September 2008 issue of Intellectual Property Watch Monthly Reporter features the second part of a 2-part series on the Access to Knowledge (A2K) movement, which has among its goals “sharing the benefits of scientific advancement” (see the Treaty on Access to Knowledge [PDF]).

The article, Access To Knowledge “Movement” Seeks Strength In Its Diversity (available only to subscribers), cites the Introduction to Science Commons [PDF], co-authored by James Boyle and John Wilbanks, to show how the effort to change the way knowledge is governed has been reaching beyond IP policy. Scientists confront multiple barriers to accessing existing knowledge throughout the research cycle, including problems with securing access to the physical materials needed to verify results. As the IP Watch piece points out, Science Commons seeks to lower these barriers, and to provide solutions that knit together, so that when a researcher finds one piece of the puzzle — for instance, an article containing a piece of relevant data — she can also find the resources she needs to put the knowledge to use.

[Boyle and Wilbanks say that in] scientific research access to knowledge problems start at the earliest stage. Getting access to journals and physical materials needed for research can be difficult and time consuming – particularly problematic for those working on a limited-term grant, they said. This means research institutions “effectively ‘discard’ minds we might need to solve problems because they do not have full access to the research they need.”

Access to knowledge then, is about ease of networking and data transfer as much as it is about IP rights. Wilbanks and Boyle said there needs to be a connection between efforts to “streamline the legal process for clearing materials and efforts to streamline the practical process of actually fabricating and transferring the materials themselves.”

You can read more about our efforts to streamline the materials transfer process here. For a look at how we’re working to bring together all of the resources for accelerating research, check out the NeuroCommons and Health Commons projects.

What’s open science?

August 22nd, 2008 by dwentworth

That’s the question many of us have been grappling with in the wake of two unforgettable unconferences: BioBarCamp and SciFoo.

Over at Science in the open, Cameron Neylon writes:

During the introduction [at BioBarCamp] many people expressed an interest in “Open Science”, “Open Data”, or some other open stuff, yet it was already pretty clear that many people meant many different things by this. I think for me the most striking outcome of [a session to define it] was that not only is this a radically new concept for many people but that many people don’t have any background understanding of open source software either which can make the discussion totally impenetrable to them. This, in my view strengthens the need for having some clear brands, or standards, that are easy to point to and easy to sign up to (or not).

This is one of the reasons why Science Commons has published a set of principles for open science, which we prepared for our satellite workshop at ESOF 2008 (you can download the PDF or read them online here). We hope not only to help bring more clarity to the discussion, but also to pave the way for integrating all kinds of open science projects in a shared collaborative infrastructure.

For a taste of the conversation happening elsewhere, here are snippets from relevant posts published in the last few weeks:

Cat Allman @ the Google Open Source Blog: “Certain themes recurred [at SciFoo 2008]. One was the need to do a better job of open sourcing data within the science community, including negative results; such sharing would enable collaboration and prevent scientists from ‘reinventing the wheel.'”

Shirley Wu @ One Big Lab: “At BioBarCamp this past weekend (many thanks to John Cumbers and Attila Csordas for organizing!), the future of science became a recurring theme, with an impromptu discussion on open science the first day and spirited sessions on open science, web 2.0, the data commons, change in science, science ‘worship’, and redefining ‘impact‘ and ‘failure‘ the second. Each of these topics could be their own blog series, and, in fact, many of them are. Even if people didn’t always agree on the details, it was clear that everyone there (a biased group, inarguably) agreed that change is necessary, and inevitable. The question is, what will that change look like, and how will we get there?”

Cameron Neylon @ Science in the open: “Helen [Berman] made the point strongly that it had taken 37 years to get the [Protein Data Bank] to where it is today; a gold standard international and publically available repository of a specific form of research data supported by a strong set of community accepted, and enforced, rules and conventions. We don’t want to take another 37 years to achieve the widespread adoption of high standards in data availability and open practice in research more generally.”

Chris Patil @ Ouroboros: “Given a suitable set of one-to-one and one-to-many agreements between the stakeholders [in scientific research], then, the benefits of sharing could come to outweigh any conceivable advantage derived from secrecy. Perhaps ‘open science’ could be defined (for the moment) as the quest to design and optimize these agreements, along with the quest to design the best tools and licenses to empower scientists as they move from the status quo into the next system — because (and this is very important) if it is to ever succeed, open science has to work not because of governmental fiat or because a large number of people suddenly start marching in lockstep to an unnatural tune, but because it works better than competing models.”

Our own Kaitlin Thaney, who organized the ESOF satellite workshop, led an impromptu session on open science at BioBarCamp, and we’re eager to continue the conversation. If you’re interested in helping to keep the ball rolling, let us know.

Boston Globe on the open science “insurgency”

August 21st, 2008 by dwentworth

The Boston Globe today profiles three local organizations that leverage the Web to accelerate scientific research and discovery: OpenWetWare (OWW), the Journal of Visualized Experiments (JoVE) and Science Commons. The article, Out in the open: some scientists sharing results, tells the story of Barry Canton, a young researcher here at MIT who has joined the “peaceful insurgency” in scientific research by publishing preprint research and raw data, a practice often called open notebook science.

The article also touches briefly on the broader spectrum of endeavors in open science, including the work we do to help realize the vision of fully automated, permission-free access to the available research, data and materials.

If you’ve read the Globe piece and want to learn more about what Science Commons does, here’s a quick tour of our current projects. We focus on:

  • Scholar’s Copyright and Open Access Data — making scientific research “re-useful” by providing free tools for opening and marking research and data for reuse
  • Biological Materials Transfer Agreement Project — facilitating “one-click” access to research materials by streamlining and automating the materials-transfer process, so scientists can more easily replicate, verify and extend research
  • The NeuroCommons — integrating fragmented information sources in the field of neuroscience, in our “proof of concept” project to help researchers to find, analyze and use research, data and materials from disparate sources
  • The Health Commons — building the legal framework for a permission-free marketplace of drug discovery data, materials and services, to make it easier for anyone to pull together the resources for accelerating disease research

If you have any questions, please let us know — we’d like to hear from you. And if you work in scientific research and want to collaborate with us to lift legal and technical barriers to research, you can click here to learn about your options.

Voices from the future of science: Rufus Pollock of the Open Knowledge Foundation

August 18th, 2008 by dwentworth

If there’s a single quote that best captures the ethos of open science, it might be the following bon mot from Rufus Pollock, digital rights activist, economist at the University of Cambridge and a founder of the Open Knowledge Foundation: “The best thing to do with your data will be thought of by someone else.”

It’s also a pithy way to convey both the challenge and opportunity for publishers of scientific research and data. How can we best capitalize on the lessons from the rise of the Web and open source software to accelerate scientific research? What’s the optimal way to package data so it can be used in ways no one anticipates?

I talked to Pollock, who’s been a driving force behind efforts to improve sharing and reuse of data, about where we stand in developing a common legal, technical and policy infrastructure to make open science happen, and what he thinks the next steps should be.

What strategies and concepts can we use from open source to foster open science? Can you give us a big picture description of the role you see the Open Knowledge Foundation playing?

I’d say that in terms of applying lessons from open source, the biggest thing to look at is data. Code and data have so many similarities — indeed, in many ways, the distinction between code and data is beginning to blur. The most important similarity is that both lend themselves naturally to being broken down into smaller chunks, which can then be reused and recombined.

This breaking down into smaller, reusable chunks is something we at the Open Knowledge Foundation refer to as “componentization.” You can break down projects, whether they are data sets or software programs, into pieces of a manageable size — after all, the human brain can only handle so much data — and do it in a way that makes it easier to put the pieces back together again. You might call this the Humpty Dumpty principle. And splitting things up means people can work independently on different pieces of a project, while others can work on putting the pieces back together — that’s where “many minds” come in.

What’s also crucial here is openness: without openness, you have a real problem putting things together. Everyone ends up owning a different piece of Humpty, and it’s a nightmare getting permission to put him back together (to use jargon from economics, you have an anti-commons problem). Similarly, if a data set starts off closed, it’s harder for different people to come along and begin working on bits of it. It’s not impossible to do componentization under a proprietary regime, but it is a lot harder.

With the ability to recombine information as the goal, it’s critical to be explicit about openness — both about what it is, and about what you intend when you make your work available. In the world of software, the key to making open source work is licensing, and I believe the same is true for science. If you want to enable reuse — whether by humans, or more importantly, by machines operated by humans — you’ve got to make it explicit what can be used, and how. That’s why, when we started the Open Knowledge Foundation back in 2004, one of the first things we focused on was defining what “open” meant. That kind of work, along with the associated licensing efforts, can seem rather boring, but it’s absolutely crucial for putting Humpty back together. Implicit openness is not enough.

So, in terms of open science, one of the main things the Open Knowledge Foundation has been doing is conceptual work — for example, providing an explicit definition of openness for data and knowledge in the form of the open knowledge/data definition, and then explaining to people why it’s important to license their data so it conforms to the definition.

So, to return to the main question, I think one of the strategies we should be taking from open source is its approach to the Humpty Dumpty problem. We should be creating and sharing “packages” of data, using the same principles you see at work in Linux distributions — building a Debian of data, if you like. Debian has currently got something like 18,000 software packages, and these are maintained by hundreds, if not thousands, of people — many of whom have never met. We envision the community being able to do the same thing with scientific and other types of data. This way, we can begin to divide and conquer the complexity inherent in the vast amounts of material being produced — complexity I don’t see us being able to manage any other way.
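
To make the “Debian of data” idea concrete, here is a toy sketch of dependency resolution over data packages, the same mechanism a package manager uses for software. The package names and the `depends` field are invented for illustration; they don’t come from any real registry. (The sketch uses `graphlib` from the Python standard library, available in Python 3.9+.)

```python
from graphlib import TopologicalSorter

# Each hypothetical data package declares the data sets it builds on,
# the way a .deb declares the libraries it needs.
PACKAGES = {
    "uk-postcodes-2008":     {"depends": []},
    "uk-census-2001":        {"depends": []},
    "uk-health-by-postcode": {"depends": ["uk-postcodes-2008", "uk-census-2001"]},
}

def install_order(name, registry):
    """Return the packages to fetch, dependencies first."""
    ts = TopologicalSorter()

    def visit(pkg):
        deps = registry[pkg]["depends"]
        ts.add(pkg, *deps)  # pkg depends on each entry in deps
        for d in deps:
            visit(d)

    visit(name)
    return list(ts.static_order())

# Requesting the combined data set pulls in its components first.
print(install_order("uk-health-by-postcode", PACKAGES))
```

The point of the exercise: once packages declare what they depend on, “putting Humpty back together” becomes a mechanical step rather than a negotiation.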

Your Comprehensive Knowledge Archive Network (CKAN) is a registry for open knowledge packages and projects, and people have added more than 100 in the past year. Can you tell us how the project got started? What have the recent updates achieved? And what are your future plans — where do you hope to go next?

If you’ve got an ambitious goal like this one [of radically changing data sharing and production practices], you’ve got to start with a modest approach — asking, “what is the simplest thing we can do that would be useful?” So we began by identifying some of the key things necessary for a knowledge-sharing infrastructure, to figure out what we could contribute. Sometimes what’s needed is conceptual, like our definitions. Sometimes you need a guide for applying concepts, like our principles for open knowledge development. And you need a way to share resources, which is why we started KnowledgeForge, which hosts all kinds of knowledge development projects.

The impetus behind CKAN was to make it easier for people to find open data, as well as to make their data available to others (especially in a way that can be automated). If you use Google to search for data, you’re much more likely to find a page about data than you are to find the data itself. As a scientist, you don’t want to find just one bit of information — you want the whole set. And you don’t want shiny front ends or permission barriers at any point in the process. We’ve been making updates to CKAN so machines can better interact with the data, so that people who want data don’t have to clear as many hurdles to get it. Ultimately, we want people to be able to request data sets and have the software automatically install any additions and updates on their computers.
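
The kind of automatic update Pollock describes depends on machine-readable package metadata. Here is a minimal sketch of the client-side check such metadata could enable; the registry snapshot, package names and version tuples are hypothetical, purely for illustration, not CKAN’s actual API.

```python
def updates_needed(installed, registry):
    """Return package names to fetch: newly published packages plus stale ones."""
    stale = [name for name, version in installed.items()
             if name in registry and registry[name] > version]
    new = [name for name in registry if name not in installed]
    return sorted(stale + new)

# What this machine already has, and what the registry now advertises.
installed = {"genome-annotations": (1, 0), "river-levels": (2, 3)}
registry  = {"genome-annotations": (1, 2),   # updated upstream
             "river-levels":       (2, 3),   # unchanged
             "air-quality":        (0, 9)}   # newly published

print(updates_needed(installed, registry))  # → ['air-quality', 'genome-annotations']
```

With a registry exposing this kind of metadata over the network, the same comparison could run unattended — which is exactly the “automatically install any additions and updates” scenario described above.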

What are the biggest challenges to making open science work? If you had to lay out a 3-point agenda for the next five years, what would the action items be?

I think that, as with nearly everything else, the social and cultural challenges may be the biggest hurdle. One aspect of making it work is ensuring that more people understand exactly what they can gain from sharing. I think it’s like a snowball: you might not get much back, initially, from sharing, but over time, you’d be able to see your data sets plugged in with other data sets, and your peers doing the same thing. The results might encourage you to share more.

As for a 3-point agenda:

1.) Open access is very important. In particular, I’d like to see the funders of science mandate not just open access to publications but also, as part of the process, open access to the data. They are paying for the research, so they can provide the incentive to make the results open. Moreover, it should be easier to get open access to the data; you wouldn’t necessarily have the same kind of struggle with publishers.

2.) I think we need more evangelism/advocacy for open science. We’re seeing big shifts in the way we do science, but we’re still on the cusp of a movement to bring open approaches together in a common infrastructure.

3.) We need to make it easier for people to share and manage large data sets. Open science is already working in some respects, but we need a better infrastructure for handling the data itself. I also think that many people are put off sharing because they think they don’t know how to manage data. That causes people to hesitate or give up completely. We need to make the process smoother. Sharing your data should be as frictionless as possible.

What do you see as the most important development in open science over the last year?

Without question, the progress we’re making with data licensing. We have the Science Commons Protocol for Implementing Open Access Data, which conforms to the Open Knowledge Definition, and the very first open data licenses that comply with the protocol: the Open Data Commons Public Domain Dedication and License (ODC-PDDL) and the CC0 public domain waiver. We now need to encourage people to start using these waivers — or any other open license that complies.

When I talk to people about what the open science movement is trying to achieve, the most common response I get is, “Well, won’t Google take care of that?” Do you hear that? What’s your response?

I would ask, “Well, what is ‘that’?” You find that many people believe that if you put something online, it’s automatically open, and Google will do the rest. Google is great, but it can’t handle things like community standards or usage rights. And in any case, I’m deeply skeptical of “one ring to rule them all” solutions. What we need is more along the lines of “small pieces, loosely joined.” Of course organizations like Google could help a lot (or hurt!), and they’re certainly an important part of the ecosystem. But at the Open Knowledge Foundation, we like to say that the revolution will be decentralized. No one person, organization or company is going to do everything. Even Google didn’t make the Web standards or create the web pages and hyperlinks that make search engines work. As it stands, Google may be good for finding bits of Humpty, but not for creating or putting him back together.

Have you read Chris Anderson’s piece, The End of Theory: The Data Deluge Makes the Scientific Method Obsolete? If so, what’s your take on it?

I’ll be politic and say that it’s provocative but ultimately unconvincing. There are reasons why we have theory. Imagine a library where you could have any book you want, but there are no rules for searching, so you have to search every book. The knowledge space is just too vast. In economics, just like in science, you need models to isolate the variables you’re interested in. There may be millions of variables, for instance, to explain why you’re a happy person right now. You had a happy childhood, you just listened to a symphony, etc. And the number of possible explanations (or, more formally, “regressions”) grows exponentially with the variables, so you’re creating a situation that’s computationally hard — problems that, using brute force, would take longer than the lifetime of the universe to solve, even with the fastest supercomputers around.
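
Pollock’s combinatorial point can be put in numbers. With n candidate variables there are 2**n possible variable subsets, i.e. possible simple models to test, so brute-force model search blows up long before data or compute can rescue it. (The billion-regressions-per-second rate below is an assumption chosen just for scale.)

```python
def candidate_models(n_variables):
    """Number of distinct variable subsets -- one candidate model per subset."""
    return 2 ** n_variables

for n in (10, 50, 100):
    print(n, candidate_models(n))

# At a (generous) billion regressions per second, exhausting the
# 2**100 candidate models would take on the order of 4e13 years --
# thousands of times the age of the universe (~1.4e10 years).
years = candidate_models(100) / 1e9 / (3600 * 24 * 365)
print(f"{years:.1e} years")
```

Which is the argument for theory in one line: models prune this space before anyone starts computing.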

I’d argue that with more data, you need more, not less modeling insight. As the haystack grows, finding the needle by brute force is likely to be a less attractive, not more attractive option. Of course it’s true that more data and more computational power are a massive help in making progress in science or any other area. It’s just that they have to be used intelligently.

On a more personal note, how does being an economist inform your approach/perspective?

Economists study information goods a lot, so I’d say my background has been very influential. Economics 101 tells us that openness is often the most efficient way to do things, especially when there’s the possibility of up-front funding by, for instance, the government. There are clear, massive benefits for society in having a healthy, balanced information commons. Unfortunately, it is often the case that those who benefit from proprietarization have better-paid advocates, better-oiled PR machines, etc.

My hope is that this work that so many of us are doing pro bono, often in our spare time, will slowly increase in impact — and that, at a minimum, we can ensure that all publicly funded scientific research will be open.

Intelligent Television: The Open Access Documentary Project

August 15th, 2008 by dwentworth

Intelligent Television has posted early clips from a documentary project it’s undertaking with BioMed Central to showcase the benefits of open access (OA) to scientific and medical research. The project will produce a series of videos featuring interviews with activists, publishers and other stakeholders in OA, as well as consumers of OA information in the developed and developing world.

Check out the clips for highlights of interviews with our own John Wilbanks, Vice President of Science at Creative Commons, and Heather Joseph, Executive Director of SPARC.

Microsoft Research launches new tools for knowledge sharing

August 1st, 2008 by dwentworth

Big news: Microsoft Research has unveiled new add-ins for some of the most popular Microsoft products to make them more useful for the scientific community — including tools for creating, sharing and preserving research in the formats used by scientific publishers and digital archives. The suite of add-ins, described in detail here, includes the Creative Commons Add-in for Office 2007, which lets anyone embed a Creative Commons license directly into their documents.

Using the Creative Commons Add-in, you can choose from among the licenses available on the CC site to express your intentions regarding the use of your work. The embedded license links directly to its online representation at the CC site, while a machine-readable representation is stored in the Office Open XML document.

The Chronicle of Higher Education, reporting on the launch:

Saying it wants to help scholars and publishers write, edit, and publish academic articles, this week Microsoft Corporation rolled out a set of new software tools to perform those tasks, as well as to navigate thorny copyright issues and find and share scholarly data. …

For example, the Article Authoring Add-in for Word 2007 enables authors to structure and annotate their documents according to formats that publishers and digital archives require. The articles can then be converted easily to formats that facilitate their digital storage and preservation. The company is offering the new software free to licensed users of Word and other Microsoft products.

The tool allows users to create documents in the widely used format developed by the National Library of Medicine’s free digital archive of peer-reviewed biomedical and life-sciences journal literature, PubMed Central. But users will also be able to shape the software to suit other formats because the code for the tool is openly accessible and freely adaptable. …

“We’ve never before addressed what we could put around Office, Excel, SharePoint, and our other programs to make them more useful for science,” said Tony Hey, corporate vice president of Microsoft’s external-research division. “For example, Word was not tailored for scientific papers. But we decided to see, Can we make it more useful in that way?”

He said the company is also responding to the demand for researchers to provide greater access to their findings, and even their research data. Already the National Institutes of Health requires that any publications from research it finances be placed in PubMed Central within one year of publication. The National Science Foundation has a similar requirement, as do Harvard University’s faculties of law and of arts and sciences.

Such developments have increasingly raised concerns about copyrights and fair reuse of archived materials. So to help authors, publishers, and databases embed information about copyrights and licenses in Microsoft Office documents, the company released another free product, called the Creative Commons Add-in for Office 2007.

Science Commons visited the development team working on the add-ins in Seattle last year, and we’re excited to support this initiative.

“There are fundamental shifts taking place in how we manage the flow of scientific knowledge, and they bring demand for new tools that expand our choices for knowledge sharing and collaboration,” says John Wilbanks, Vice President of Science at Creative Commons. “We’re thrilled that Microsoft has taken these important steps to meet that demand.”

Update: Jon Udell has an excellent interview with Tony Hey about the initiative: How Microsoft’s External Research Division works with a new breed of e-scientists.