Science Commons was re-integrated with Creative Commons. This content is no longer maintained and remains only for reference.

Blog archive for March, 2008

Before the boom: 13 percent of cancer literature is free

March 31st, 2008 by dwentworth

Heather Morrison, who’s been tracking the growth of open access to medical literature, has posted baseline figures for the percentage of literature on cancer that’s freely available online in full text format, pre-NIH mandate:

13% of the literature in PubMed on cancer links to Free Fulltext.
By publication date range:
7% – within last 30 days
10% – within the last year
17% – within the last two years
21% – within the last 10 years

Opening access is the foundation for making the medical literature useful in the digital era, facilitating machine-assisted research using Semantic Web technologies — something that will become even more critical once the mandate goes into effect and the percentage of open literature starts rising.
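A snapshot like Morrison’s can be approximated by comparing PubMed hit counts with and without its free-full-text subset filter. The Python sketch below uses the NCBI E-utilities esearch endpoint and the “free full text”[sb] filter; it is an illustration of the approach, not her exact method, and live counts will differ from the 2008 baseline.

```python
import json
import urllib.parse
import urllib.request

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_count(term):
    """Hit count for a PubMed query via the NCBI E-utilities esearch endpoint."""
    url = f"{EUTILS}?db=pubmed&retmode=json&term={urllib.parse.quote(term)}"
    with urllib.request.urlopen(url) as resp:
        return int(json.load(resp)["esearchresult"]["count"])

def free_fulltext_share(free, total):
    """Percentage (rounded) of records that link to free full text."""
    return round(100 * free / total)

# Example usage (live network calls; the counts change daily):
#   total = pubmed_count("cancer")
#   free  = pubmed_count('cancer AND "free full text"[sb]')
#   print(f"{free_fulltext_share(free, total)}% of PubMed cancer records "
#         "link to free full text")
```

Date-range breakdowns like Morrison’s can be produced the same way by adding PubMed date filters (e.g. a publication-date range) to both queries before dividing.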

(Hat tip to Gavin Baker @ Open Access News)

What can universities do to promote open access?

March 28th, 2008 by dwentworth

Open access leader Peter Suber answers that question in the characteristically thorough and engaging lecture he gave on March 17th at Harvard’s Berkman Center for Internet & Society. The talk, co-sponsored by the Berkman Center, Science Commons and Harvard’s Center for Research on Computation and Society, gives a tour of five ways that universities can promote open access to research:

  • launching and filling their own OA repositories
  • supporting peer-reviewed OA journals
  • supporting OA monographs from their university presses
  • fine-tuning their promotion and tenure criteria to support excellent research even in unconventional places
  • educating faculty about copyright and OA itself

The Berkman Center has now posted video and audio of the entire lecture and ensuing discussion, and the slides are available here. And if you’re interested in responses, check out Stevan Harnad’s detailed commentary, as well as Suber’s reply at Open Access News.

Science Commons @ BioIT World 2008, April 28-30

March 27th, 2008 by dwentworth

Here at Science Commons, we’re working to improve human health — to cure diseases and save lives. And if human health is the goal and innovation the engine, then we have to start using all the information available, and applying the best technologies to it.

In that spirit, Science Commons Principal Scientist Alan Ruttenberg is participating in this year’s BioIT World Conference & Expo in Boston, where he’s conducting a pre-conference workshop entitled Harnessing the Semantic Web for Your Organization. Ruttenberg is the chair of the W3C’s Web Ontology Language (OWL) Working Group and a coordinating editor of the Open Biomedical Ontologies (OBO) Foundry. The workshop, which takes place from 8:00 a.m. to 12:15 p.m. on April 28th, will show how Semantic Web technologies are being used to solve the difficult data integration challenges that must be overcome to make effective progress in areas such as translational medicine, understanding mechanisms of action, and managing regulatory documentation efficiently.
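The core idea behind that kind of data integration is simple to sketch: once facts from different sources are expressed as triples in one shared graph model, a single query can join across them. The toy example below is plain Python rather than a real RDF/SPARQL stack, and all of its names (compound42, inhibits, implicated_in, and so on) are invented for illustration; real projects would use shared ontologies such as those in the OBO Foundry.

```python
# Triples from two hypothetical sources, expressed in one shared shape.
lab_a = [("compound42", "inhibits", "proteinX")]
lab_b = [("proteinX", "implicated_in", "diseaseY")]

# "Integration" is just a union once both sources use the triple model.
graph = set(lab_a) | set(lab_b)

def find(pattern):
    """Match (subject, predicate, object) patterns; None is a wildcard."""
    s, p, o = pattern
    return [t for t in graph
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Join across the two sources: which compounds touch which diseases?
hits = [(compound, disease)
        for compound, _, protein in find((None, "inhibits", None))
        for _, _, disease in find((protein, "implicated_in", None))]
print(hits)  # → [('compound42', 'diseaseY')]
```

Neither lab recorded a compound–disease link directly; it only emerges from the merged graph, which is the payoff the workshop’s Semantic Web technologies deliver at scale.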

If you’re going to BioIT World and you’d like to learn more about the Semantic Web approach to accelerating research and discovery, join us. We hope to see you there.

What’s “cyberinfrastructure”?

March 24th, 2008 by dwentworth

One of the biggest challenges we face at Science Commons is explaining what we do — and, much more important, why it matters.

To that end, we are publishing a series of posts to bring more clarity to the terms and phrases we use. To make sure these posts are truly useful, we’ll be asking for your feedback. Got questions? Criticism? We hope you’ll send us an email or add your comments to the post. (Note: the definitions in these posts aren’t meant to be formal; they’re aimed at sparking discussion and helping more people understand our work.)

The first time out, we took on open source knowledge management. This time, we’re tackling “cyberinfrastructure.”

According to the National Science Foundation, cyberinfrastructure is “like the physical infrastructure of roads, bridges, power grids, telephone lines and water systems that support modern society,” but “refers to the distributed computer, information and communication technologies combined with the personnel and integrating components that provide a long-term platform to empower the modern scientific research endeavor.” (People in other countries use different terms for roughly the same concept; in the UK and Australia, for instance, cyberinfrastructure is referred to as “e-science” and “e-research,” respectively.)

It’s important to note that cyberinfrastructure is distinct from the Internet, which is only one of the elements that it comprises.

To understand the difference, consider the insightful piece that Alice Park of TIME wrote last year on why the promising new vaccine for AIDS failed, which looks beyond biological factors to examine the state of scientific research as a whole. Writes Park, “Most research occurs in isolation; there’s little coordination among labs and no network through which data can be shared, making it difficult for scientists to learn from each other’s missteps” (emphasis mine). She goes on to quote Dr. Alan Bernstein of the Global HIV Vaccine Enterprise, who describes science as an “iterative process” where, regrettably, “there isn’t a lot of iteration going on.”

Of course, there is a network that we can use for sharing scientific data: the Internet. What’s missing here is infrastructure — but not in the purely technical sense. We need more than computers, software, routers and fiber to share scientific information more efficiently; we need a legal and policy infrastructure that supports (and better yet, rewards) sharing.

At Science Commons, we use the term “cyberinfrastructure” — and more often, “collaborative infrastructure” — in this broader sense. Elements of an infrastructure can include everything from software and web protocols to licensing regimes and development policies. Science Commons is working to facilitate the emergence of an open, decentralized infrastructure designed to foster knowledge re-use and discovery — one that can be implemented in a way that respects the autonomy of each collaborator. We believe that this approach holds the most promise as we continue the transition from a world where scientific research is carried out by large teams with supercomputers to a world where small teams — perhaps even individuals — can effectively use the network to find, analyze and build on one another’s data.

If you’d like to learn more about what we’re doing, you’ll find details in Cyberinfrastructure for Knowledge Sharing, a CTWatch Quarterly piece by our own John Wilbanks. If you have questions, send us an email. We’d love to hear from you.

Publishing for the future of science

March 17th, 2008 by dwentworth

In the past few weeks, we saw two remarkable stories emerge from research in the life sciences — remarkable not just because they made headlines, but because they give us a tantalizing glimpse of the potential for a new kind of publishing in science.

In the first, as Aaron Rowe at Wired News reported and Cory Doctorow blogged, a pair of researchers from Australia developed a blood test for African sleeping sickness — a relatively simple test that Rowe points out can be conducted without the “fancy equipment found in upscale medical labs.” Notably, the researchers published the findings at PLoS Neglected Tropical Diseases under a Creative Commons Attribution License — making freely available not only the results but the lab protocols for conducting the test itself.

In the second, we discovered that butterflies may remember what they learned as caterpillars — findings that were published (once again) by PLoS, then picked up by multiple media outlets, including New Scientist, National Geographic, Science, Wired News and NPR’s Morning Edition.

So what do these stories have to do with the future of scientific publishing? The PLoS One tagline is “Publishing science, accelerating research,” and for good reason: it is one of the pioneering open access publishers demonstrating the value of moving beyond the paper metaphor in the digital age — so that publishing can serve the progress of science, not hamper it. In the traditional publishing model, we “reward” good science by locking it up with legal and technical restrictions — making it less, not more, useful to the people who can make sense of it. PLoS One makes every article it publishes available under the Creative Commons Attribution License — enabling maximum redistribution and reuse of the research while ensuring that the authors retain their copyrights and are properly credited for their work.

At Science Commons, we’re working toward a future where a published “paper” is dynamic — or as UK journalist Richard Poynder put it, “no longer simply an article to be viewed by as many eyeballs as possible,” but “the launch pad” for verifying and extending research. When a researcher clicks through to read an article, for instance, she should be able not only to see how the research was conducted, but also to click to order the research materials she needs to replicate the data.

These articles represent just a snapshot of the brilliant research that’s already being published openly. With the NIH open access mandate going into effect on April 7, we’ll start to see even more of the benefits that new models for scientific publishing can bring.

Response to STM statement on author addenda

March 14th, 2008 by Thinh

The International Association of Scientific, Technical & Medical Publishers (STM) released a statement this March called “Statement on journal publishing agreements and copyright agreement ‘addenda.'” It dismisses as “rhetorical” the concerns of scholars, scientists, and universities that publisher copyright agreements leave authors without sufficient rights to share or re-use their own articles. The statement suggests that “standard journal agreements” already allow authors to retain the rights that various copyright addenda, like the ones offered by Science Commons, SPARC, MIT, and others, were designed to secure. Thus, they seem to suggest, the addenda are superfluous at best.

However, despite their insistence that “most” journal publication agreements “typically” allow authors to retain some combination of rights, the reality is that there is no “standard” publication agreement. Publication agreements vary widely in what rights they allow scholars to keep, ranging from full rights of re-use and sharing, to sometimes exotic format restrictions (you can distribute the DOC or HTML version but not the PDF), to no rights at all, so that scholars have to purchase copies of their own articles if they want to distribute them to colleagues. The Sherpa project has a large database showing the variations among journal policies. Unfortunately, even Sherpa’s summaries of these policies do not always reflect the most accurate or up-to-date information, because journals can change their publication agreements or policies at any time. Some of these policies are buried in fine print, some are found only on obscure journal web pages, and some are not published anywhere at all, communicated to a scholar only when he or she takes the trouble to call the publisher.

Copyright addenda are needed because most authors don’t have a lawyer, much less a whole legal department or law firm (as most publishers have), to parse the legal language of publication agreements for them. Nor do they have the time to search journal Web sites for hard-to-find policies and to stay current with policy changes. By attaching a standard addendum, scholars can ensure that they retain the rights they expect to have without having to be lawyers themselves. With more private and public funders mandating open access, scholars need clarity and transparency now more than ever. Overly general statements about what “typical” or “most” publication agreements allow offer little comfort.

It is, nonetheless, a step in the right direction for journals to acknowledge that authors should be able to retain more rights to their own articles. Authors receive no compensation for their articles, and are often called upon to provide peer review for others without compensation. Journals, of course, provide valuable services, including the coordination of peer review, for which they ought to receive fair compensation. However, this statement by these publishers implicitly acknowledges that the balance has rested too far in favor of restrictive journal policies intended to protect revenue streams, and that this balance has been shifting, and needs to shift further, in favor of authors’ freedom and the public interest.

BioMed Central’s smart push to publish “negative data”

March 13th, 2008 by dwentworth

BioMed Central — a champion among open access publishers — has just announced the launch of BMC Research Notes, a new journal that aims to complete the scientific record by publishing so-called dark data. That includes information about failed experiments, “disappointing” follow-up results and findings that traditional publishers deem scientifically sound but not headline-worthy.

Why would we want to publish these kinds of materials? As the announcement explains:

Small studies and confirmatory studies produce a valuable body of work – most science progresses by small advances rather than headline breakthroughs. Replication of results is essential and these studies are included in meta-analyses and systematic reviews. If the results of a study confirm what is already known or a new method or software tool offers an alternative approach rather than a major advance then we want authors to state this rather than being tempted to make exaggerated claims. Negative results may seem disappointing, but support for the null hypothesis is important and publishing such studies is essential to avoid publication bias.

This is a very smart move, and a brilliant demonstration of why we need to make the leap beyond the traditional publishing model: it isn’t serving the progress of science. We can’t move forward if we continue to use a model that systematically forces scientists to double back, re-explore blind alleys and repeat one another’s work.

Bravo to BioMed for continuing to make the case for publishing that makes sense for science.

(If you’d like to read more about BioMed Central publications, here’s our post on BMC Proceedings, where we first took note of BMC Research Notes.)