Blog archive for the ‘weblog’ Category

Science Commons @ BioIT World 2008, April 28-30

March 27th, 2008 by dwentworth

Here at Science Commons, we’re working to improve human health — to cure diseases and save lives. And if human health is the goal, and innovation the engine, then we have to start using all the information available, and applying the best technologies to it.

In that spirit, Science Commons Principal Scientist Alan Ruttenberg is participating in this year’s BioIT World Conference & Expo in Boston, where he’s conducting a pre-conference workshop entitled “Harnessing the Semantic Web for Your Organization.” Ruttenberg is the chair of the W3C’s Web Ontology Language (OWL) Working Group and a coordinating editor of the Open Biomedical Ontologies (OBO) Foundry. The workshop, which runs from 8:00 a.m. to 12:15 p.m. on April 28th, will show how Semantic Web technologies are being used to solve the difficult data integration challenges that stand in the way of progress in areas such as translational medicine, understanding mechanisms of action, and the efficient management of regulatory documentation.
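For readers new to the approach, the core idea behind Semantic Web data integration is that data published as RDF can be merged and queried across sources without custom glue code. Below is a minimal sketch using the Python rdflib library; the URIs and vocabulary terms are invented purely for illustration.

```python
# Minimal sketch of Semantic Web data integration: two RDF datasets
# published independently are merged into a single graph and queried
# together. All URIs and vocabulary terms are hypothetical.
from rdflib import Graph

gene_data = """
@prefix ex: <http://example.org/vocab#> .
<http://example.org/gene/G1> ex:associatedWith <http://example.org/disease/sleeping-sickness> .
"""

disease_data = """
@prefix ex: <http://example.org/vocab#> .
<http://example.org/disease/sleeping-sickness> ex:label "African sleeping sickness" .
"""

g = Graph()
g.parse(data=gene_data, format="turtle")     # load one source
g.parse(data=disease_data, format="turtle")  # merge a second source

# A single SPARQL query now spans both sources.
results = g.query("""
    PREFIX ex: <http://example.org/vocab#>
    SELECT ?gene ?name WHERE {
        ?gene ex:associatedWith ?disease .
        ?disease ex:label ?name .
    }
""")
for gene, name in results:
    print(gene, "->", name)
```

Because the two sources use shared identifiers and a common vocabulary, merging them is just a union of triples; no bespoke schema-mapping code is required.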

If you’re going to BioIT World and you’d like to learn more about the Semantic Web approach to accelerating research and discovery, join us. We hope to see you there.

What’s “cyberinfrastructure”?

March 24th, 2008 by dwentworth

One of the biggest challenges we face at Science Commons is explaining what we do — and, much more important, why it matters.

To that end, we are publishing a series of posts to bring more clarity to the terms and phrases we use. To make sure these posts are truly useful, we’ll be asking for your feedback. Got questions? Criticism? We hope you’ll send us an email or add your comments to the post. (Note: the definitions in these posts aren’t meant to be formal; they’re aimed at sparking discussion and helping more people understand our work.)

The first time out, we took on “open source knowledge management.” This time, we’re tackling “cyberinfrastructure.”

According to the National Science Foundation, cyberinfrastructure is “like the physical infrastructure of roads, bridges, power grids, telephone lines and water systems that support modern society,” but “refers to the distributed computer, information and communication technologies combined with the personnel and integrating components that provide a long-term platform to empower the modern scientific research endeavor.” (People in other countries use different terms for roughly the same concept; in the UK and Australia, for instance, cyberinfrastructure is referred to as “e-science” and “e-research,” respectively.)

It’s important to note that cyberinfrastructure is distinct from the Internet, which is only one of the elements that it comprises.

To understand the difference, consider the insightful piece that Alice Park of TIME wrote last year on why a promising new AIDS vaccine failed, which looks beyond biological factors to examine the state of scientific research as a whole. Writes Park, “Most research occurs in isolation; there’s little coordination among labs and no network through which data can be shared, making it difficult for scientists to learn from each other’s missteps” (emphasis mine). She goes on to quote Dr. Alan Bernstein of the Global HIV Vaccine Enterprise, who describes science as an “iterative process” where, regrettably, “there isn’t a lot of iteration going on.”

Of course, there is a network that we can use for sharing scientific data: the Internet. What’s missing here is infrastructure — but not in the purely technical sense. We need more than computers, software, routers and fiber to share scientific information more efficiently; we need a legal and policy infrastructure that supports (and better yet, rewards) sharing.

At Science Commons, we use the term “cyberinfrastructure” — and more often, “collaborative infrastructure” — in this broader sense. Elements of an infrastructure can include everything from software and web protocols to licensing regimes and development policies. Science Commons is working to facilitate the emergence of an open, decentralized infrastructure designed to foster knowledge re-use and discovery — one that can be implemented in a way that respects the autonomy of each collaborator. We believe that this approach holds the most promise as we continue the transition from a world where scientific research is carried out by large teams with supercomputers to a world where small teams — perhaps even individuals — can effectively use the network to find, analyze and build on one another’s data.

If you’d like to learn more about what we’re doing, you’ll find details in Cyberinfrastructure for Knowledge Sharing, a CTWatch Quarterly piece by our own John Wilbanks. If you have questions, send us an email. We’d love to hear from you.

Publishing for the future of science

March 17th, 2008 by dwentworth

In the past few weeks, we saw two remarkable stories emerge from research in the life sciences — remarkable not just because they made headlines, but because they give us a tantalizing glimpse of the potential for a new kind of publishing in science.

In the first, as Aaron Rowe at Wired News reported and Cory Doctorow blogged, a pair of researchers from Australia developed a blood test for African sleeping sickness — a relatively simple test that Rowe points out can be conducted without the “fancy equipment found in upscale medical labs.” Notably, the researchers published the findings in PLoS Neglected Tropical Diseases under a Creative Commons Attribution License — making freely available not only the results but also the lab protocols for conducting the test itself.

In the second, we discovered that butterflies may remember what they learned as caterpillars — findings that were published (once again) by PLoS, then picked up by multiple media outlets, including New Scientist, National Geographic, Science, Wired News and NPR’s Morning Edition.

So what do these stories have to do with the future of scientific publishing? The PLoS One tagline is “Publishing science, accelerating research,” and for good reason: it is one of the pioneering open access publishers demonstrating the value of moving beyond the paper metaphor in the digital age — so that publishing can serve the progress of science, not hamper it. In the traditional publishing model, we “reward” good science by locking it up with legal and technical restrictions — making it less, not more, useful to the people who can make sense of it. PLoS One makes every article it publishes available under the Creative Commons Attribution License — enabling maximum redistribution and reuse of the research while ensuring that the authors retain their copyrights and are properly credited for their work.

At Science Commons, we’re working toward a future where a published “paper” is dynamic — or as UK journalist Richard Poynder put it, “no longer simply an article to be viewed by as many eyeballs as possible,” but “the launch pad” for verifying and extending research. When a researcher clicks through to read an article, for instance, she should be able not only to see how the research was conducted, but also to click to order the research materials she needs to replicate the data.

These articles represent just a snapshot of the brilliant research that’s already being published openly. With the NIH open access mandate going into effect on April 7, we’ll start to see even more of the benefits that new models for scientific publishing can bring.

Response to STM statement on author addenda

March 14th, 2008 by Thinh

The International Association of Scientific, Technical & Medical Publishers (STM) released a statement this March entitled “Statement on journal publishing agreements and copyright agreement ‘addenda’.” It dismisses as “rhetorical” the concerns of scholars, scientists and universities that publisher copyright agreements leave authors without sufficient rights to share or re-use their own articles. The statement suggests that “standard journal agreements” already allow authors to retain the rights that various copyright addenda, like the ones offered by Science Commons, SPARC, MIT and others, were designed to secure. Thus, they seem to suggest, the addenda are superfluous at best.

However, despite STM’s insistence that “most” journal publication agreements “typically” allow authors to retain some combination of rights, the reality is that there is no “standard” publication agreement. Publication agreements vary widely in which rights they allow scholars to keep, ranging from full rights of re-use and sharing, to exotic format restrictions (you may distribute the Word or HTML version, but not the PDF), to no rights at all, so that scholars must purchase copies of their own articles if they want to distribute them to colleagues. The Sherpa project maintains a large database documenting the variations among journal policies. Unfortunately, even Sherpa’s summaries of these policies do not always reflect the most accurate or up-to-date information, because journals can change their publication agreements or policies at any time. Some of these policies are buried in fine print, some can be found only on obscure journal web pages, and some are not published anywhere at all, communicated to scholars only when they call the publisher to ask.

Copyright addenda are needed because most authors don’t have a lawyer, much less a whole legal department or law firm (as most publishers do), to parse the legal language of publication agreements for them. Nor do they have the time to search journal Web sites for hard-to-find policies and to keep up with policy changes. By attaching a standard addendum, scholars can ensure that they retain the rights they expect to have without having to become lawyers themselves. With more private and public funders mandating open access, scholars need clarity and transparency now more than ever. Overly general statements about what “typical” or “most” publication agreements allow should be of little comfort.

It is, nonetheless, a step in the right direction for journals to acknowledge that authors should be able to retain more rights to their own articles. Authors receive no compensation for their articles, and are often called upon to provide peer review for others, also without compensation. Journals, of course, provide valuable services, including the coordination of peer review, for which they ought to receive fair compensation. But the STM statement implicitly acknowledges that the balance has rested too far in favor of restrictive journal policies intended to protect revenue streams, and that this balance has been shifting, and needs to shift further, in favor of authors’ freedom and the public interest.

BioMed Central’s smart push to publish “negative data”

March 13th, 2008 by dwentworth

BioMed Central — a champion among open access publishers — has just announced the launch of BMC Research Notes, a new journal that aims to complete the scientific record by publishing so-called dark data. That includes information about failed experiments, “disappointing” follow-up results and findings that traditional publishers deem scientifically sound but not headline-worthy.

Why would we want to publish these kinds of materials? As the announcement explains:

Small studies and confirmatory studies produce a valuable body of work – most science progresses by small advances rather than headline breakthroughs. Replication of results is essential and these studies are included in meta-analyses and systematic reviews. If the results of a study confirm what is already known or a new method or software tool offers an alternative approach rather than a major advance then we want authors to state this rather than being tempted to make exaggerated claims. Negative results may seem disappointing, but support for the null hypothesis is important and publishing such studies is essential to avoid publication bias.

This is a very smart move, and a brilliant demonstration of why we need to make the leap beyond the traditional publishing model: it isn’t serving the progress of science. We can’t move forward if we continue to use a model that systematically forces scientists to double back, re-explore blind alleys and repeat one another’s work.

Bravo to BioMed for continuing to make the case for publishing that makes sense for science.

(If you’d like to read more about BioMed Central publications, here’s our post on BMC Proceedings, where we first took note of BMC Research Notes.)

White paper released to help universities comply with NIH’s Public Access policy

February 29th, 2008 by Kaitlin Thaney

Come April 7, NIH-funded researchers will be required to archive their work in PubMed Central no later than 12 months after publication. To help universities prepare, Science Commons, SPARC and ARL have jointly released a white paper exploring the copyright-related issues raised by the new NIH Public Access Policy. The paper, “Complying with the National Institutes of Health Public Access Policy: Copyright Considerations and Options,” arms university provosts, researchers and administrators with the information they need to comply fully with the mandate, which was announced this past January.

From SPARC’s press release:

[S]aid Heather Joseph, executive director of SPARC: “The sooner we can get effective implementing mechanisms in place, the sooner researchers, institutions, and the public can put PubMed Central to work. With April implementation drawing near, this paper will be a great tool to help administrators jumpstart the local planning process.”

[…] “Congress and the NIH recognize that the Internet makes a difference,” said John Wilbanks, Vice President of Science Commons. “Faculty authors can no longer sign away their copyrights in a business-as-usual manner when doing so means that their work will never be openly accessible over the Internet. This white paper is a step in making sure authors and universities understand how to move forward with a solid legal footing.”

The press release and the white paper can be accessed in their entirety on SPARC’s Web site, as well as in the Science Commons Reading Room. Many thanks to Michael Carroll for authoring this wonderful resource.

Beyond open access

February 27th, 2008 by dwentworth

In the introduction to his interview [PDF] with our own John Wilbanks, UK journalist Richard Poynder succinctly captures the Science Commons perspective on open access — that making research freely accessible online is only the beginning of making it useful for scientists:

John Wilbanks, VP of Science Commons, has an even broader view of the role the Internet has to play in science. Like Murray-Rust, Wilbanks believes it is essential for research papers to be machine-readable. Likewise, he believes we need to develop an appropriate legal infrastructure to facilitate this. He also believes it is essential that science databases are freely available, and that these databases are interoperable — not just with one another, but with research literature.

In addition, Wilbanks believes the Internet should be viewed as a platform for facilitating the free circulation and sharing of the physical tools of science — cell lines, antibodies, plasmids etc. In a sense, he wants to see these tools embedded into research papers — so if a reader of an Open Access paper wants more detailed information on, say, a cell line, they should be able to click on a link and pull up information from a remote database. […]

The end game, explains Wilbanks, is to make the research process as seamless and frictionless as possible. This implies that the scholarly paper is no longer simply an article to be viewed by as many eyeballs as possible, but also the raw material for multiple machines and software agents to data mine, a front-end to hundreds of databases, and the launch pad for an ecommerce system designed to speed up the process of research.

In this light, Open Access is not an end in itself, but the necessary precondition for a complete revolution in the way that science is done…

Precisely.
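To make the idea concrete, here is a purely hypothetical sketch, in the same rdflib style as the integration example above, of a paper whose referenced research materials are exposed as machine-readable metadata that a reader’s software could follow to a remote database or an order form. The URIs and vocabulary are invented for illustration.

```python
# Purely hypothetical sketch: a paper exposes its referenced research
# materials (a cell line, a plasmid) as RDF, so software agents can
# discover them and link out to suppliers or biobanks automatically.
from rdflib import Graph

paper = Graph()
paper.parse(data="""
@prefix ex: <http://example.org/materials#> .
<http://example.org/paper/42>
    ex:usesCellLine <http://example.org/cellline/HeLa> ;
    ex:usesPlasmid  <http://example.org/plasmid/pUC19> .
""", format="turtle")

# Each material link could resolve to a database record or an order form.
for paper_uri, relation, material in paper:
    print(relation.split("#")[-1], "->", material)
```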

The interview is part of a series stretching back to 2001, which includes talks with the great Peter Murray-Rust, Peter Suber, BioMed Central founder Vitek Tracz and many others leading the charge for open access. Highly recommended.

NetSquared hosts data mashup challenge

February 25th, 2008 by Kaitlin Thaney

If you’re interested in open data or open notebook science, listen up. NetSquared is putting its money where its mouth is and offering a $100,000 cash prize for the best data mashups for social change.

From their announcement:

NetSquared, a project of TechSoup, has created the Challenge because they believe you have great ideas for how data can create insight, and they want to create a platform to facilitate those kinds of mashups being built. Plus, they’ve got cash prizes to award to the folks who come up with the most innovative mashups for social change. You can find out more here.

Interested? Just follow these three steps.

First, apply. Applications will be accepted until March 14.

Second, NetSquared will help connect you to the tech support you need to get your mashup project off the ground.

Third, start designing.

The NetSquared community will begin voting the week of March 17th, looking for the most innovative mashups in line with the guidelines. The top 20 projects will be announced on March 24, and their teams will be offered a chance to attend the NetSquared Conference in San Jose, Calif. this May. Those 20 projects will share the $100,000 prize, with shares determined by voting at the event.

For ideas, take a look at some of their favorite mashups in this area: the Maplight.org, ChicagoCrimes.org, ActiveTrails and Tunisian Prison Map projects. To find out how you can apply, visit their Web site.
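If you’re wondering what a data mashup looks like under the hood, here is a bare-bones sketch that joins two public CSV datasets on a shared field, the kind of combination the projects above build richer interfaces around. The URLs and column names are hypothetical placeholders; substitute real open-data feeds.

```python
# Bare-bones data mashup: join two public datasets on a shared field.
# The URLs and column names are hypothetical placeholders.
import csv
import io
import urllib.request

def fetch_csv(url):
    """Download a CSV file and return its rows as dictionaries."""
    with urllib.request.urlopen(url) as resp:
        return list(csv.DictReader(io.StringIO(resp.read().decode("utf-8"))))

crimes = fetch_csv("http://example.org/data/crime-reports.csv")   # columns: zip, offense
incomes = fetch_csv("http://example.org/data/median-income.csv")  # columns: zip, income

income_by_zip = {row["zip"]: row["income"] for row in incomes}

# The "mashup": each crime report annotated with its area's median income.
for crime in crimes:
    print(crime["offense"], crime["zip"], income_by_zip.get(crime["zip"], "n/a"))
```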

A commons-sense approach to winning the drug discovery lottery

February 23rd, 2008 by dwentworth

In a new piece [free reg. req.] this week from GenomeWeb Daily News, Aled Edwards — director and CEO of the Structural Genomics Consortium — describes the drug discovery process as a “lottery,” and argues that increasing the chances for discovery will require that people in “academia, industry, and funding bodies collaborate and keep new structural data accessible to all researchers who might be interested in using it.”

The sentiment echoes that of Science Commons’ own John Wilbanks, who earlier this year wrote a post on the Nature Network comparing drug discovery to a game of roulette. It’s a game, says Wilbanks, that people win by “betting on every square, then patenting the one that wins and extracting high rents from it.” The biggest problem in this scenario, he argues, isn’t the existence of patents but the sheer complexity of the human body, and how much we still have to learn about it:

Human bodies make microprocessors look like children’s toys in terms of complexity. …Complexity is the problem both in terms of our understanding of bodies and drugs and in terms of reworking the models around discovery. This system regularly and utterly defeats the best efforts of many entrepreneurs and policy reformers to change things for the better.

So what’s the solution? According to Wilbanks, it’s a “commons approach,” which entails precisely the kind of collaboration that Edwards advocates:

It requires open access to content, journals and databases both. It requires that database creators think about their products as existing in a network, and provide hooks for the network, not just query access. It requires that funders pay for biobanks to store research tools. It requires that pharmaceutical companies take a hard look at their private assets and build some trust in entities that make sharing possible. It requires that scientists share their stuff (this is the elephant in the lab, frankly). It requires that universities track sharing as a metric of scientific and societal impact.

It is not easy. But it is, in a way, a very simple change. It just requires the flipping of a switch, from a default rule of “sharing doesn’t matter” to one of “sharing matters enormously.” It is as easy, and as hard, as the NIH mandate on open access. It’s a matter of willpower.

Edwards points out that governments and academic institutions spend “hundreds of billions of dollars” each year on activities related to drug development, and biotech and pharmaceutical companies “spend another $50 billion.” Yet the pace of discovery remains static — and according to Edwards, may even be slowing down.

Clearly, the current approach isn’t working. We at Science Commons are encouraged that more people are coming to understand that it’s time for a new approach to tilt the odds in our favor — so that we can save not only time and money, but also human lives.

As Harvard goes…

February 15th, 2008 by dwentworth

so goes the University of Oregon — and, we hope, many other institutions of higher learning in the US and internationally.

As the wonderful Peter Suber and Gavin Baker have been reporting extensively at Open Access News, Harvard’s decision to adopt an open access policy is causing a tremendous stir in the media and blogosphere. We at Science Commons are especially excited to learn that one day after the Harvard vote, the University of Oregon adopted a resolution in support of open access — including a recommendation that faculty members attach a rights-retention addendum, such as the Science Commons addenda, when signing copyright transfer agreements for their work.

We agree with BioMed Central president Matt Cockerill: the failure of traditional scientific publishing to make full use of the Internet’s potential is an issue that’s no longer of interest only to “librarians or activists.” If you work at a university and would like to help speed the pace of discovery by promoting open access to scientific research, we encourage you to take a look at our Scholar’s Copyright Addendum Engine (SCAE) and talk to your administrators about making it available to researchers on your university site.