LIBER Quarterly, a peer-reviewed journal managed by LIBER (the Association of European Research Libraries), has just published a special issue on research data and new forms of scholarly communication.
[...] researchers have realized that the current scholarly communication model, based exclusively on articles, is inherently limited and inefficient, even when all articles are in digital form and accessible through the Web. Communication is effective if and only if the recipient of the information, who is often not known beforehand, can comprehend, scrutinize, challenge and reproduce the findings presented.
The project re3data.org has received another grant: The German Research Foundation (DFG) has extended the funding of re3data.org – a registry of research data repositories – for another two years. Congrats!
By the end of 2015, re3data.org aims to implement new functionalities and to integrate more research data repositories. These repositories will be indexed to offer researchers, funding organizations and libraries all over the world an easy-to-use overview of the heterogeneous research data repository landscape.
Mendeley, a desktop and web program for managing and sharing research papers, recently announced a collaboration with labfolder, a Berlin-based startup. labfolder is a digital lab notebook that helps scientists keep their notes and data organized. Linking these two tools allows scientific literature to be cited and embedded in experimental raw data, and experiment descriptions to be exported and shared in Mendeley.
For those interested in labfolder, I have embedded the product video below. (Sorry for the advertising. I only mention the collaboration because it shows that data availability and the interlinking of data and publications are becoming increasingly important.)
The current e-infrastructure for research data management in the German social sciences has been extended by an important component. Up to now, the e-infrastructure for documenting, storing, hosting and curating research data in the social sciences has been fragmented: on the one hand, there are well-established research data centres, e.g. for large household survey data; on the other hand, appropriate research data infrastructure components for small and medium-sized research projects have, with a few exceptions, hardly been available. Read the rest of this entry »
In one of my previous blog posts I introduced the PKP/IQSS OJS-Dataverse integration project. After a remarkably short development period, the project is now happy to announce that the first version has been released. Congrats!
The plugin has been developed by PKP (Public Knowledge Project) in collaboration with Harvard’s Institute for Quantitative Social Science (IQSS). Funded by a $1 million Alfred P. Sloan Foundation grant, the OJS-DVN project developed the plugin for journals that use Open Journal Systems (OJS), a journal management and publishing system.
The following blog post is an interesting point of view in the discussion on open science. It originally appeared on Digging Digitally and is reposted under a CC-BY license.
Feel free to comment!
The Open Movement has made impressive strides in the past year, but do these strides stand for reform or are they just symptomatic of the further expansion and entrenchment of neoliberalism? Eric Kansa argues that it is time for the movement to broaden its long-term strategy to tackle the needs for wider reform in the financing and organization of research and education and oppose the all-pervasive trend of universities primarily serving the needs of commerce. Read the rest of this entry »
Knowledge Exchange (KE) – a cooperation between five national funding organisations (DFG, SURF, DEFF, CSC and JISC) – was founded in 2005 to improve the digital infrastructure for information and communication technology as it relates to the research and university library sectors. Since 2005, KE has been very active in multiple areas. These areas are clearly intended to encourage open access to the tools of science and scholarship for the higher education and research communities. They also contribute toward building an integrated e-infrastructure and exploring new developments in the future of publishing. A specific focus is on the development of storage, accessibility and quality assurance of digitally published research data. Another area of activity is directed at exploring effective investment in research tools (such as interoperability standards, research data, research tools and sustainable business models for Open Access). Read the rest of this entry »
Posted: December 20th, 2013 | Author: Sven | Filed under: EDaWaX | Comments Off
The year draws to a close – a very good reason to sum up some of our activities in 2013:
First of all, the EDaWaX project team wants to thank our project partners, our cooperation partners and funders, but also all our readers, for a very successful year.
Our first funding phase has come to an end and we are really happy about all the things we could achieve in 2013:
Currently, Europe’s eighth Framework Programme is taking shape: in December 2013 the European Council adopted the Horizon 2020 programme for research and innovation for the years 2014 to 2020.
Horizon 2020, which has a budget of around 77 billion euros, will underpin the objectives of the Europe 2020 strategy for growth and jobs, as well as the goal of strengthening the scientific and technological bases by contributing to achieving a European Research Area in which researchers, scientific knowledge and technology circulate freely. Read the rest of this entry »
A week ago our project held its final evaluation workshop. We presented the main results of some of our work packages and also introduced a beta version of our pilot application for the management of publication-related research data in journals.
In preparation for the workshop we invited more than 30 editors of scholarly journals, and almost a dozen scientists from 15 journals accepted our invitation. Read the rest of this entry »
Our project has just published the results of our work package 3, in which we analyzed the role of research data centres with regard to the management of publication-related research data. This working paper presents the results of a survey among these scientific infrastructure service providers.
Through desk research and an online survey, we found that almost three quarters of all responding research data centres, archives and libraries generally store externally generated research data – which also applies to publication-related data.
Almost 75% of all respondents also store and host the code of computation (the syntax of statistical analyses). If self-compiled software components have been used to generate research outputs, only 40% of all respondents accept these software components for storing and hosting.
Eight in ten institutions also stated that they are taking specific actions for the digital long-term preservation of their data. With regard to the documentation of stored and hosted research data, almost 70% of all respondents said they use the metadata schema of the Data Documentation Initiative (DDI); Dublin Core was used by 30 percent (multiple answers were permitted). Almost two thirds also used persistent identifiers to facilitate the citation of these datasets, and three in four respondents stated that they support researchers in creating metadata for their data. Application programming interfaces (APIs) for uploading or searching datasets have not yet been implemented by any of the respondents, and the use of semantic technologies such as RDF is not widespread.
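To make the documentation and citation practices mentioned above a bit more concrete, here is a minimal sketch of a Dublin Core-style metadata record with a DOI as persistent identifier. The field selection, function names and the example DOI are purely illustrative and not taken from any of the surveyed data centres.

```python
# Sketch of a dataset description using a handful of Dublin Core elements.
# All values below are invented for illustration.

def make_dc_record(title, creator, date, identifier, description=""):
    """Return a minimal Dublin Core-style metadata record for a dataset."""
    return {
        "dc:title": title,
        "dc:creator": creator,
        "dc:date": date,
        "dc:identifier": identifier,   # persistent identifier, e.g. a DOI
        "dc:description": description,
        "dc:type": "Dataset",
    }

def citation(record):
    """Build a simple data citation string from the record."""
    return "{creator} ({date}): {title}. {identifier}".format(
        creator=record["dc:creator"],
        date=record["dc:date"],
        title=record["dc:title"],
        identifier=record["dc:identifier"],
    )

record = make_dc_record(
    title="Household Survey Replication Data",
    creator="Doe, Jane",
    date="2013",
    identifier="doi:10.1234/example.5678",  # hypothetical DOI
)
print(citation(record))
```

A persistent identifier in the record is what turns such a description into a citable reference, which is exactly why two thirds of the surveyed institutions rely on them.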
The European Commission (EC) held a public consultation on open research data. For this purpose, the Commission invited stakeholders from various sectors: researchers, industry, funders, libraries, publishers, infrastructure developers and others joined the meeting on 2 July in Brussels.
The Commission posed five questions to structure the debate. These included basic questions such as “How can research data be defined?”. But the lion’s share of the questions dealt with the “openness” of data: What types of data should be openly available? When and how does openness need to be limited?
In addition, other important questions from the perspective of infrastructure service providers were raised: How should research data be stored and made accessible? How should the issue of data re-use be addressed? And finally, a question I personally consider very important: How can we enhance data awareness and a culture of data sharing?
As you might have noticed, currently I don’t have much time to publish new articles on the blog. The reason is that our project is currently publishing a lot of the results we achieved in the course of the last two years.
Experts say research data management should be an integral part of university curricula
A panel of experts recommends the integration of research data management into the university curricula of young researchers. The ZBW – Leibniz Information Centre for Economics and the German Data Forum initiated a debate on the topic at the annual meeting of the Verein für Socialpolitik, the most prestigious professional association of German-speaking economists, held in Düsseldorf from 4 to 7 September 2013. Read the rest of this entry »
This post describes the reasons for this decision and tries to give some insights into CKAN, its features and technology. We also discuss these features both with regard to our specific use case and with regard to their suitability for research data management in general.
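As a small taste of what working with CKAN looks like, the sketch below builds a request URL for CKAN’s Action API and unwraps the JSON envelope such an endpoint returns. The instance URL is a placeholder and the response is a canned example rather than a live call, so no running CKAN installation is assumed.

```python
import json

# CKAN exposes its functionality through the Action API:
#   <ckan-instance>/api/3/action/<action-name>
# Every response is a JSON envelope with "success" and "result" keys.

CKAN_URL = "http://demo.ckan.org"  # placeholder instance URL

def action_url(action):
    """Build the URL for a CKAN Action API call."""
    return "{}/api/3/action/{}".format(CKAN_URL, action)

def parse_response(payload):
    """Unwrap a CKAN API response, raising on failure."""
    data = json.loads(payload)
    if not data.get("success"):
        raise RuntimeError(data.get("error", "unknown CKAN error"))
    return data["result"]

# A canned response of the kind the package_list action returns:
sample = '{"success": true, "result": ["dataset-a", "dataset-b"]}'

print(action_url("package_list"))
print(parse_response(sample))
```

The uniform envelope makes client code pleasantly boring: every action, from listing datasets to uploading resources, can be funneled through the same small pair of helpers.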