Their sample consists of articles published in 2005 in 13 economics journals (including the top five journals). In addition to standard mean comparisons, Wohlrabe and Birkmaier also use a negative-binomial regression model with several covariates to control for potential selection effects and quality bias. For their analysis they used citation data from three different databases: the Web of Science, RePEc and Google Scholar.
The results they obtained are very interesting and might fuel the debate on open access publishing in academia.
We are very happy to announce that our research funding organisation, the German Research Foundation (DFG), has granted another two years of funding for our project.
In their final report, based on the good results of the project’s first funding phase, the reviewers concluded that EDaWaX’s plans for expanding the pilot application and for undertaking a detailed analysis of journals in business studies should be supported with “high” and “highest” priority, respectively.
There are many good reasons why we should replicate scientific findings. In his article “Open Access Economics Journals and the Market for Reproducible Economic Research”, the economist B.D. McCullough (2009) lists some of the reasons why replicable research is crucial for science:
“[…] replication ensures that the method used to produce the results is known. Whether the results are correct or not is another matter, but unless everyone knows how the results were produced, their correctness cannot be assessed. Replicable research is subject to the scientific principle of verification; non-replicable research cannot be verified. Second, and more importantly, replicable research speeds scientific progress. We are all familiar with Newton’s quote, ‘If I have seen a little further it is by standing on the shoulders of giants.’ […] Third, researchers will have an incentive to avoid sloppiness. […] Fourth, the incidence of fraud will decrease.” (p. 118)
In a recommendation published a few days ago, the general assembly of the German Rectors’ Conference (Hochschulrektorenkonferenz, HRK – a voluntary association of currently 268 state and state-recognised universities and other higher education institutions (HEI) in Germany, at which more than 94 per cent of all students in Germany are enrolled) advised university directorates to take the necessary steps to support research data management, crosslinking and long-term preservation of and access to research data. These important tasks require suitable infrastructure components – and the HRK suggests that university directorates take responsibility for providing these as well.
LIBER Quarterly, a peer-reviewed journal managed by LIBER (the Association of European Research Libraries), has just published a special issue on research data and new forms of scholarly communication.
[...] researchers have realized that the current scholarly communication model, based exclusively on articles, is inherently limited and inefficient, even when all articles are in digital form and accessible through the Web. Communication is effective if and only if the recipient of the information, who is often not known beforehand, can comprehend, scrutinize, challenge and reproduce the findings presented.
The project re3data.org has received another grant: The German Research Foundation (DFG) has extended the funding of re3data.org – a registry of research data repositories – for another two years. Congrats!
By the end of 2015, re3data.org aims to implement new functionalities and to integrate more research data repositories. These repositories will be indexed to offer researchers, funding organizations and libraries all over the world an easy-to-use overview of the heterogeneous research data repository landscape.
Mendeley, a desktop and web program for managing and sharing research papers, recently announced a collaboration with labfolder – a Berlin-based startup. labfolder is a digital lab notebook that helps scientists keep their notes and data organized. Linking these two tools allows scientific literature to be cited and embedded in experimental raw data, and experiment descriptions to be exported and shared in Mendeley.
For those interested in labfolder, I have embedded the product video below. (Sorry for the advertising – I only mention the collaboration because it shows that data availability and the interlinking of data and publications are becoming increasingly important.)
The current e-infrastructure for research data management in the social sciences in Germany has been extended by an important component. Up to now, we have faced a fragmented e-infrastructure for documenting, storing, hosting and curating research data in the social sciences: on the one hand, there are well-established research data centres, e.g. for large household survey data. On the other hand, appropriate research data infrastructure components for small and medium-sized research projects, for instance, were – with a few exceptions – hardly available.
In one of my previous blog posts I introduced the PKP/IQSS OJS-Dataverse integration project. After a remarkably short development period, the project is now happy to announce that the first version has been released! Congrats!
The plugin has been developed by PKP (Public Knowledge Project) in collaboration with Harvard’s Institute for Quantitative Social Science (IQSS). Funded by a $1 million Alfred P. Sloan Foundation grant, the OJS-DVN project developed the plugin for journals that use Open Journal Systems (OJS), a journal management and publishing system.
The following blog post is an interesting point of view in the discussion on open science. It originally appeared on Digging Digitally and is reposted under a CC-BY license.
Feel free to comment!
The Open Movement has made impressive strides in the past year, but do these strides stand for reform or are they just symptomatic of the further expansion and entrenchment of neoliberalism? Eric Kansa argues that it is time for the movement to broaden its long-term strategy to tackle the needs for wider reform in the financing and organization of research and education and oppose the all-pervasive trend of universities primarily serving the needs of commerce.
Knowledge Exchange (KE) – a cooperation between five national funding organisations (DFG, SURF, DEFF, CSC and JISC) – was founded in 2005 to improve the digital infrastructure for information and communication technology as it relates to the research and university library sectors. Since 2005, KE has been very active in multiple areas. These areas are clearly intended to encourage open access to the tools of science and scholarship for the higher education and research communities. They also contribute toward building an integrated e-infrastructure and exploring new developments in the future of publishing. A specific focus is on the development of storage, accessibility and quality assurance of digitally published research data. Another area of activity is directed at exploring effective investment in research tools (such as interoperability standards, research data, research tools and sustainable business models for Open Access).
Posted: December 20th, 2013 | Author: Sven | Filed under: EDaWaX | Comments Off
The year draws to a close – a very good reason to sum up some of our activities in 2013:
First of all, the EDaWaX project team wants to thank our project partners, our cooperation partners and funders, but also all our readers, for a very successful year.
Our first funding phase has come to an end and we are really happy about all the things we could achieve in 2013:
Currently, Europe’s eighth Framework Programme is taking shape: in December 2013, the European Council adopted the Horizon 2020 programme for research and innovation for the years 2014 to 2020.
Horizon 2020, which has a budget of around 77 billion euros, will underpin the objectives of the Europe 2020 strategy for growth and jobs, as well as the goal of strengthening the scientific and technological bases by contributing to achieving a European Research Area in which researchers, scientific knowledge and technology circulate freely.
A week ago our project held its final evaluation workshop. We presented the main results of some of our work packages and also introduced a beta version of our pilot application for the management of publication-related research data in journals.
In preparation for the workshop we invited more than 30 editors of scholarly journals, and almost a dozen scientists from 15 journals accepted our invitation.
Our project has just published the results of our work package 3, in which we analyzed the role of research data centres with regard to the management of publication-related research data. This working paper presents the results of a survey among these scientific infrastructure service providers.
Through desk research and an online survey, we found that almost three quarters of all responding research data centres, archives and libraries generally store externally generated research data – which also applies to publication-related data.
Almost 75% of all respondents also store and host the code of computation (the syntax of statistical analyses). However, if self-compiled software components have been used to generate research outputs, only 40% of all respondents accept these components for storage and hosting.
Eight in ten institutions also stated that they take specific actions for the digital long-term preservation of their data. With regard to the documentation of stored and hosted research data, almost 70% of all respondents claimed to use the metadata schema of the Data Documentation Initiative (DDI); Dublin Core was used by 30 per cent (multiple answers were permitted). Almost two thirds also used persistent identifiers to facilitate the citation of these datasets. Three in four respondents also stated that they support researchers in creating metadata for their data. Application programming interfaces (APIs) for uploading or searching datasets have not yet been implemented by any of the respondents. The use of semantic technologies such as RDF is also not yet widespread.
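To make the documentation practices mentioned above a bit more concrete, here is a minimal, hypothetical sketch of a Dublin Core-style metadata record for a dataset, including a DOI as persistent identifier. All field values and the helper function name are invented for illustration; real records would follow the full Dublin Core element set (and DDI for richer social science documentation).

```python
# Minimal, hypothetical Dublin Core-style metadata record for a dataset.
# All values below are invented for illustration only.

def make_dc_record(title, creator, year, doi):
    """Assemble a simple Dublin Core-style record as a dictionary."""
    return {
        "dc:title": title,
        "dc:creator": creator,
        "dc:date": str(year),
        "dc:type": "Dataset",
        # A persistent identifier (here a DOI) makes the dataset citable.
        "dc:identifier": f"https://doi.org/{doi}",
    }

record = make_dc_record(
    title="Replication data for: An Example Study",
    creator="Doe, Jane",
    year=2013,
    doi="10.1234/example.5678",  # hypothetical DOI
)
print(record["dc:identifier"])
```

Such a record could then be exposed via a repository’s search interface or, once available, an upload/search API of the kind the survey asked about.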