SDPRA 2021

The First Workshop & Shared Task on Scope Detection of the Peer Review Articles

Collocated with PAKDD 2021


Workshop

  • Workshop Overview

    For years, peer review has been the formal part of scientific communication that validates the quality of a research article. A research article goes through discrete filtering steps before it is published in a reputed journal or conference. The first step in the peer review process is the editor's initial screening: the editor, an expert in the field, decides whether an article should be rejected without further review or forwarded to expert reviewers for meticulous evaluation. Acceptance of a paper therefore depends heavily on the reviewers. It is becoming increasingly common for authors to share their reviews on social media, especially when their work is rejected on spurious grounds.

    Some common reasons for rejection include the paper's language and writing/formatting style, results that do not beat the state of the art (SOTA), not using a particular method (such as GPT-3 or XLNet), the method being too simple (seriously? Isn't that a good thing?), the topic being too narrow, outdated, or out of scope, or a completely new topic (source: #AcadTwitter).


    To demystify and improve this obscure process, we propose the 1st Workshop & Shared Task on Scope Detection of the Peer Review Articles (SDPRA 2021) to address these gaps. We seek to reach the broader NLP and AI/ML communities and pool their distributed efforts to improve peer review of scholarly documents and to build downstream applications. SDPRA 2021 will comprise a research track and a shared task.



    Call for Papers

    Topics of Interest: Papers are invited on substantial, original and unpublished research on all aspects of Data Mining, with a particular focus on Scholarly Articles. The areas of interest include, but are not limited to:

    • Analysis of scholarly documents
    • Search & Retrieval
    • Discourse modeling and argument mining
    • Bibliometrics, scientometrics, and altmetrics approaches and applications
    • Scholarly Document Summarization
    • Citation Network
    • Novelty Detection
    • Scope Detection
    • Peer Review
    • Fairness-aware data mining
    • Negative Results

    Submission Instructions:
    • Each submitted paper should include an abstract of up to 200 words and be no longer than 20 single-spaced pages in 10pt font (including references, appendices, etc.). Authors are strongly encouraged to follow the Springer LNCS/LNAI manuscript submission guidelines.
    • All papers must be submitted electronically through the paper submission system in PDF format only. If required, supplementary material may be submitted as a separate PDF file, but reviewers are not obligated to consider it; your manuscript should therefore stand on its own merits without any supplementary material. Supplementary material will not be published in the proceedings.


    Springer will publish the proceedings of the conference as a volume of the LNAI series, and selected excellent papers will be invited for publication in special issues of high-quality journals, including Knowledge and Information Systems (KAIS) and the International Journal of Data Science and Analytics.

    Submitting a paper to the workshop means that the authors agree that, if the paper is accepted, at least one author will attend the workshop to present it. The affiliations of no-show authors will be notified.



  • Important Dates for Workshop:
    • November 15, 2020: Call for Regular Papers
    • January 21, 2021: Submission Deadline
    • February 22, 2021: Author Notification
    • March 8, 2021: Camera-ready submission
    • May 11, 2021: Workshop
  • Important Note: In response to the ongoing pandemic, the PAKDD 2021 conference will be held online. All timings are in Indian Standard Time (IST) (UTC+05:30).

Shared Task

  • Shared Task Overview

    Description: This shared task focuses on identifying the topic or category of scientific articles, which in turn can help in efficiently storing large numbers of articles, retrieving related papers, and building personal recommendation systems. For this task we collected a total of 35,000 abstracts of computer science articles from different corpora. Given the abstract of a paper, the objective of the shared task is to classify it into one of 7 predefined domains.

    Category                                     Train   Validation   Test
    Computation and Language (CL)                 2740     1866       1194
    Cryptography and Security (CR)                2660     1835       1105
    Distributed and Cluster Computing (DC)        2042     1355        803
    Data Structures and Algorithms (DS)           2737     1774       1089
    Logic in Computer Science (LO)                1811     1217        772
    Networking and Internet Architecture (NI)     2764     1826       1210
    Software Engineering (SE)                     2046     1327        827
    TOTAL                                        16800    11200       7000
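
    To illustrate the task setup, below is a minimal baseline sketch in Python using scikit-learn: TF-IDF features over abstracts fed into a logistic-regression classifier over the seven category labels above. The abstracts and labels shown are placeholders for illustration only, not the official data release or any participant's system.

        # Minimal baseline sketch for the shared task: classify an abstract into
        # one of the 7 categories (CL, CR, DC, DS, LO, NI, SE).
        # The example abstracts and labels below are hypothetical.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        # Hypothetical training data: abstract strings and their category labels.
        train_texts = [
            "We present a neural model for machine translation ...",
            "We propose a new key-exchange protocol with formal security proofs ...",
        ]
        train_labels = ["CL", "CR"]

        # TF-IDF over word unigrams/bigrams, followed by logistic regression.
        model = make_pipeline(
            TfidfVectorizer(ngram_range=(1, 2), sublinear_tf=True),
            LogisticRegression(max_iter=1000),
        )
        model.fit(train_texts, train_labels)

        # Predict the category of an unseen abstract.
        print(model.predict(["A distributed scheduler for cluster workloads ..."]))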


    Evaluation: The official evaluation metric for the shared task is the weighted-average F1 score.
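
    For reference, the weighted-average F1 computes an F1 score per class and averages the scores weighted by each class's support (its number of true instances). A minimal sketch using scikit-learn, with hypothetical gold labels and predictions:

        # Weighted-average F1: per-class F1 averaged with weights proportional to
        # the number of true instances of each class.
        from sklearn.metrics import f1_score

        # Hypothetical gold labels and system predictions for a few test abstracts.
        y_true = ["CL", "CR", "DC", "DS", "LO", "NI", "SE", "CL"]
        y_pred = ["CL", "CR", "DC", "DS", "LO", "NI", "SE", "NI"]

        print(f1_score(y_true, y_pred, average="weighted"))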


    System description paper: All participating teams will be invited to submit their systems as short papers to be included in the proceedings. Based on the reviewers' comments, we will decide which papers to accept.

    Code Reproducibility: To improve code reproducibility and transparency in the scientific community, all shared task participants should submit their systems to our GitHub repository.


    Submission details: TBA

Contact Us

Email
  sdpra2021@gmail.com
