To members of the ICWSM community,
We are very excited to announce the final program for ICWSM 2013. This has been a banner year for ICWSM, with significantly increased submissions to, and interest in, the conference. We are delighted to see this level of interest and hope that the relevance and stature of the scholarship presented at ICWSM continue to grow.
Our primary goal for ICWSM this year was to create a program with high-quality, interesting papers that represent excellent computational and social science approaches to understanding and explicating social media practices, outcomes, and implications. With regard to the reviewing process, we wanted to ensure that approximately 20 percent of the submitted papers were accepted, in keeping with past years’ acceptance rates, and that all papers received a fair review. In the end, there was some variation in the depth of the reviews, but we believe that the process was fair. In the spirit of openness, we wanted to articulate our processes to the community at large.
First, here are some of the numbers at a glance:
We received many more papers this year compared to last year, requiring us to find additional reviewers in a very short time frame. To give you a sense of the numbers, last year there were 232 full papers submitted, with 47 accepted. This year, we had 349 full papers submitted, with 72 accepted (21% acceptance rate). This is an increase of 50% in one year.
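The figures above can be verified with a quick calculation (a minimal sketch using only the numbers quoted in this letter; the variable names are ours, for illustration):

```python
# Submission statistics quoted in the letter (2012 vs. 2013).
submitted_2012, accepted_2012 = 232, 47
submitted_2013, accepted_2013 = 349, 72

# 72 / 349 ~= 0.206, which rounds to the 21% acceptance rate quoted above.
acceptance_rate_2013 = accepted_2013 / submitted_2013

# (349 - 232) / 232 ~= 0.504, the roughly 50% one-year growth in submissions.
growth = (submitted_2013 - submitted_2012) / submitted_2012

print(f"2013 acceptance rate: {acceptance_rate_2013:.1%}")
print(f"Year-over-year growth in submissions: {growth:.1%}")
```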
The size of our program and senior program committees was such that we could have handled last year’s load reasonably, with a little extra flexibility. Handling 50% more submissions was not something we had planned for. When it became clear that our reviewing pool could not realistically handle these numbers, we asked for recommendations from our existing SPC and PC members and vetted these new suggestions as best we could.
Second, how we solicited program and senior program committee members:
We drew from several sources in identifying potential program committee members: first, the lists from previous years; second, colleagues whom we knew to be working in this space; and third, recommendations from the committee itself (in particular the senior program committee). We cross-checked every candidate before inviting them, to ensure that each had published in this space and was knowledgeable. In the case of graduate students (we accepted only those near completion of their PhDs), they had to show promise, previous publications, and a personal recommendation from someone we trusted.
Third, how papers were reviewed:
Each paper was assigned initially to three regular reviewers (PC members) and a meta-reviewer (SPC member). As you might expect with more than 120 PC members and a very tight turnaround time, the reviews were not of uniform quality or thoroughness; not all were submitted on time, and some didn’t arrive at all. (In cases where reviewers were absent or provided reviews that were too cursory, we noted this and will be sharing these names with next year’s program chairs.)
When reviews were short or missing, or there was considerable disagreement or limited expertise among reviewers, we and the SPCs nagged reviewers and recruited replacements. In a few cases, SPCs created full reviews themselves. In cases where the SPC thought the outcome was clear and there was sufficient feedback to authors with two reviews, we asked them to submit meta-reviews based on those two reviews.
SPC members were asked to make a recommendation of acceptance or rejection in their meta-reviews. In their private notes to the committee, they sometimes gave more nuanced assessments, such as "borderline accept" or "accept if there's room."
We recognized that SPCs were not all perfectly calibrated with each other on acceptance criteria, and each saw only a small sample of papers. Thus, we did not automatically take the SPC’s recommendation. In addition, in a few cases, the SPCs did not come through with a meta-review. Thus, we chairs went through all the submissions.
Two of us looked independently at each paper's reviews (and, when needed, the paper itself) and labeled the paper as reject, accept, or discuss. In cases of missing meta-reviews, we sometimes added meta-reviews ourselves. In a few cases, we thought the reviews were consistent and clear enough that no meta-review was necessary to explain a final decision. In some cases we added notes to existing meta-reviews, always clearly labeled as a note from the PC chairs.
If both PC chair evaluators gave the same label of accept or reject, that became a final decision. If there was disagreement between the two PC chairs or either one marked the paper as "discuss," then those two chairs, and sometimes all four of us, discussed the paper until we came to a consensus about its decision.
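The two-chair triage rule described above can be summarized in a short sketch (the labels and function name are ours, for illustration; this is not the chairs' actual tooling):

```python
# Sketch of the two-chair decision rule: matching accept/reject labels are
# final; disagreement, or a "discuss" flag from either chair, goes to
# discussion among the chairs.

def final_decision(chair_a: str, chair_b: str) -> str:
    """Each chair independently labels a paper 'accept', 'reject', or 'discuss'."""
    if chair_a == chair_b and chair_a in ("accept", "reject"):
        return chair_a
    return "discuss"

print(final_decision("accept", "accept"))   # agreement -> accept
print(final_decision("accept", "reject"))   # disagreement -> discuss
print(final_decision("discuss", "reject"))  # either flags discuss -> discuss
```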
Program chairs used EasyChair to note any conflicts of interest (such as colleagues at the same institution or papers on which they were co-authors), and everything about those papers was made invisible to the conflicted program chairs by the system.
Fourth, why there was no “accept as poster/short paper” category:
In past years, in addition to the approximately 20% of papers that were accepted, authors of the next tier of submissions were asked to condense their 8-page papers into 4-page papers. We considered this unwise: there was neither the time nor the human resources to re-review the newly shortened papers, and authors complained about having to do this. Managing this process was simply not practical. Furthermore, many of these papers did not consist of four great pages plus four to six pages that could be cut; rather, they were full papers that just needed a bit more work. This year, papers that needed more work before they were ready for acceptance were rejected.
Fifth, how we decided on oral vs. spotlight presentation:
ICWSM has always been a single-track conference. Because of this, it was not possible to have all 72 accepted full papers presented in a conventional 20-minute oral presentation format. We have scheduled 26 conventional oral presentations and 46 papers in a spotlight format. The spotlight papers will have two-minute lightning presentations in the main lecture hall and a 45-minute poster presentation from 3:15-4:00 each day. Only 15 or 16 posters will be presented at each spotlight poster session, so spotlight poster presenters can expect good audiences.
We explicitly did not select oral vs. spotlight format based on reviewer assessments of paper quality. Instead, it was based on our judgment of what portion of the audience was likely to be interested in the results, and which format would best let the audience absorb them. When we thought a subset of the audience would be especially interested in the details of complex models, we opted for a poster format, which allows for interactive presentation and questioning. When we thought there was a general-interest storyline best absorbed through a narrative presentation, we opted for an oral format. The final assignments to sessions also took into account diversity of topics and how relevant a paper’s contribution might be to the breadth of the ICWSM community.
There were many in-between cases where we had to make tough calls that could have gone either way. Having both formats allows us to have a conference that caters both to the general interest in weblogs and social media as well as the niche audiences for specific techniques and advances.
Finally, how you can help strengthen ICWSM for next year and beyond:
This conference is growing. This is a good thing that we want to continue, but the community needs to decide how the conference should evolve. For instance, should ICWSM remain a single track conference? We look forward to discussing these issues at the Town Hall at the conference. Please bring your feedback, suggestions, and energy as we talk about next year’s conference and how we can allow the conference to grow while maintaining its reputation for high-quality, interesting, relevant scholarship.
Bernie, Ian, Nicole & Paul