Recent Comments
I cannot agree more with your last paragraph, although I think the question of publication quality is still quite tricky. Especially in the recent AI and ML domain, a good piece of "alchemy" may get you a deep learning paper into NeurIPS (Rank A*), while some seemingly trivial but "provable" statistical work can only get you into AISTATS (Rank B; now A in CORE, but B or C in most other lists). In my opinion, some AISTATS papers contribute more to the community in a tangible way than certain NeurIPS papers that offer only empty theatrics. Yet on the job market, a single NeurIPS paper can, most of the time, beat a competitor with two or three AISTATS papers.
Thanks for your comment, Jiang. You are right: sometimes the journal impact factor or conference tier does not reflect the true "quality" of a publication. However, for a reader, such as a researcher looking for new ideas, or an employer screening competitive applicants, the easiest way to judge the quality of a publication is by its impact factor or tier. Like "publish or perish", it is not a perfect evaluation system; as you said, research from lower-tier venues can contribute more to the community in the long run than research from higher-tier ones. On the other hand, I think the impact factor or tier reflects the "average" level of all the publications in that journal or conference, so while some NeurIPS papers may be less "useful" than AISTATS ones, on average NeurIPS papers should be of higher quality, given the lower acceptance rate.