In 2019, in the United States alone, the number of published papers related to AI and machine learning was close to 25,000, compared to about 10,000 in 2015. NeurIPS 2019, one of the world's largest conferences on machine learning and computational neuroscience, drew thousands of attendees and accepted nearly 2,000 papers.
There is no doubt that this momentum reflects increased advocacy and funding [and corresponding competition] in the AI research community. But some scholars believe that the relentless pursuit of progress may be doing more harm than good.
Zachary Lipton, an assistant professor at Carnegie Mellon University jointly appointed by the Tepper School of Business and the Machine Learning Department, recently tweeted a proposal that the entire community pause publishing for one year. He said it might encourage "thinking" instead of "sprinting" to send out spam.
"The avalanche of papers is actually hurting those who don't [already have high citation counts and good academic standing]," he said. "The level of noise in the field has risen to the point where serious people no longer think it makes sense to take a paper seriously ... [because] the noise level is so high, even among accepted papers."
Timnit Gebru, technical co-lead of Google's Ethical Artificial Intelligence team, tweeted similar concerns ahead of the AAAI artificial intelligence conference in New York City earlier this month. "I'm currently involved in too many conference- and service-related things; I can't even keep up with everything," she said. "In addition to reviewing and area chairing, there are logistics ... organizing, etc. People in academia say you have more time to do research in industry, but that's not the case for me at all ... reading, coding, and trying to understand things is what I do in my spare time, not my main responsibility."
There is preliminary evidence that research produced under this publish-or-perish pressure may mislead the public and hinder future work. In a 2018 meta-analysis shared by Lipton and Jacob Steinhardt, a member of the statistics department at the University of California, Berkeley and of the Berkeley Artificial Intelligence Lab, the two identified worrying trends in machine learning scholarship, including:
- Failure to distinguish between explanation and speculation, or to identify the sources of empirical gains
- Use of mathematics that obfuscates or impresses rather than clarifies
- Misuse of language, for example by overloading established technical terms
They attribute this in part to the rapid expansion of the community and the resulting shortage of qualified reviewers. They say that incentives that are often misaligned between sound scholarship and short-term measures of success, such as acceptance at a leading academic conference, are also likely culprits.
"In other fields, an unchecked decline in scholarship has led to crisis," Lipton and Steinhardt wrote. "Clear exposition, scientific rigor, and sound theory are all important, both for scientific progress and for fostering a productive discourse with the broader public. Moreover, as practitioners apply [machine learning] in critical domains such as health, law, and autonomous driving, a calibrated awareness of the abilities and limits of [machine learning] systems will help us deploy [machine learning] responsibly."
Indeed, a preprint from Google AI researchers reported that a system could detect cancer in mammograms better than human experts. But as a recent Wired editorial pointed out, some consider mammography screening itself a flawed medical intervention. An AI system could improve outcomes as Google promises, but it could also exacerbate problems such as overtesting, overdiagnosis, and overtreatment.
In another example, researchers at Microsoft Research Asia and Beijing University of Aeronautics and Astronautics developed an AI model that can read and comment on news articles in a human-like way, but the paper describing the model did not mention its potential for abuse. The failure to address these ethical issues sparked strong opposition, prompting the research team to upload a revised version of the paper that addresses them.
Lipton and Steinhardt wrote: "As the impact of machine learning widens, and the audience for research papers increasingly includes students, journalists, and policymakers, these considerations apply to the wider audience as well. By communicating more precise information, better [machine learning] scholarship could accelerate research, reduce the onboarding time for new researchers, and play a more constructive role in public discourse."
In their co-authored report, Lipton and Steinhardt outline some suggestions that may help correct the current trends. They say reviewers and publishers should ask questions such as "Might I have accepted this paper if the authors had done a worse job?" and should value meta-analyses that strip out exaggerated claims. On the authors' side, they recommend focusing on "how" and "why" a method works, not just on its performance, and conducting error analyses, ablation studies, and robustness checks in the course of their research.
Thanks for reading,
AI staff writer