
AI Weekly: Why a slow machine learning movement could be a good thing



In 2019, almost 25,000 articles on AI and machine learning were published in the United States alone, compared to around 10,000 in 2015. At NeurIPS 2019, one of the world's largest conferences for machine learning and computational neuroscience, it was briefly suggested that this momentum reflects an increase in publicity and funding – and, accordingly, competition – within the AI research community. However, some scientists suspect that the relentless drive for progress could do more harm than good.

In a recent tweet, Zachary Lipton, an assistant professor at Carnegie Mellon University jointly appointed to the Tepper School of Business and the machine learning department, proposed a one-year paper moratorium for the entire community, one that could promote "thinking" without "sprinting / rushing / spamming" to meet deadlines.

"The paper avalanche actually hurts people who didn't [high citation counts and nice academic positions]," he said. "The noise level of the field drives things to a point where serious people no longer see" having paper "as meaningful at all … [The] The mere fact of having paper has become a useless signal because of the noise level is even that high among the accepted papers. “

Timnit Gebru, technical lead of the ethical artificial intelligence team at Google, echoed this sentiment in a tweet ahead of the AAAI conference on artificial intelligence in New York City earlier this month. "I'm currently involved in too many conference- and service-related things. I can't even keep up with everything. In addition to reviewing and area chairing, there is also logistics … organization, etc.," she said. "Academics say you have more time to do research in industry, but I haven't found that at all … Reading, coding, and understanding feel more like an activity I do in my spare time than my primary responsibility."

There is preliminary evidence that the crisis has led to research that could mislead the public and hinder future work. In a 2018 meta-analysis, Lipton and Jacob Steinhardt, a member of the statistics faculty at the University of California, Berkeley and the Berkeley Artificial Intelligence Lab, claim that troubling trends have emerged in machine learning scholarship, including:

  • Failure to distinguish between explanation and speculation, and failure to identify the sources of empirical gains.
  • The use of mathematics that obscures or impresses rather than clarifies.
  • The misuse of language, for example by overloading established technical terms.

They attribute this in part to the rapid expansion of the community and the resulting thinness of the reviewer pool. The "often misaligned" incentives between scholarship and short-term measures of success – such as acceptance at a leading academic conference – are also to blame, they say.

"In other areas, an unchecked decline in scholarship has led to crisis," wrote Lipton and Steinhardt. "Greater rigor in presentation, science, and theory is essential for both scientific progress and fostering a productive discourse with the general public. As practitioners apply [machine learning] in critical areas such as health, law, and autonomous driving, a calibrated awareness of the capabilities and limits of [machine learning] systems helps us to use [machine learning] responsibly."

Indeed, a preprint paper by Google AI researchers demonstrated a system that could outperform human experts in finding cancer in mammograms. However, a recent Wired editorial pointed out that mammography screening is viewed by some as a flawed medical intervention. AI systems like the one Google proposed could improve results, but could at the same time exacerbate the problems of over-testing, over-diagnosis, and over-treatment.

In a separate case, researchers from Microsoft Research Asia and Beihang University developed an AI model that could read and comment on articles in a human-like manner, but the paper describing the model made no mention of its potential for abuse. This failure to address ethical issues triggered a backlash that prompted the research team to upload an updated paper addressing the concerns.

"As the impact of machine learning widens and the audience for research papers increasingly includes students, journalists, and policymakers, these considerations apply to that broader audience as well," write Lipton and Steinhardt. "By communicating more precise information with greater clarity, better [machine learning] scholarship could accelerate the pace of research, shorten the onboarding time for new researchers, and play a more constructive role in public discourse."

In their co-authored report, Lipton and Steinhardt outline several suggestions that could help correct the current trend. They say reviewers and publishers should set better incentives by asking questions such as "Could I have accepted this paper if the authors had done a worse job?" and by highlighting meta-analyses that deflate over-the-top claims. On the author side, they recommend digging into the "how" and "why" of an approach rather than just its performance, and carrying out error analyses, ablation studies, and robustness checks in the course of research.

For AI coverage, send news tips to Khari Johnson and Kyle Wiggers and AI editor Seth Colaner – and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel.

Thank you for reading,

Kyle Wiggers

AI staff writer

