Facebook announced today that Microsoft has expanded its participation in PyTorch, the social network's machine learning framework, taking responsibility for the development and maintenance of the PyTorch build for Windows. The goal is to bring the Windows experience in line with other platforms such as Linux. In the past, PyTorch lagged behind on Windows owing to insufficient test coverage, a complicated installation experience, and missing functionality.
PyTorch, which Facebook publicly released in January 2017, is an open source machine learning library based on Torch, a scientific computing framework and scripting language built on the Lua programming language. TensorFlow, by comparison, has existed slightly longer (since November 2015).
“According to the latest Stack Overflow developer survey, Windows remains the primary operating system for the developer community (46% Windows vs. 28% MacOS). Microsoft is pleased to bring its Windows expertise to the table and to help PyTorch run at its best on Windows,” wrote Facebook and Microsoft in a joint blog post.
Earlier this year, Microsoft released a preview that adds GPU compute support to the Windows Subsystem for Linux (WSL) 2, which sees over 3.5 million active developers running Linux-based tools on Windows each month. The preview explicitly brought support for AI and machine learning applications and enabled PyTorch training workloads on hardware across the Windows ecosystem, including Nvidia cards with CUDA cores.
Facebook says it will work with Microsoft to further improve the quality of the PyTorch build for Windows, in particular by bringing test coverage up to par with other platforms. Microsoft will also maintain the relevant binaries and domain libraries (such as TorchVision, TorchText, and TorchAudio) and support the PyTorch community on GitHub as well as in the PyTorch Windows discussion forums.
“We will continue to improve the Windows experience based on community feedback and requests. So far, feedback from the community has identified distributed training support and a better installation experience with pip as the next areas for improvement,” write Facebook and Microsoft.
In related news, Facebook also said it has moved mixed-precision training features into the PyTorch core, which Windows now supports. While PyTorch trains with 32-bit floating point arithmetic (FP32) by default, Facebook notes that full FP32 is not essential to achieve full accuracy for many deep learning models. This mixed-precision functionality, developed by Nvidia in 2017 and combining the single-precision format (FP32) with a half-precision format (e.g. FP16), delivers the same accuracy as FP32 training while yielding additional performance benefits on Nvidia graphics cards (such as shorter training times and lower memory requirements).
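To illustrate why half precision lowers memory requirements, here is a small sketch using NumPy as a stand-in for the tensor library (this is not the PyTorch API itself): FP16 stores each value in 2 bytes instead of FP32's 4, halving storage, at the cost of far fewer significant digits — which is why mixed precision keeps FP32 around for numerically sensitive operations.

```python
import numpy as np

# One million values in single precision (FP32) vs. half precision (FP16).
x32 = np.ones(1_000_000, dtype=np.float32)
x16 = x32.astype(np.float16)

print(x32.nbytes)  # 4000000 bytes (4 bytes per value)
print(x16.nbytes)  # 2000000 bytes (2 bytes per value)

# FP16 has only ~3 decimal digits of precision: the spacing between
# representable values near 1.0 is about 0.001, so a small increment
# is silently lost. Mixed precision mitigates exactly this kind of loss.
print(np.float16(1.0) + np.float16(0.0004) == np.float16(1.0))  # True
```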
PyTorch 1.6 – the latest version – can automatically run certain graphics card operations in FP16 rather than FP32 precision. Facebook claims that on an Nvidia V100 card, this delivers 1.5 to 5.5 times the throughput of FP32 training, while converging to the same final accuracy.
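In code, this automatic selection is exposed through PyTorch 1.6's `torch.cuda.amp` module. A minimal training-loop sketch follows; the model, data, and hyperparameters are placeholders. `autocast` picks FP16 or FP32 per operation, and `GradScaler` scales the loss to prevent FP16 gradients from underflowing to zero. Both accept `enabled=False`, so the sketch also runs (in plain FP32) on a machine without CUDA.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

use_cuda = torch.cuda.is_available()
device = "cuda" if use_cuda else "cpu"

# Placeholder model and data, purely for illustration.
model = nn.Linear(16, 1).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scaler = torch.cuda.amp.GradScaler(enabled=use_cuda)

data = torch.randn(8, 16, device=device)
target = torch.randn(8, 1, device=device)

for _ in range(3):
    optimizer.zero_grad()
    # Inside autocast, eligible ops run in FP16 on CUDA.
    with torch.cuda.amp.autocast(enabled=use_cuda):
        loss = F.mse_loss(model(data), target)
    scaler.scale(loss).backward()  # backward on the scaled loss
    scaler.step(optimizer)         # unscales gradients, then steps
    scaler.update()                # adapts the scale factor

print(torch.isfinite(loss).item())  # True
```

Note that the loop is otherwise a standard PyTorch training loop; the mixed-precision machinery is the `autocast` context and the three `scaler` calls.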