Imagine the following: star clusters, nebulae, and other interstellar phenomena conjured from whole cloth by a computer. It may sound like the description of a futuristic holodeck, but researchers from the Institute of Perception and the Institute of Astronomy at the University of Edinburgh have designed just such a system using artificial intelligence (AI).
In a paper published on the preprint server arXiv.org ("Forging New Worlds: High-Resolution Synthetic Galaxies with Chained Generative Adversarial Networks"), they describe an AI model that can produce high-resolution images of synthetic galaxies that match the distributions of real galaxies.
At the core of the team's machine-learning architecture are generative adversarial networks (GANs): two-part neural networks consisting of generators, which produce samples, and discriminators, which attempt to distinguish the generated samples from real ones. GANs are something like the prodigies among AI algorithms. They have been used to discover new medicines, create convincing photos of burgers and butterflies, and even synthesize artificial brain cancer scans.
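The adversarial setup can be illustrated with a deliberately tiny sketch: a one-parameter generator learns to mimic a 1D Gaussian while a logistic-regression discriminator tries to tell real samples from fakes. Everything here (the affine generator, the learning rates, the target distribution) is an illustrative assumption, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from N(4, 1). The generator must learn to map
# standard-normal noise onto this distribution.
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

# Generator: a single affine map g(z) = a*z + b (a toy stand-in for the
# multi-layer generators used in practice).
a, b = 1.0, 0.0
# Discriminator: logistic regression D(x) = sigmoid(w*x + c).
w, c = 0.1, 0.0

lr, batch = 0.01, 64
for step in range(2000):
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b
    real = real_batch(batch)

    # Discriminator update (gradient ascent): push D(real) -> 1, D(fake) -> 0.
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * (np.mean((1 - d_real) * real) + np.mean(-d_fake * fake))
    c += lr * (np.mean(1 - d_real) + np.mean(-d_fake))

    # Generator update: push D(fake) -> 1, i.e. fool the discriminator.
    d_fake = sigmoid(w * fake + c)
    g_sig = (1 - d_fake) * w      # d/dfake of log D(fake)
    a += lr * np.mean(g_sig * z)  # fake = a*z + b, so chain through z
    b += lr * np.mean(g_sig)

print(f"generator learned mean ~ {b:.2f}, spread ~ {abs(a):.2f}")
```

As training alternates between the two updates, the generator's output distribution drifts toward the real one; the same tug-of-war, scaled up to convolutional networks, is what produces realistic galaxy images.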
The proposed galaxy-generating system consists of two five-layer GANs chained together: a Stage-I GAN and a Stage-II GAN. The first generates low-resolution (64×64 pixel) images, while the second converts them into higher-resolution (128×128 pixel) images using a technique called super-resolution. In practice, the researchers found that the Stage-II GAN effectively hallucinated the missing pixels, aiming for realism rather than pixel-level accuracy.
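The two-stage chaining can be sketched in terms of shapes alone. In this toy version, Stage I maps a latent vector to a 64×64 image and Stage II doubles the resolution with a nearest-neighbour upsample plus a smoothing pass; both stages are fixed illustrative stand-ins for the learned five-layer networks.

```python
import numpy as np

rng = np.random.default_rng(1)

def stage_one(z):
    """Toy Stage-I 'generator': project a latent vector to a 64x64 image.
    A real GAN would use learned layers; a fixed random projection just
    illustrates the shapes."""
    proj = rng.normal(size=(64 * 64, z.size)) / np.sqrt(z.size)
    return np.tanh(proj @ z).reshape(64, 64)

def stage_two(img):
    """Toy Stage-II 'super-resolution': 2x nearest-neighbour upsample plus
    a 5-point averaging pass standing in for learned refinement. The added
    high-frequency detail is invented ('hallucinated'), not recovered."""
    up = img.repeat(2, axis=0).repeat(2, axis=1)  # 64x64 -> 128x128
    pad = np.pad(up, 1, mode="edge")
    return (pad[:-2, 1:-1] + pad[2:, 1:-1] + pad[1:-1, :-2]
            + pad[1:-1, 2:] + pad[1:-1, 1:-1]) / 5.0

z = rng.normal(size=100)   # latent noise vector
low = stage_one(z)         # shape (64, 64)
high = stage_two(low)      # shape (128, 128)
print(low.shape, high.shape)
```

The chaining matters because a single GAN trained directly at 128×128 is harder to stabilize; the low-resolution stage pins down the global structure and the second stage only has to fill in detail.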
To "encourage" the Stage-II GAN's generator to produce synthetic galaxy images that resemble the upscaled versions of their real counterparts, the paper's authors introduced a dual-objective loss, which computed an error metric between the resolution-enhanced images and real galaxies. The result was a larger number of generated samples that retained "rarer" galaxy properties, such as spiral arms.
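A dual objective of this kind can be sketched as the sum of an adversarial term and a pixel-wise error term. The specific choices below (non-saturating log loss, L2 pixel distance, a weight of 10) are assumptions for illustration; the paper's exact metric and weighting may differ.

```python
import numpy as np

def adversarial_loss(d_fake):
    """Non-saturating generator loss: -mean(log D(G(z)))."""
    return -np.mean(np.log(d_fake + 1e-8))

def pixel_loss(fake_hr, real_hr):
    """Error metric between the resolution-enhanced image and a real
    reference (L2 here; the exact metric is an assumption of this sketch)."""
    return np.mean((fake_hr - real_hr) ** 2)

def dual_objective(d_fake, fake_hr, real_hr, weight=10.0):
    """Combined loss: fool the discriminator AND stay close to the
    reference, which helps preserve rare features such as spiral arms."""
    return adversarial_loss(d_fake) + weight * pixel_loss(fake_hr, real_hr)

rng = np.random.default_rng(2)
fake = rng.random((128, 128))          # resolution-enhanced sample
real = rng.random((128, 128))          # real high-resolution galaxy
d_scores = rng.uniform(0.3, 0.7, 16)   # discriminator outputs on a batch
loss = dual_objective(d_scores, fake, real)
print(f"dual-objective loss: {loss:.3f}")
```

The pixel term anchors the generator to plausible structure, so the adversarial term alone cannot drift toward only the most common, featureless galaxy shapes.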
The researchers trained the AI system on a PC with a single Nvidia GTX 1060 GPU, feeding it full-color galaxy images from Galaxy Zoo 2, a crowdsourced astronomy project. In evaluating the results, they considered four properties: ellipticity, or the degree of deviation from circularity; elevation angle from the horizontal; total flux; and the semi-major axis measurement (one half of the longest diameter of the ellipse).
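Several of these properties can be estimated from flux-weighted second-order image moments, a standard morphology technique (not necessarily the paper's exact measurement pipeline). The sketch below measures ellipticity, semi-major axis, and total flux on a synthetic elongated Gaussian "galaxy".

```python
import numpy as np

def moments_shape(img):
    """Estimate ellipticity, semi-major axis, and total flux from
    flux-weighted second-order moments of an image."""
    y, x = np.indices(img.shape)
    flux = img.sum()
    cx, cy = (x * img).sum() / flux, (y * img).sum() / flux
    mxx = ((x - cx) ** 2 * img).sum() / flux
    myy = ((y - cy) ** 2 * img).sum() / flux
    mxy = ((x - cx) * (y - cy) * img).sum() / flux
    # Eigenvalues of the moment matrix give the squared axis lengths.
    tr, det = mxx + myy, mxx * myy - mxy ** 2
    disc = np.sqrt(max(tr ** 2 / 4 - det, 0.0))
    semi_major = np.sqrt(tr / 2 + disc)
    semi_minor = np.sqrt(tr / 2 - disc)
    ellipticity = 1 - semi_minor / semi_major  # 0 = circle, -> 1 = elongated
    return ellipticity, semi_major, flux

# Synthetic elongated Gaussian "galaxy" for illustration.
y, x = np.indices((64, 64))
img = np.exp(-(((x - 32) / 10.0) ** 2 + ((y - 32) / 4.0) ** 2))
ellipticity, semi_major, total_flux = moments_shape(img)
print(f"ellipticity ~ {ellipticity:.2f}, semi-major axis ~ {semi_major:.1f} px")
```

Comparing the distributions of such measurements for generated and real samples is what lets the authors argue that the synthetic galaxies are "physically realistic" rather than merely visually plausible.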
In the end, the model produced "physically realistic" images of galaxies that closely resembled the real thing, the researchers wrote. They suggest their system could be used to augment datasets of real samples that serve as training data for deep learning models, such as galaxy classification and segmentation models, which require large numbers of training samples.
"Generative models capable of producing physically realistic galaxy images have many practical applications," they wrote. "[Our] work demonstrates the potential of GAN architectures as a valuable tool for modern astronomy."