Big names in artificial intelligence, from Google's Google Brain unit, Microsoft, Stanford, Montreal's Institute for Learning Algorithms, and Cambridge, have been looking for ways to develop machine learning methods that better understand networks. Late last week, a team of researchers from these institutions reported a neural network model that can figure out the structure of a network without needing information from the whole network.

Petar Veličković et al. wrote a paper, titled "Deep Graph Infomax", in which they propose a new method for deciphering a network's hidden parts. Deep Graph Infomax uses overall information about all of Reddit, the social network, to figure out the structure of "local" areas of Reddit. In other words, it is like taking in the big picture and then working backwards to discover smaller clues about that picture.

A network, in this sense, is any group of things linked by connections. In the case of Reddit, the neural network was used to predict the "community structure" of the site. Analyzing Reddit's local areas at scale, however, is a very big task: with many millions of posts, it is almost impossible to collect all of the posts and their respective links from a standing start. The solution demanded a masterful joining together of many recent advances in neural networks.

The researchers adapted a previous work by one of them, known as "Deep Infomax". Deep Infomax was aimed at advancing image recognition, not the comprehension of networks. It used a quantity called "mutual information", which it maximized between high-level "representations" of a picture and small patches of it. In this way, Deep Infomax was able to outperform other image-recognition techniques.
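Mutual information itself is a standard quantity from information theory measuring how much knowing one variable tells you about another. As a toy illustration (with made-up data, not the paper's neural estimator for continuous representations), here is mutual information computed for two discrete variables:

```python
import math
from collections import Counter

# Toy illustration of mutual information. Deep Infomax maximizes this
# quantity between global representations and local patches; here we just
# show what it measures, on perfectly correlated bits (I(X; Y) = 1 bit).
pairs = [(0, 0), (1, 1)] * 50  # 100 samples of (x, y)

def mutual_information(samples):
    """Plug-in estimate of I(X; Y) in bits from (x, y) samples."""
    n = len(samples)
    pxy = Counter(samples)                 # joint counts
    px = Counter(x for x, _ in samples)    # marginal counts of x
    py = Counter(y for _, y in samples)    # marginal counts of y
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        mi += p_joint * math.log2(p_joint / ((px[x] / n) * (py[y] / n)))
    return mi

print(mutual_information(pairs))  # 1.0 bit: y is fully determined by x
```

If the two variables were independent, the same computation would return 0.0; maximizing mutual information therefore pushes representations to be as informative about the patches as possible.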

The researchers took the Deep Infomax methodology and transformed it to work with network representations rather than images. They trained a convolutional neural network adapted to graphs, which in effect re-derived the "labels" that are generally provided by humans to train such AI systems.
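The overall shape of such a graph objective can be sketched as follows. This is a simplified illustration with toy data and untrained random weights, not the authors' implementation: the one-layer encoder, mean readout, bilinear discriminator, and feature-shuffling corruption shown here are assumptions based on the description above.

```python
import numpy as np

# Sketch of a Deep-Graph-Infomax-style forward pass (illustrative only).
# A graph encoder produces "local" node embeddings, a readout produces a
# "global" summary, and a discriminator scores how well each node matches
# the summary, for the real graph versus a corrupted copy. Training (not
# shown) would push real scores up and corrupted scores down.
rng = np.random.default_rng(0)

# Toy graph: 4 nodes in a chain, with self-loops on the diagonal
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)
X = rng.normal(size=(4, 5))      # 5 input features per node

W = rng.normal(size=(5, 8))      # encoder weights (untrained)
W_d = rng.normal(size=(8, 8))    # discriminator weights (untrained)

def encode(adj, feats):
    """One propagation step: average each node's neighborhood, then project."""
    deg = adj.sum(axis=1, keepdims=True)
    h = (adj @ feats) / deg      # neighborhood mean
    return np.tanh(h @ W)        # node embeddings

H = encode(A, X)                 # "local" node representations
s = H.mean(axis=0)               # "global" summary vector of the graph

# Corruption: shuffle node features over the same structure
X_fake = X[rng.permutation(len(X))]
H_fake = encode(A, X_fake)

def score(h, summary):
    """Bilinear discriminator: does a node embedding match the summary?"""
    return h @ W_d @ summary

real_scores = score(H, s)        # one score per real node
fake_scores = score(H_fake, s)   # one score per corrupted node
print(real_scores.shape, fake_scores.shape)
```

The key design choice mirrored here is that the model never needs human labels: the "supervision" comes entirely from telling real (node, summary) pairs apart from corrupted ones.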

The scholars report that Deep Graph Infomax is competitive with other programs that analyze graphs they have never seen before, a setting called inductive analysis. Every "node" in the newly created AI model "has access to structural properties of the entire graph" of the network, whereas other methods have knowledge of only part of the network.

Fascinatingly, the authors claim their method is more refined precisely because it discards a classic method of network analysis referred to as the "random walk".

"The random-walk objective is known to over-emphasize proximity information at the expense of structural information." In other words, the random walk carries a bias that the AI researchers would like to remove.
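That proximity bias is easy to see in a toy simulation (illustrative only, not an experiment from the paper): short random walks on a simple path graph spend nearly all their time near the starting node and may never reach structurally distant ones.

```python
import random
from collections import Counter

# Toy path graph 0-1-2-3-4-5, stored as an adjacency list
graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}

def random_walk(start, length, rng):
    """Take `length` uniform random steps from `start`; return visited nodes."""
    walk = [start]
    for _ in range(length):
        walk.append(rng.choice(graph[walk[-1]]))
    return walk

rng = random.Random(0)
visits = Counter()
for _ in range(2000):
    for node in random_walk(0, 4, rng):
        visits[node] += 1

# Nodes close to the start dominate the counts; node 5, five hops away,
# cannot be reached by a length-4 walk at all.
print(visits)
```

An objective built from such walks therefore mostly learns which nodes are near each other, which is the over-emphasis on proximity the quote describes.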

"Deep Graph Infomax", on the other hand, is built so that each network node is "mindful of the global structural properties of the graph."

The report makes a major point: artificial neural networks that can take in information about the larger picture and match it with smaller details can attain improved "representations", the higher-level knowledge about a network. The work's main contribution is to the broader quest to allow AI to learn such higher-level structure rather than merely the individual links.