Author: Steph-Yves Louis
Advisor: Dr. Jianjun Hu
Date: May 2nd
Time: 1 - 3 pm
Place: Virtual
The use of graphs to represent data in machine learning has grown in popularity in both academia and industry because of their inherent benefits. With their flexible nature and direct correspondence to real-world objects, graph representations have contributed considerably to advancing the state-of-the-art performance of machine learning in materials science.
In this dissertation, we discuss how machines can learn from graph-encoded data and produce excellent results through graph neural networks (GNNs). Notably, we focus our adaptation of graph neural networks on four tasks: predicting crystal material properties, nullifying the negative impact of inferior graph nodes during learning, predicting electrode voltages, and generating crystal structures from material formulas. For the first topic, we propose and evaluate a molecule-appropriate adaptation of the original graph-attention (GAT) model for materials property prediction. By encoding the bonds formed by atomic elements and adding a final global-attention layer, our approach (GATGNN) achieves strong performance in our experiments and provides an interpretable explanation of each atom's contribution.
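To illustrate the two changes mentioned above, the sketch below shows a generic edge-aware attention pass plus a global-attention readout in plain numpy. This is our own simplification for exposition, not the actual GATGNN implementation; all function names, feature dimensions, and the toy ring graph are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def edge_aware_attention(h, e, adj, W, a):
    """One attention pass where the score for edge (i, j) also depends
    on the encoded bond feature e[i, j] (illustrative simplification)."""
    z = h @ W                                   # projected node features
    h_out = np.zeros_like(z)
    for i in range(len(h)):
        nbrs = np.nonzero(adj[i])[0]
        # sender, receiver, and bond features concatenated per neighbor
        feats = np.concatenate(
            [np.repeat(z[i][None, :], len(nbrs), axis=0), z[nbrs], e[i, nbrs]],
            axis=1)
        alpha = softmax(np.tanh(feats @ a))     # normalized over neighbors
        h_out[i] = (alpha[:, None] * z[nbrs]).sum(axis=0)
    return h_out

def global_attention_pool(h, q):
    """Final global-attention layer: one weight per atom, which is what
    makes per-atom contributions interpretable."""
    w = softmax(h @ q)
    return (w[:, None] * h).sum(axis=0), w

# toy crystal graph: 4 atoms on a ring, 5-dim node and 3-dim bond features
n, d, de = 4, 5, 3
h = rng.standard_normal((n, d))
e = rng.standard_normal((n, n, de))
adj = np.roll(np.eye(n), 1, axis=1) + np.roll(np.eye(n), -1, axis=1)
W = rng.standard_normal((d, d))
a = rng.standard_normal(2 * d + de)

graph_vec, atom_weights = global_attention_pool(
    edge_aware_attention(h, e, adj, W, a), rng.standard_normal(d))
```

The per-atom weights `atom_weights` sum to one, so they can be read directly as each atom's share of the graph-level representation.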
For the second topic, we analyze the learning process of several well-known GNNs and identify a common issue: the propagation of noisy information. To reduce the spread of particularly harmful information, we propose a simple, memory-efficient, and highly scalable method called NODE-SELECT. Our results demonstrate that the combination of hard attention coefficients, a binary learnable selection parameter, and a parallel arrangement of the layers significantly reduces the negative impact of noisy-data propagation within a GNN.
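The three ingredients just listed can be sketched as follows. This is a minimal numpy illustration under our own assumptions (a hard threshold gate, mean aggregation, summed parallel layers), not the actual NODE-SELECT code; the names and the 3-node example are hypothetical.

```python
import numpy as np

def hard_gate(s, threshold=0.0):
    """Binary (hard) selection: a node propagates its features only
    when its learnable score s clears the threshold."""
    return (s > threshold).astype(float)

def select_layer(h, adj, s):
    """One selection layer: messages are averaged only over neighbors
    whose gate is open, so a gated-off noisy node contributes nothing."""
    g = hard_gate(s)
    deg = np.maximum(adj @ g, 1.0)              # open neighbors per node
    return h + adj @ (g[:, None] * h) / deg[:, None]

def parallel_select(h, adj, scores):
    """Parallel (rather than stacked) arrangement: every layer reads the
    raw input and the outputs are summed, limiting noise accumulation."""
    return sum(select_layer(h, adj, s) for s in scores)

# 3-node chain where node 2 carries wildly noisy features
h = np.array([[1.0, 0.0], [0.0, 1.0], [1e6, 1e6]])
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
s = np.array([1.0, 1.0, -1.0])                  # gate shuts the noisy node
out = select_layer(h, adj, s)
```

With the gate closed on node 2, its 1e6-scale features never reach nodes 0 and 1, which is the intended effect of the hard selection.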
For the third topic, we extend our GATGNN method and apply it to simulate electrode reactions for predicting voltages. Finally, for the last topic, we propose a conditional generative method, named StructR-Diffusion, for generating crystal structures. In this approach, we combine GNNs, stable diffusion, and graph transformers to learn the 3-dimensional positioning of the elements within a unit cell. Various statistical tests, physical-attribute predictions, and visual inspections show that our proposed graph convolutional network model has good generative capability. Our efficient model can generate diverse structures that are well optimized even prior to DFT relaxation.
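As a rough picture of what diffusion-based structure generation involves, the sketch below runs a generic DDPM-style reverse process over fractional atom coordinates in a periodic unit cell. This is a hedged toy under our own assumptions: the `eps_model` here is a zero-returning placeholder standing in for the graph-network noise predictor, the schedule is arbitrary, and the actual StructR-Diffusion formulation may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def denoise_step(x, t, eps_model, betas):
    """One DDPM-style reverse step on fractional atom coordinates;
    eps_model is a placeholder for the learned noise predictor."""
    beta = betas[t]
    alpha_bar = np.prod(1.0 - betas[: t + 1])
    mean = (x - beta / np.sqrt(1.0 - alpha_bar) * eps_model(x, t)) \
        / np.sqrt(1.0 - beta)
    if t > 0:                                   # no noise on the final step
        mean = mean + np.sqrt(beta) * rng.standard_normal(x.shape)
    return mean % 1.0                           # wrap into the periodic cell

def generate(n_atoms, eps_model, betas):
    """Sample fractional coordinates by iterating the reverse process
    from pure noise."""
    x = rng.standard_normal((n_atoms, 3)) % 1.0
    for t in reversed(range(len(betas))):
        x = denoise_step(x, t, eps_model, betas)
    return x

betas = np.linspace(1e-4, 0.02, 50)
coords = generate(8, lambda x, t: np.zeros_like(x), betas)  # dummy predictor
```

The modulo at each step keeps coordinates inside the unit cell, reflecting the periodicity that distinguishes crystal generation from free-space molecule generation.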