

dc.contributor.author: Mahmud, Antor
dc.date.accessioned: 2024-10-02T12:28:48Z
dc.date.available: 2024-10-02T12:28:48Z
dc.identifier.uri: http://hdl.handle.net/10464/18953
dc.description.abstract: In traditional federated learning (FL) frameworks for knowledge graph embeddings (KGE), individual clients train their local KGE models independently, and a central server collects and aggregates (e.g., by averaging) these models to produce a global one. This design preserves data privacy throughout training, as the server does not require direct access to clients' data. However, the performance of traditional FL global aggregation algorithms is significantly challenged by the non-identical distribution of data across clients' knowledge graphs. To tackle this issue, we introduce AlignNet, a novel supervised contrastive learning (CL) approach that aligns both entity and relation embeddings across clients in federated settings. AlignNet works by pulling similar embeddings closer together while pushing dissimilar ones further apart, using only the existence of entities and relations without accessing the underlying data or detailed associations. This alignment process ensures robustness and better generalization across diverse clients while still maintaining privacy. Our experiments on benchmark datasets show that AlignNet consistently outperforms current FL methods, especially with more complex models and datasets. We found that AlignNet effectively reduces the variability and noise introduced by the FL process. While traditional FL setups tend to lose performance as more clients join the aggregation process, AlignNet improves as the number of clients increases. This makes AlignNet a strong choice for large-scale federated settings with many clients and diverse data. Overall, our results show that AlignNet is a scalable and reliable solution for federated KGE, making it an excellent fit for real-world applications like healthcare, finance, and distributed IoT networks, where handling data diversity and maintaining performance at scale are crucial.
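The alignment objective the abstract describes — pulling embeddings that correspond to the same entity or relation together across clients while pushing dissimilar ones apart — is the standard supervised contrastive pattern. A minimal numpy sketch of such a loss (this is an illustration of the general technique, not AlignNet's actual implementation; the function name, temperature value, and formulation are assumptions):

```python
import numpy as np

def supervised_contrastive_loss(embeddings, labels, temperature=0.5):
    """Supervised contrastive loss over a batch of embeddings.

    Embeddings sharing a label (e.g. the same entity ID seen by
    different clients) act as positive pairs and are pulled together;
    all other pairs are pushed apart. Simplified sketch only.
    """
    # L2-normalize so similarities are cosine similarities
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature  # pairwise scaled similarities
    n = len(labels)
    loss, count = 0.0, 0
    for i in range(n):
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not positives:
            continue
        # denominator: all pairs except the anchor itself
        denom = sum(np.exp(sim[i, k]) for k in range(n) if k != i)
        for j in positives:
            loss += -np.log(np.exp(sim[i, j]) / denom)
            count += 1
    return loss / count if count else 0.0
```

When embeddings with the same label are already close, the loss is low; mismatched labelings yield a higher loss, which is the gradient signal that drives alignment during training.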
dc.language.iso: eng
dc.publisher: Brock University
dc.subject: Federated Learning
dc.subject: Knowledge Graph
dc.subject: Contrastive Learning
dc.subject: Artificial Intelligence
dc.subject: Non-IID Data
dc.title: Federated Learning on Knowledge Graphs via Contrastive Alignment
dc.type: Electronic Thesis or Dissertation
dc.degree.name: M.Sc. Computer Science
dc.degree.level: Masters
dc.contributor.department: Department of Computer Science
dc.degree.discipline: Faculty of Mathematics and Science
refterms.dateFOA: 2024-10-02T12:28:50Z


Files in this item

Name: Brock_Mahmud_Antor_2024.pdf
Size: 4.388Mb
Format: PDF
