Automated Assessment of ER Model Using the Domain Knowledge

Category: Natural Language Processing Research Publications
Date: August 7, 2019

Muhammad Javed, Yuqing Lin

It is a challenging task to develop a complete and correct Entity Relationship Model (ERM) from requirements. The quality of artifacts in the later stages of software development (e.g., logical database design, physical database design, and the final product) depends on the conceptual models. Domain Knowledge (DK) plays a key role in extracting the artifacts from requirements, and it is widely agreed that most errors in the early stages of software development are due to a lack of adequate DK. In this paper, we propose an automated assessment approach that focuses on major quality issues of an ERM, such as completeness and correctness. An automatically extracted ERM is used as a reference for assessing a manually developed model built from the same set of requirements. We trained the Skip-gram model of word2vec to extract the DK, which assists in error detection and in matching the labels of the ERMs. As a case study, we considered models from the business domain. The inputs of the automated technique are the reference model, the DK, and the model to be evaluated; the output is a list of errors (with their sources indicated) and suggestions. The results show that the proposed approach offers a noticeable improvement over existing approaches.
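As a minimal sketch of how embedding-based label matching of the kind described above could work, the snippet below compares a label from a manually developed ERM against reference-model labels by cosine similarity. The vectors here are hypothetical toy values standing in for trained Skip-gram embeddings; in a real system they would be learned by word2vec from domain-specific text:

```python
import math

# Toy embeddings standing in for trained Skip-gram vectors (hypothetical
# values, for illustration only -- a real system would load vectors
# learned from domain text with word2vec).
EMBEDDINGS = {
    "customer": [0.90, 0.10, 0.00],
    "client":   [0.85, 0.15, 0.05],
    "invoice":  [0.10, 0.90, 0.20],
    "product":  [0.00, 0.20, 0.95],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def match_label(student_label, reference_labels, threshold=0.8):
    """Return the reference label most similar to the student's label,
    or None if no reference label clears the similarity threshold."""
    best, best_sim = None, threshold
    for ref in reference_labels:
        sim = cosine(EMBEDDINGS[student_label], EMBEDDINGS[ref])
        if sim > best_sim:
            best, best_sim = ref, sim
    return best

# An entity named "client" in the evaluated model is matched to the
# reference entity "customer", even though the strings differ.
print(match_label("client", ["customer", "invoice", "product"]))
```

In the paper's setting, such similarity scores let the assessor accept semantically equivalent labels (e.g., synonyms used by different modelers) instead of demanding exact string matches, while labels that fall below the threshold are flagged as potential errors.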
