Top 10 most-read research articles in the field of Database Management Systems @ 2024

Cloud Database Database as a Service

    Waleed Al Shehri, Department of Computing, Macquarie University, Sydney, NSW 2109, Australia

    ABSTRACT

    Cloud computing has been among the most widely adopted technologies in recent times, and databases have now moved to the cloud as well, so we look into the details of database as a service and its functioning. This paper includes all the basic information about database as a service. The working of database as a service and the challenges it is facing are discussed in appropriate detail. The structure of the database in cloud computing and its working in collaboration with nodes is observed under database as a service. This paper also highlights the important points to consider before choosing the database-as-a-service provider that is best among the others. The advantages and disadvantages of database as a service will let you decide whether or not to use it. Database as a service has already been adopted by many e-commerce companies, and those companies are benefiting from this service.

    KEYWORDS

    Database, cloud computing, Virtualization, Database as a Service (DBaaS).


    For More Details :
    http://airccse.org/journal/ijdms/papers/5213ijdms01.pdf


    Volume Link :
    https://airccse.org/journal/ijdms/current2013.html




Comparative Study of Data Warehouse Design Approaches: A Survey

    Rajni Jindal 1 and Shweta Taneja 2, 1Associate Professor, Dept. of Computer Engineering, 2Research Scholar, Dept. of Computer Engineering, India

    ABSTRACT

    The process of developing a data warehouse starts with identifying and gathering requirements and designing the dimensional model, followed by testing and maintenance. The design phase is the most important activity in the successful building of a data warehouse. In this paper, we survey and evaluate the literature related to the various data warehouse design approaches on the basis of design criteria, and we propose a generalized object-oriented conceptual design framework based on UML that meets all types of user needs.
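
    To make the dimensional-modelling vocabulary above concrete, here is a minimal sketch of a star-schema-style model expressed as plain Python classes, in the spirit of the object-oriented (UML-based) conceptual design the paper advocates. The Sales/Product/Date names and attributes are illustrative assumptions, not taken from the paper.

```python
# Hypothetical dimensional model: one fact class referencing two
# dimension classes, mirroring a UML class diagram.
from dataclasses import dataclass

@dataclass
class ProductDim:        # dimension: descriptive attributes
    product_id: int
    name: str
    category: str

@dataclass
class DateDim:           # dimension: calendar hierarchy day -> month -> year
    date_id: int
    day: int
    month: int
    year: int

@dataclass
class SalesFact:         # fact: references to dimensions plus numeric measures
    product: ProductDim
    date: DateDim
    units_sold: int
    revenue: float

sale = SalesFact(ProductDim(1, "Laptop", "Electronics"),
                 DateDim(20240101, 1, 1, 2024),
                 units_sold=3, revenue=2400.0)
print(sale.revenue / sale.units_sold)  # derived measure per unit: 800.0
```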

    KEYWORDS

    Data warehouse design, Multidimensional modelling, Unified Modelling Language


    For More Details :
    http://airccse.org/journal/ijdms/papers/3211ijdms08.pdf


    Volume Link :
    https://airccse.org/journal/ijdms/current2011.html




A Critical Study of Selected Classification Algorithms for Liver Disease Diagnosis

    Bendi Venkata Ramana1, Prof. M. Surendra Prasad Babu2, Prof. N. B. Venkateswarlu3, 1Associate Professor, Dept. of IT, AITAM, 2Dept. of CS&SE, Andhra University, 3Professor, Dept. of CSE, AITAM, India

    ABSTRACT

    The number of patients with liver disease has been continuously increasing because of excessive consumption of alcohol, inhalation of harmful gases, and intake of contaminated food, pickles and drugs. Automatic classification tools may reduce the burden on doctors. This paper evaluates selected classification algorithms for the classification of some liver patient datasets. The classification algorithms considered here are the Naïve Bayes classifier, C4.5, the back-propagation neural network algorithm, and Support Vector Machines. These algorithms are evaluated on four criteria: accuracy, precision, sensitivity and specificity.
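
    The four evaluation criteria named in the abstract have standard definitions over a binary confusion matrix. As a quick reference, here is a minimal sketch computing them; the counts are made up for illustration and are not the paper's results.

```python
# Standard definitions of the four criteria, computed from a binary
# confusion matrix (TP/TN/FP/FN). Sample counts are invented.
def evaluate(tp: int, tn: int, fp: int, fn: int) -> dict:
    return {
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),  # overall correctness
        "precision":   tp / (tp + fp),                   # predicted positives that are real
        "sensitivity": tp / (tp + fn),                   # real positives that were found
        "specificity": tn / (tn + fp),                   # real negatives that were found
    }

print(evaluate(tp=40, tn=30, fp=10, fn=20))
# {'accuracy': 0.7, 'precision': 0.8, 'sensitivity': 0.666..., 'specificity': 0.75}
```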

    KEYWORDS

    Classification Algorithms, Data Mining, Liver diagnosis


    For More Details :
    https://airccse.org/journal/ijdms/papers/3211ijdms07.pdf


    Volume Link :
    https://airccse.org/journal/ijdms/current2011.html




NOSQL Implementation of a Conceptual Data Model: UML Class Diagram to a Document-oriented Model

    A. Benmakhlouf, University Hassan 1st, BP 577, 26000 Settat, Morocco

    ABSTRACT

    Relational databases have shown their limits in the face of the exponential increase in the volume of manipulated and processed data. New NoSQL solutions have been developed to manage big data. These approaches are an interesting way to build non-relational databases that can support large amounts of data. In this work, we use conceptual data modeling (CDM), based on UML class diagrams, to create the logical structure of a NoSQL database, taking into account the relationships and constraints that determine how data can be stored and accessed. The NoSQL logical data model obtained is based on the Document-Oriented Model (DOM). To eliminate joins, a total and structured nesting is performed on the collections of the document-oriented database. Rules of passage from the CDM to the Logical Document-Oriented Model (LODM) are also proposed in this paper to transform the different types of associations between classes. An application example of this NoSQL database design method is presented for the case of an organization working in the e-commerce business sector.
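
    As a rough illustration of the total nesting described above (not the paper's exact transformation rules), the sketch below flattens a hypothetical one-to-many Customer-Order association into a single collection of nested documents, so that reading a customer together with all of its orders requires no join.

```python
import json

# Relational view: two tables joined by a foreign key.
customers = [{"customer_id": 1, "name": "Alice"}]
orders    = [{"order_id": 10, "customer_id": 1, "total": 99.5},
             {"order_id": 11, "customer_id": 1, "total": 12.0}]

# Document-oriented view: the 1-to-many association is nested, so a
# customer with all of its orders is a single document fetch.
def nest(customers, orders):
    return [
        {**c, "orders": [o for o in orders
                         if o["customer_id"] == c["customer_id"]]}
        for c in customers
    ]

print(json.dumps(nest(customers, orders), indent=2))
```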

    KEYWORDS

    Big Data, Non-Relational, Conceptual Data Modelling, NoSQL Logical Data Model, Nested Document-Oriented Model.


    For More Details :
    https://aircconline.com/ijdms/V10N2/10218ijdms01.pdf


    Volume Link :
    https://airccse.org/journal/ijdms/current2018.html




A Study on Challenges and Opportunities in Master Data Management

    Tapan Kumar Das1 and Manas Ranjan Mishra2, 1SITE, VIT University, 2IBM India Pvt. Ltd, India

    ABSTRACT

    This paper aims to provide a single data definition of master data for cross-application consistency. The concepts related to master data management are discussed in a broader spectrum. The current challenges companies face while implementing MDM solutions are outlined. We present a case study to highlight why master data management is imperative for enterprises in optimizing their business. We also identify some of the long-term benefits for enterprises of implementing MDM.

    KEYWORDS

    Data quality, Information system, Unstructured, Transactional data flood.


    For More Details :
    http://airccse.org/journal/ijdms/papers/3211ijdms09.pdf


    Volume Link :
    https://airccse.org/journal/ijdms/current2011.html




High Capacity Data Hiding using LSB Steganography and Encryption

    Shamim Ahmed Laskar and Kattamanchi Hemachandran, Department of Computer Science, Assam University, Silchar, Assam, India

    ABSTRACT

    The network provides a method of communication to distribute information to the masses. With the growth of data communication over computer networks, the security of information has become a major issue. Steganography and cryptography are two different data hiding techniques. Steganography hides messages inside some other digital medium. Cryptography, on the other hand, obscures the content of the message. We propose a high-capacity data embedding approach that combines steganography and cryptography. In the process, a message is first encrypted using the transposition cipher method, and the encrypted message is then embedded inside an image using the LSB insertion method. The combination of these two methods enhances the security of the embedded data. This combined methodology satisfies requirements such as capacity, security and robustness for secure data transmission over an open channel. A comparative analysis is made to demonstrate the effectiveness of the proposed method by computing the mean square error (MSE) and the peak signal-to-noise ratio (PSNR). We analyzed the data hiding technique using image performance parameters such as entropy, mean and standard deviation. The stego images are tested by transmitting them, and the embedded data are successfully extracted by the receiver. The main objective of this paper is to provide resistance against visual and statistical attacks as well as high capacity.
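
    A minimal sketch of the two-stage idea described above: encrypt with a toy columnar transposition cipher (the abstract does not spell out the exact cipher layout, so this variant is an assumption), embed the ciphertext into the least significant bits of a synthetic grayscale cover, and measure distortion with MSE and PSNR.

```python
import math

def transpose_encrypt(msg: str, cols: int = 4) -> str:
    # Toy columnar transposition: write row-wise, read column-wise.
    padded = msg + " " * (-len(msg) % cols)
    return "".join(padded[c::cols] for c in range(cols))

def lsb_embed(pixels: bytearray, data: bytes) -> bytearray:
    # Overwrite the least significant bit of each pixel with one
    # message bit; each byte of data consumes 8 pixels.
    stego = bytearray(pixels)
    bits = [(byte >> (7 - i)) & 1 for byte in data for i in range(8)]
    assert len(bits) <= len(stego), "cover image too small"
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit
    return stego

def mse_psnr(cover: bytearray, stego: bytearray) -> tuple:
    mse = sum((c - s) ** 2 for c, s in zip(cover, stego)) / len(cover)
    psnr = float("inf") if mse == 0 else 10 * math.log10(255 ** 2 / mse)
    return mse, psnr

cover = bytearray(range(256)) * 4             # stand-in for grayscale pixels
cipher = transpose_encrypt("attack at dawn").encode()
stego = lsb_embed(cover, cipher)
print(mse_psnr(cover, stego))                 # tiny MSE -> high PSNR
```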

    KEYWORDS

    Steganography, Cryptography, Plain Text, Encryption, Decryption, Transposition Cipher, Least Significant Bit, Human Visual System, Mean Square Error, Peak Signal-to-Noise Ratio


    For More Details :
    http://airccse.org/journal/ijdms/papers/4612ijdms05.pdf


    Volume Link :
    https://airccse.org/journal/ijdms/current2012.html




Top NewSQL Databases and Features Classification

    Ahmed Almassabi1, Omar Bawazeer and Salahadin Adam2, 1Department of Computer Science, Najran University, 2Department of Information and Computer Science, King Fahd University of Petroleum and Minerals, Saudi Arabia

    ABSTRACT

    The versatility of NewSQL databases lies in achieving low-latency constraints as well as reducing the cost of commodity nodes. Our work emphasizes how big data is addressed by the top NewSQL databases, considering their features. This paper surveys the features of some of the top NewSQL databases [54], selected for their high demand and usage. In the first part, around 11 NewSQL databases are investigated, and their features elicited, compared and examined, so as to reveal the high-level hierarchy of NewSQL databases together with their similarities and differences. Our taxonomy involves four categories describing how NewSQL databases handle and process big data, considering the technologies offered or supported; advantages and disadvantages are conveyed for each of the NewSQL databases. In the second part, we register our findings along several aspects: the first taxonomy classifies feature characteristics as either functional or non-functional. A second taxonomy concerns data integrity and data manipulation, where we found data features classified as supervised, semi-supervised, or unsupervised. A third taxonomy considers how well each single NewSQL database can deal with different types of databases. Surprisingly, not only do NewSQL databases process regular (raw) data, but they are also versatile enough to accommodate diverse types of data such as historical, vertically distributed, real-time, streaming, and timestamp databases. We thereby find NewSQL databases significant enough to survive and to associate with other technologies in support of other database types such as NoSQL, traditional, distributed-system, and semi-relational databases, which forms our fourth taxonomy. We visualize the results for these categories using chart graphs. Eventually, NewSQL databases motivated us to analyze their big data throughput, which we could classify into good data or bad data. We conclude this paper with

    KEYWORDS

    NewSQL, NoSQL, RDBMS, FF, Non-FF, Big Data


    For More Details :
    https://aircconline.com/ijdms/V10N2/10218ijdms02.pdf


    Volume Link :
    https://airccse.org/journal/ijdms/current2018.html




Algorithm for Relational Database Normalization Up to 3NF

    Moussa Demba, Aljouf University, Sakaka, Kingdom of Saudi Arabia

    ABSTRACT

    When an attempt is made to modify tables that have not been sufficiently normalized, undesirable side effects may follow. These can be further specified as update, insertion or deletion anomalies, depending on whether the action that causes the error is a row update, insertion or deletion, respectively. If a relation R has more than one key, each key is referred to as a candidate key of R. Most recent practical work on database normalization uses a restricted definition of normal forms in which only the primary key (an arbitrarily chosen key) is taken into account, ignoring the remaining candidate keys. In this paper, we propose an algorithmic approach to database normalization up to third normal form that takes into account all candidate keys, including the primary key. The effectiveness of the proposed approach is evaluated on many real-world examples.
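
    The core computation behind finding all candidate keys is the closure of an attribute set under a set of functional dependencies. The sketch below shows the closure plus a brute-force candidate-key search; the schema R(A, B, C, D) and its FDs are invented for illustration and deliberately yield three candidate keys, the situation the paper argues restricted approaches mishandle.

```python
from itertools import combinations

def closure(attrs: frozenset, fds) -> frozenset:
    # Repeatedly apply functional dependencies X -> Y until a fixpoint.
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return frozenset(result)

def candidate_keys(schema: frozenset, fds):
    # Brute force over subsets, smallest first: a candidate key is a
    # minimal attribute set whose closure covers the whole schema.
    keys = []
    for size in range(1, len(schema) + 1):
        for combo in combinations(sorted(schema), size):
            s = frozenset(combo)
            if closure(s, fds) == schema and not any(k < s for k in keys):
                keys.append(s)
    return keys

# Toy schema R(A, B, C, D) with FDs A->B, B->C, CD->A.
R = frozenset("ABCD")
fds = [(frozenset("A"), frozenset("B")),
       (frozenset("B"), frozenset("C")),
       (frozenset("CD"), frozenset("A"))]
print(candidate_keys(R, fds))   # three candidate keys: AD, BD and CD
```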

    KEYWORDS

    Relational database, Normalization, Normal forms, functional dependency, redundancy


    For More Details :
    http://airccse.org/journal/ijdms/papers/5313ijdms03.pdf


    Volume Link :
    https://airccse.org/journal/ijdms/current2013.html




Mapping Common Errors in Entity Relationship Diagram Design of Novice Designers

    Rami Rashkovits1 and Ilana Lavy2, 1Department of Management Information Systems, 2 Department of Information Systems, Israel

    ABSTRACT

    Data modeling in the context of database design is a challenging task for any database designer, even more so for novice designers. A proper database schema is a key factor in the success of any information system, hence the conceptual data modeling that yields the database schema is an essential part of system development. However, novice designers encounter difficulties in understanding and implementing such models. This study aims to identify the difficulties in understanding and implementing data models and to explore the origins of these difficulties. The research examines the data models produced by students and maps the errors they made. The errors were classified using the SOLO taxonomy. The study also sheds light on the underlying reasons for the errors made during the design of the data model, based on interviews conducted with a representative group of the study participants. We also suggest ways to improve novice designers' performance more effectively, so that they can draw more accurate models and make use of advanced design constructs such as entity hierarchies, ternary relationships, aggregated entities, and the like. The research findings might enrich the body of research on data model design from

    KEYWORDS

    Database, Conceptual Data Modelling, Novice Designers


    For More Details :
    https://aircconline.com/ijdms/V13N1/13121ijdms01.pdf


    Volume Link :
    https://airccse.org/journal/ijdms/current2021.html




An Infectious Disease Prediction Method Based on K-Nearest Neighbor Improved Algorithm

    Yaming Chen1 , Weiming Meng2 , Fenghua Zhang3 ,Xinlu Wang4 and Qingtao Wu5, 1,2,4 Computer Science and Technology, Henan University of Science and Technology, 3Computer Technology, Henan University of Science and Technology, 5Professor, Henan University of Science and Technology, China

    ABSTRACT

    With the continuous development of medical information systems, the potential value of a large amount of medical information has not yet been exploited. We mine a large number of outpatient medical records and train disease prediction models to assist doctors in diagnosis and improve work efficiency. This paper proposes a disease prediction method based on an improved k-nearest neighbor algorithm, from the perspective of patient similarity analysis. The method draws on the idea of clustering: it extracts the samples near the center points generated by clustering and uses these samples as a new training sample set for the K-nearest neighbor algorithm. Based on maximum entropy, the K-nearest neighbor algorithm is then improved to overcome the influence of the weight coefficient in the traditional algorithm and to improve its accuracy. The real
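
    As a rough sketch of the clustering step described above (the maximum-entropy weighting refinement is not reproduced here), the code below clusters a synthetic training set with k-means, keeps only the samples nearest each cluster centre as the reduced training set, and runs an ordinary KNN classifier on it. All data and parameter choices are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))                 # synthetic "patient" features
y = (X[:, 0] + X[:, 1] > 0).astype(int)       # synthetic diagnosis label

# Step 1 (clustering idea from the abstract): keep only the samples
# closest to each cluster centre as the new, smaller training set.
km = KMeans(n_clusters=10, n_init=10, random_state=0).fit(X)
keep = []
for c in range(10):
    idx = np.where(km.labels_ == c)[0]
    dists = np.linalg.norm(X[idx] - km.cluster_centers_[c], axis=1)
    keep.extend(idx[np.argsort(dists)[:5]])   # 5 nearest samples per centre

# Step 2: ordinary KNN trained on the reduced sample set.
knn = KNeighborsClassifier(n_neighbors=3).fit(X[keep], y[keep])
print(knn.score(X, y))                        # accuracy on the full set
```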

    KEYWORDS

    Data Mining, KNN, Clustering, Maximum Entropy


    For More Details :
    https://aircconline.com/ijdms/V11N1/11119ijdms02.pdf


    Volume Link :
    https://airccse.org/journal/ijdms/current2019.html







