COLOR CONVERTING OF ENDOSCOPIC IMAGES USING DECOMPOSITION THEORY AND PRINCIPAL COMPONENT ANALYSIS
Keivan Ansari1,2, Alexandre Krebs1, Yannick Benezeth1 and Franck Marzani1 1Université de Bourgogne, France 2Institute for Color Science and Technology, Iran
Endoscopic color imaging technology has greatly assisted clinicians in making better decisions since its initial introduction. In this study, a novel combined method is employed, comprising the quadratic objective functions for the dichromatic model by Krebs et al., Wyszecki's spectral decomposition theory, and the well-known principal component analysis (PCA) technique. The new algorithm converts the color space of a conventional endoscopic color image, as a target image, toward that of a Narrow Band Image (NBI), as a source image. The target and source images are captured under known illuminant/sensor/filter combinations, and the matrix Q of the decomposition theory is computed for those combinations. The intrinsic images extracted by the Krebs technique are multiplied by the matrix Q to obtain their corresponding fundamental stimuli. Subsequently, PCA is applied to the obtained fundamental stimuli to derive the eigenvectors of the target and the source. Finally, the first three eigenvectors of each matrix are taken together as the converting mapping matrix. The results show that the color gamut of the converted target image moves markedly closer to the color gamut of the NBI image.
Color Converting, Endoscopic Imaging, Dichromatic Model, Principal Component Analysis, Decomposition Theory.
For More Details :
https://aircconline.com/csit/papers/vol9/csit91812.pdf
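The abstract describes the final mapping step but gives no implementation; as a rough, hypothetical illustration of that step only (not the authors' code), the sketch below builds a PCA eigenbasis for a target and a source pixel set and maps the target colors toward the source gamut. In the paper the eigenvectors are computed from fundamental stimuli (intrinsic images multiplied by the matrix Q); here plain pixel sets stand in for those stimuli, and the mean-shift is an added assumption.

```python
import numpy as np

def pca_basis(pixels):
    """Columns are covariance eigenvectors of an (N, 3) pixel set, largest first."""
    centered = pixels - pixels.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(centered.T @ centered / len(pixels))
    return eigvecs[:, np.argsort(eigvals)[::-1]]

def convert_colors(target_pixels, source_pixels):
    """Map target colors toward the source gamut using the two PCA eigenbases.

    The 3x3 converting matrix is built from the first three eigenvectors of
    each set, following the abstract's description.
    """
    v_target = pca_basis(target_pixels)
    v_source = pca_basis(source_pixels)
    mapping = v_source @ v_target.T          # converting mapping matrix
    centered = target_pixels - target_pixels.mean(axis=0)
    return centered @ mapping.T + source_pixels.mean(axis=0)
```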
UNDERSTANDING HOW COLOUR CONTRAST IN HOTEL & TRAVEL WEBSITE AFFECTS EMOTIONAL PERCEPTION, TRUST, AND PURCHASE INTENTION OF VISITORS
Pimmanee Rattanawicha and Sutthipong Yungratog Chulalongkorn Business School, Chulalongkorn University, Bangkok, Thailand
To understand how colour contrast in e-Commerce websites, such as hotel & travel websites, affects (1) emotional perception (i.e. pleasure, arousal, and dominance), (2) trust, and (3) purchase intention of visitors, a two-phase empirical study is conducted. In the first phase of this study, 120 volunteer participants are asked to choose the most appropriate colour from a colour wheel for a hotel & travel website. The colour “Blue Cyan”, the most chosen colour from this phase of the study, is then used as the foreground colour to develop three hotel & travel websites with three different colour contrast patterns for the second phase of the study. A questionnaire is also developed from previous studies to collect emotional perception, trust, and purchase intention data from another group of 145 volunteer participants. It is found from data analysis that, for visitors as a whole, colour contrast has significant effects on their purchase intention. For male visitors, colour contrast significantly affects their trust and purchase intention. Moreover, for generation X and generation Z visitors, colour contrast has effects on their emotional perception, trust, and purchase intention. However, no significant effect of colour contrast is found for female or generation Y visitors.
Colour Contrast, e-Commerce, Website Design
For More Details :
https://aircconline.com/csit/papers/vol9/csit91706.pdf
NONNEGATIVE MATRIX FACTORIZATION UNDER ADVERSARIAL NOISE
Peter Ballen Department of Computer and Information Science, University of Pennsylvania, Philadelphia, USA
Nonnegative Matrix Factorization (NMF) is a popular tool to estimate the missing entries of a dataset under the assumption that the true data has a low-dimensional factorization. One example of such a matrix is found in movie recommendation settings, where NMF corresponds to predicting how a user would rate a movie. Traditional NMF algorithms assume the input data is generated from the underlying representation plus mean-zero independent Gaussian noise. However, this simplistic assumption does not hold in real-world settings that contain more complex or adversarial noise. We provide a new NMF algorithm that is more robust towards these nonstandard noise patterns. Our algorithm outperforms existing algorithms on movie rating datasets, where adversarial noise corresponds to a group of adversarial users attempting to review-bomb a movie.
Nonnegative Matrix Factorization, Matrix Completion, Recommendation, Adversarial Noise, Outlier Detection, Linear Model
For More Details :
https://aircconline.com/csit/papers/vol9/csit91601.pdf
PROPOSING A HYBRID APPROACH FOR EMOTION CLASSIFICATION USING AUDIO AND VIDEO DATA
Reza Rafeh1, Rezvan Azimi Khojasteh2, Naji Alobaidi3 1Centre for Information Technology, Waikato Institute of Technology, New Zealand 2Islamic Azad University, Hamedan, Iran 3Unitec Institute of Technology, Auckland, New Zealand
Emotion recognition has been an active research topic in the field of Human Computer Interaction (HCI) in recent years. Computers have become an inseparable part of human life, and users need human-like interaction to communicate better with them. Many researchers have become interested in emotion recognition and classification using different sources; a hybrid approach of audio and text has recently been introduced. All such approaches aim to raise the accuracy of emotion classification. In this study, a hybrid approach of audio and video is applied for emotion recognition. The novelty of this approach lies in combining selected audio and video characteristics and their features into a single specification for classification. In this research, the SVM method is used for classifying the data in the SAVEE database. The experimental results show that the maximum classification accuracy for audio data alone is 91.63%, while by applying the hybrid approach the accuracy achieved is 99.26%.
Emotion Classification, Emotions Analysis, Emotion Detection, SVM, Speech Emotion Recognition
For More Details :
https://aircconline.com/csit/papers/vol9/csit91403.pdf
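The abstract names feature-level fusion of audio and video descriptors followed by an SVM, without implementation details. The sketch below is an illustrative stand-in: per-sample audio and video feature vectors are concatenated, and a bias-free linear SVM is trained with a Pegasos-style subgradient method (the paper's feature extraction and kernel choice are unspecified; these are assumptions).

```python
import numpy as np

def fuse(audio_feats, video_feats):
    """Feature-level fusion: concatenate per-sample audio and video descriptors."""
    return np.hstack([audio_feats, video_feats])

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Pegasos-style subgradient training of a linear SVM (labels in {-1, +1}).

    Bias-free for simplicity; center the features beforehand if needed.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)
            w *= (1 - eta * lam)                 # regularization shrink
            if y[i] * (X[i] @ w) < 1:            # hinge-loss subgradient step
                w += eta * y[i] * X[i]
    return w
```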
A FACIAL RECOGNITION-BASED VIDEO ENCRYPTION APPROACH TO PREVENT DEEPFAKE VIDEOS
Alex Liang1, Yu Su2 and Fangyan Zhang3 1St. Margaret's Episcopal School, San Juan Capistrano, 2Department of Computer Science, California State Polytechnic University, Pomona, 3ASML, San Jose, CA
Deepfake is a technique for forging video with a particular purpose. There is an urgent demand for an approach that can detect whether a video has been deepfaked, which can also reduce a video's exposure to slanderous deepfakes and content theft. This paper proposes a practical tool that can encrypt and verify a video through appropriate algorithms and detect tampering accurately. Experiments in the paper show that the tool achieves this goal and can be put into practice.
Video Encryption, Video Verification, Encryption Algorithm, Decryption algorithm
For More Details :
https://aircconline.com/csit/papers/vol9/csit91317.pdf
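The abstract does not specify the encryption or verification algorithms. Purely as a sketch of one plausible verification scheme (not the paper's method), the snippet below chain-hashes the frames of a video and binds the result with a keyed HMAC, so any tampered frame invalidates the tag:

```python
import hashlib
import hmac

def sign_frames(frames, key):
    """Chain-hash each frame's bytes, then return an HMAC tag over the final digest."""
    digest = b""
    for frame in frames:
        digest = hashlib.sha256(digest + frame).digest()
    return hmac.new(key, digest, hashlib.sha256).hexdigest()

def verify_frames(frames, key, tag):
    """True iff the frames and key reproduce the tag (constant-time comparison)."""
    return hmac.compare_digest(sign_frames(frames, key), tag)
```

A real system would sign compressed frame data or perceptual hashes; raw bytes are used here only to keep the sketch self-contained.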
AN IMAGE CLASSIFICATION-BASED APPROACH TO AUTOMATE VIDEO PLAYING DETECTION AT SYSTEM LEVEL
Eric Liu1, Samuel Walcoff2, Qi Lu3 and Yu Sun4 1Arcadia High School, Arcadia, CA 2University of California, Santa Cruz, CA 3University of California, Irvine, CA 4California State Polytechnic University, Pomona, CA
Tech distraction has become a critical issue for people’s work and study productivity, particularly with the growing amount of digital content from social media sites such as YouTube. Although browser-based plug-ins are available to help block and monitor such sites, they do not work in all scenarios. In this paper, we present a system-level video playing detection engine that captures screenshots and analyzes each screenshot image using deep learning, in order to predict whether it contains a playing video. A mobile app has also been developed to enable parents to control the video playing detection remotely.
Machine learning, Tech distraction, Image classification
For More Details :
https://aircconline.com/csit/papers/vol9/csit91215.pdf
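Neither the screenshot capture nor the deep-learning classifier is detailed in the abstract. As a simplified stand-in for the pipeline's shape only, the sketch below reduces a short burst of screenshots to frame-difference statistics (video playback shows large frame-to-frame change; a static page does not) and trains a tiny logistic-regression classifier in place of the deep network. All feature choices here are assumptions.

```python
import numpy as np

def motion_features(burst):
    """Collapse a (frames, H, W) burst of screenshots into motion statistics in [0, 1]."""
    diffs = np.abs(np.diff(burst.astype(float), axis=0)) / 255.0
    return np.array([diffs.mean(), diffs.std(), (diffs > 0.1).mean()])

def train_logreg(X, y, lr=1.0, epochs=3000):
    """Tiny batch-gradient logistic regression, a stand-in for the deep classifier."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = p - y                                # gradient of the log loss
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

def predict(X, w, b):
    """1 = 'video playing', 0 = 'static screen'."""
    return (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
```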
AUTOMATIC EXTRACTION OF FEATURE LINES ON 3D SURFACE
Zhihong Mao, Ruichao Wang and Yulin Zhou Division of Intelligent Manufacturing, Wuyi University, China
Many applications in mesh processing require the detection of feature lines, which convey the inherent features of a shape. Existing techniques for finding feature lines in discrete surfaces rely on user-specified thresholds and are often inaccurate and time-consuming. We use an automatic approximation technique to estimate the optimal threshold for detecting feature lines. Several examples show that our method is effective and improves the visualization of feature lines.
Feature Lines; Extraction; Meshes
For More Details :
https://aircconline.com/csit/papers/vol9/csit90901.pdf
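The paper's "automatic approximation technique" is not specified in the abstract. One classic way to pick a feature threshold automatically is Otsu's method applied to per-edge dihedral angles: edges whose angle exceeds the threshold are marked as feature edges. The sketch below is that stand-in, not necessarily the authors' estimator.

```python
import numpy as np

def otsu_threshold(values, bins=64):
    """Pick the threshold maximizing between-class variance (Otsu's method).

    `values` would be the dihedral angles (degrees) of all mesh edges; edges
    above the returned threshold are treated as feature-line candidates.
    """
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = centers[0], -1.0
    for k in range(1, bins):
        w0, w1 = p[:k].sum(), p[k:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (p[:k] * centers[:k]).sum() / w0
        mu1 = (p[k:] * centers[k:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2      # between-class variance
        if var > best_var:
            best_var, best_t = var, centers[k]
    return best_t
```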
A SURVEY OF STATE-OF-THE-ART GAN-BASED APPROACHES TO IMAGE SYNTHESIS
Shirin Nasr Esfahani1 and Shahram Latifi2 1,2 UNLV, Las Vegas, USA
In the past few years, Generative Adversarial Networks (GANs) have received immense attention from researchers in a variety of application domains. This new field of deep learning has been growing rapidly and has provided a way to learn deep representations without extensive use of annotated training data. Its achievements may be used in a variety of applications, including speech synthesis, image and video generation, semantic image editing, and style transfer. Image synthesis is an important component of expert systems and has attracted much attention since the introduction of GANs. However, GANs are known to be difficult to train, especially when they try to generate high-resolution images. This paper gives a thorough overview of state-of-the-art GAN-based approaches in four applicable areas of image generation: text-to-image synthesis, image-to-image translation, face aging, and 3D image synthesis. Experimental results show state-of-the-art performance of GANs compared to traditional approaches in the fields of image processing and machine vision.
Conditional generative adversarial networks (cGANs), image synthesis, image-to-image translation, text-to-image synthesis, 3D GANs.
For More Details :
https://aircconline.com/csit/papers/vol9/csit90906.pdf
BLIND IMAGE QUALITY ASSESSMENT USING SINGULAR VALUE DECOMPOSITION BASED DOMINANT EIGENVECTORS FOR FEATURE SELECTION
Besma Sadou1, Atidel Lahoulou2*, Toufik Bouden1, Anderson R. Avila3, Tiago H. Falk3, Zahid Akhtar4 1University of Jijel, Algeria 2University of Jijel, Algeria 3University of Québec, Montreal, Canada 4University of Memphis, USA
In this paper, a new no-reference image quality assessment (NR-IQA) metric for grey images is proposed and evaluated on the LIVE II image database. The features used are extracted from three well-known NR-IQA objective metrics based on natural scene statistical attributes from three different domains. These metrics may contain redundant, noisy or less informative features, which affect the quality score prediction. In order to overcome this drawback, the first step of our work consists in selecting the most relevant image quality features using Singular Value Decomposition (SVD) based dominant eigenvectors. The second step employs a Relevance Vector Machine (RVM) to learn the mapping between the selected features and human opinion scores. Simulations demonstrate that the proposed metric performs very well in terms of correlation and monotonicity.
Natural Scene Statistics (NSS), Singular Value Decomposition (SVD), dominant eigenvectors, Relevance Vector Machine (RVM).
For More Details :
https://aircconline.com/csit/papers/vol9/csit90919.pdf
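The abstract leaves the selection rule unspecified. A minimal sketch of one SVD-based variant (an assumption, not necessarily the authors' exact rule): rank the feature columns by the magnitude of their loading on the dominant right singular vector of the centered feature matrix, and keep the top ones for the regression stage.

```python
import numpy as np

def select_features(F, n_keep):
    """Rank features by their loading on the dominant right singular vector.

    F: (n_images, n_features) matrix of NSS-based quality features.
    Returns the indices of the n_keep most relevant columns.
    """
    centered = F - F.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    scores = np.abs(vt[0])                 # loadings on the dominant eigenvector
    return np.argsort(scores)[::-1][:n_keep]
```

The selected columns would then feed the RVM regressor described in the abstract.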
VULNERABILITY ANALYSIS OF IP CAMERAS USING ARP POISONING
Thomas Doughty1, Nauman Israr2 and Usman Adeel3 1,2,3Teesside University, Middlesbrough, UK
Internet Protocol (IP) cameras and Internet of Things (IoT) devices are known for their vulnerabilities, and Man in the Middle attacks present a significant privacy and security concern. Because these attacks are easy to perform and highly effective, they allow attackers to steal information and disrupt access to services. We evaluate the security of six IP cameras by performing and outlining various attacks which can be used by criminals. A threat scenario is used to describe how a criminal may attack cameras before and during a burglary. Our findings show that IP cameras remain vulnerable to ARP Poisoning or Spoofing, and while some cameras use Digest Authentication to obfuscate passwords, some vendors and applications remain insecure. We suggest methods to prevent ARP Poisoning, and reiterate the need for good password policy.
Security, Camera, Internet of Things, Passwords, Sniffing, Authentication
For More Details :
https://aircconline.com/csit/papers/vol9/csit90712.pdf
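The paper evaluates attacks and suggests preventions; as a small defensive illustration (not taken from the paper), the sketch below applies a common ARP-poisoning heuristic: a single MAC address that claims more than one IP (typically the gateway plus a victim) is suspicious. The input format, e.g. pairs parsed from `arp -a` output, is an assumption.

```python
from collections import defaultdict

def detect_arp_spoofing(arp_entries):
    """Flag MAC addresses that claim more than one IP, a common poisoning sign.

    arp_entries: iterable of (ip, mac) pairs, e.g. parsed from the host's ARP
    table. Returns {mac: [ips...]} for every suspicious MAC (case-insensitive).
    """
    by_mac = defaultdict(set)
    for ip, mac in arp_entries:
        by_mac[mac.lower()].add(ip)
    return {mac: sorted(ips) for mac, ips in by_mac.items() if len(ips) > 1}
```

Static ARP entries for the gateway, or switch-level dynamic ARP inspection, are the stronger preventions alluded to in the paper.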
BRAIN COMPUTER INTERFACE FOR BIOMETRIC AUTHENTICATION BY RECORDING SIGNAL
Abd Abrahim Mosslah1, Reyadh Hazim Mahdi2 and Shokhan M. AlBarzinji3 1University of Anbar, Anbar, Iraq 2University of Mustansiriyah, Baghdad, Iraq 3College of Computer Science and Information Technology, University of Anbar
Electroencephalography (EEG) records the brain's electrical activity in several ways; the recorded signals, referred to as brainwaves, are interpreted by scientists as an electromagnetic phenomenon reflecting activity in the human brain. EEG is used to diagnose brain diseases such as schizophrenia, epilepsy, Parkinson's and Alzheimer's, and is also used in brain-machine and brain-computer interfaces, where wireless recording of these waves is necessary. What is needed today is authentication. Authentication can be obtained with several techniques; in this paper we examine the efficiency of techniques such as passwords and PINs. There are also biometric techniques used for authentication, such as heart rate, fingerprint, iris and voice, which give acceptable authentication. To obtain an integrated and efficient authentication technology, we use brainwave recording. The aim of the technique proposed in this paper is to improve the efficiency of recording the brain's waves and to provide authentication.
Related work, EEG brain signal, Brain wave, Overall project outline, System requirements
For More Details :
https://aircconline.com/csit/papers/vol9/csit90613.pdf
METHOD FOR THE DETECTION OF CARRIER-IN-CARRIER SIGNALS BASED ON FOURTH-ORDER CUMULANTS
Vasyl Semenov1, Pavel Omelchenko1 and Oleh Kruhlyk1 1Department of Algorithms, Delta SPE LLC, Kiev, Ukraine
The method for the detection of Carrier-in-Carrier signals based on the calculation of fourth-order cumulants is proposed. In accordance with the methodology based on the “Area under the curve” (AUC) parameter, a threshold value for the decision rule is established. It was found that the proposed method provides the correct detection of the sum of QPSK signals for a wide range of signal-to-noise ratios. The obtained AUC value indicates the high efficiency of the proposed detection method. The advantage of the proposed detection method over the “radiuses” method is also shown.
Carrier-in-Carrier, Cumulants, QPSK
For More Details :
https://aircconline.com/csit/papers/vol9/csit90503.pdf
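The abstract names the statistic but not its form. A standard choice is the normalized fourth-order cumulant C42: for a single unit-power QPSK signal it equals −1, for an equal-power sum of two independent QPSK signals it is −0.5, and for Gaussian noise it is 0, so a threshold between −1 and −0.5 separates the two hypotheses. The sketch below uses a fixed illustrative threshold in place of the paper's AUC-derived one.

```python
import numpy as np

def c42(x):
    """Sample fourth-order cumulant C42 of a zero-mean complex signal,
    normalized by the squared signal power."""
    m2 = np.mean(np.abs(x) ** 2)
    m4 = np.mean(np.abs(x) ** 4)
    c20 = np.mean(x ** 2)
    return (m4 - np.abs(c20) ** 2 - 2 * m2 ** 2) / m2 ** 2

def qpsk(n, rng):
    """Unit-power QPSK symbol stream."""
    return (rng.choice([-1.0, 1.0], n) + 1j * rng.choice([-1.0, 1.0], n)) / np.sqrt(2)

def detect_carrier_in_carrier(x, threshold=-0.75):
    """Decide 'sum of two signals' when C42 rises above the single-QPSK level."""
    return c42(x) > threshold
```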
A DFG PROCESSOR IMPLEMENTATION FOR DIGITAL SIGNAL PROCESSING APPLICATIONS
Ali Shatnawi, Osama Al-Khaleel and Hala Alzoubi Jordan University of Science and Technology, Irbid, Jordan
This paper proposes a new scheduling technique for digital signal processing (DSP) applications represented by data flow graphs (DFGs), together with a hardware implementation in the form of a specialized embedded system. The scheduling technique achieves the optimal schedule of a given DFG at design time. The optimality criterion targeted by the proposed algorithm is the maximum throughput that can be achieved with the available hardware resources. Each task is presented in the form of an instruction to be executed on the available hardware. The architecture is composed of one or multiple homogeneous pipelined processing elements, designed to achieve the maximum possible sampling rate for several DSP applications. In this paper, we present a processor implementation of the proposed architecture: it comprises one processing element on which all tasks are processed sequentially. The hardware components are built on an FPGA chip using Verilog HDL. The architecture requires a very small area, measured by the number of slice registers and the number of slice lookup tables (LUTs). The proposed scheduling technique is shown to outperform the retiming technique proposed in the literature by 19.3%.
Data Flow Graphs, Task Scheduling, Processor Design, Hardware Description Language
For More Details :
https://aircconline.com/csit/papers/vol9/csit90402.pdf
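The paper's optimal scheduler is not described in the abstract; the sketch below shows only the general flavor of DFG scheduling on a single pipelined processing element, using a simple list-scheduling heuristic (issue one instruction per cycle, pick the ready task whose operands arrive earliest). The latency model and tie-breaking are assumptions, not the paper's algorithm.

```python
def schedule_dfg(num_tasks, edges, latency=2):
    """List-schedule a DFG on one pipelined PE.

    edges: (u, v) pairs meaning task v consumes task u's result. The PE
    issues at most one task per cycle; each task takes `latency` cycles.
    Returns (start_times, makespan).
    """
    preds = {i: [] for i in range(num_tasks)}
    succs = {i: [] for i in range(num_tasks)}
    for u, v in edges:
        preds[v].append(u)
        succs[u].append(v)
    indeg = {i: len(preds[i]) for i in range(num_tasks)}
    ready = [i for i in range(num_tasks) if indeg[i] == 0]
    start, finish = {}, {}
    pe_free = 0                                   # next cycle the PE can issue
    while ready:
        # heuristic: issue the ready task whose operands arrive earliest
        ready.sort(key=lambda t: max((finish[p] for p in preds[t]), default=0))
        t = ready.pop(0)
        data_ready = max((finish[p] for p in preds[t]), default=0)
        start[t] = max(pe_free, data_ready)
        finish[t] = start[t] + latency
        pe_free = start[t] + 1                    # pipelined: one issue per cycle
        for s in succs[t]:
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return start, max(finish.values())
```

Independent tasks overlap in the pipeline, while a dependence chain serializes at full latency, which is exactly the throughput trade-off the paper's scheduler optimizes.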
OCCLUSION HANDLED BLOCK-BASED STEREO MATCHING WITH IMAGE SEGMENTATION
Jisu Kim, Cheolhyeong Park, Ju O Kim and Deokwoo Lee Keimyung University, Republic of Korea
This paper deals chiefly with stereo vision techniques, focusing particularly on the stereo matching procedure; in addition, the proposed approach detects regions of occlusion. Prior to stereo matching, image segmentation is conducted in order to achieve precise matching results. In practice, stereo matching algorithms can suffer from insufficient accuracy when occlusion is inherent in the scene of interest. Matching regions are searched based on cross correlation and on finding the region with the minimum mean square error between the areas of interest defined in the matching window. The Middlebury dataset is used for experiments and comparison with existing results, and the proposed algorithm shows better performance than existing matching algorithms. To evaluate the proposed algorithm, we compare its disparity results to existing ones.
Occlusion, Stereo vision, Segmentation, Matching
For More Details :
https://airccj.org/CSCP/vol9/csit90303.pdf
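The core matching step described in the abstract, minimizing the error between blocks along the scanline of a rectified pair, can be sketched as plain SSD block matching (SSD is the MSE criterion up to a constant factor; the paper's segmentation and occlusion handling are omitted here):

```python
import numpy as np

def block_match(left, right, block=3, max_disp=8):
    """Per-pixel disparity by minimizing SSD between left/right blocks.

    Assumes a rectified pair, so candidates lie on the same scanline.
    Border pixels keep disparity 0.
    """
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=int)
    for y in range(half, h - half):
        for x in range(half, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1].astype(float)
            best, best_d = np.inf, 0
            for d in range(min(max_disp, x - half) + 1):
                cand = right[y - half:y + half + 1, x - d - half:x - d + half + 1]
                ssd = np.sum((patch - cand) ** 2)     # MSE criterion up to scaling
                if ssd < best:
                    best, best_d = ssd, d
            disp[y, x] = best_d
    return disp
```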
ORDER PRESERVING STREAM PROCESSING IN FOG COMPUTING ARCHITECTURES
Vidyasankar, Memorial University of Newfoundland, St. John’s, Newfoundland, Canada
A Fog Computing architecture consists of edge nodes that generate and possibly pre-process (sensor) data, fog nodes that do some processing quickly and do any actuations that may be needed, and cloud nodes that may perform further detailed analysis for long-term and archival purposes. Processing of a batch of input data is distributed into sub-computations which are executed at the different nodes of the architecture. In many applications, the computations are expected to preserve the order in which the batches arrive at the sources. In this paper, we discuss mechanisms for performing the computations at a node in correct order, by storing some batches temporarily and/or dropping some batches. The former option causes a delay in processing and the latter option affects Quality of Service (QoS). We bring out the tradeoffs between processing delay and storage capabilities of the nodes, and also between QoS and the storage capabilities.
Fog computing, Order preserving computations, Quality of Service
For More Details :
https://airccj.org/CSCP/vol9/csit90104.pdf
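The store-or-drop mechanism discussed in the abstract can be sketched as a node with a bounded reorder buffer: in-order batches are processed immediately, out-of-order batches are buffered until their predecessors arrive, and arrivals that find the buffer full are dropped (trading QoS for storage). The drop policy and buffer shape below are illustrative assumptions, not the paper's specific mechanisms.

```python
class OrderPreservingNode:
    """Process batches in source order with bounded out-of-order storage."""

    def __init__(self, capacity):
        self.capacity = capacity      # max batches stored out of order
        self.buffer = {}              # seq -> batch, waiting for predecessors
        self.next_seq = 0             # next sequence number to process
        self.processed = []
        self.dropped = []             # QoS loss: batches we could not keep

    def receive(self, seq, batch):
        if seq < self.next_seq:       # late duplicate: already past this point
            self.dropped.append(seq)
        elif seq == self.next_seq:
            self._process(seq, batch)
            # drain any buffered successors that are now in order
            while self.next_seq in self.buffer:
                self._process(self.next_seq, self.buffer.pop(self.next_seq))
        elif len(self.buffer) < self.capacity:
            self.buffer[seq] = batch
        else:                         # storage exhausted: drop, affecting QoS
            self.dropped.append(seq)

    def _process(self, seq, batch):
        self.processed.append(seq)
        self.next_seq = seq + 1
```

A dropped batch leaves a permanent gap, so a real node would also need a timeout or skip rule to advance past it; that refinement is omitted here.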