Abstract:Dissemination of hate speech on social media platforms such as Twitter has increased steadily as Internet usage grows throughout the world. There is a plethora of research on machine learning algorithms for automated hate speech detection. Natural language presents many ambiguities that are difficult for machines to resolve; for instance, the context of a discussion determines the semantics of its interpretation. Consequently, there has been a great deal of work on this problem. In recent years, deep learning has shown promising results but requires vast amounts of training data. The major limitation of classical algorithms, on the other hand, stems from high variance. This challenge can be addressed by harnessing the strengths of different methods in an ensemble. In this paper, we present a voting ensemble method that combines Logistic Regression (LR), Support Vector Machine (SVM) and Decision Tree (DT) base classifiers for the task of hate speech detection. The aim of this paper is to show the superior performance of the voting ensemble compared with ten state-of-the-art machine learning algorithms. Experimental results show that the voting ensemble outperformed both the deep learning and the classical algorithms on eight popular performance evaluation metrics.
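The voting mechanism described above can be sketched minimally as hard (majority) voting over the base classifiers' predictions. This is a generic illustration, not the authors' implementation; the three prediction arrays stand in for trained LR, SVM and DT models.

```python
import numpy as np

def hard_vote(predictions):
    """Majority vote over base-classifier outputs.
    predictions: (n_classifiers, n_samples) array of integer class labels."""
    P = np.asarray(predictions)
    # for each sample (column), pick the label with the most votes
    return np.array([np.bincount(col).argmax() for col in P.T])

# toy predictions from three hypothetical base classifiers (1 = hate speech)
preds = np.array([
    [1, 0, 0],   # e.g. Logistic Regression
    [1, 1, 0],   # e.g. SVM
    [0, 0, 1],   # e.g. Decision Tree
])
ensemble = hard_vote(preds)
```

With ties broken toward the smaller label by `argmax`, each sample's ensemble label is simply the majority of the three base predictions.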
Abstract:The ability of many bacteria to adhere to host surfaces and form biofilms has major implications in a wide variety of industries, including the food industry, where biofilms can create a persistent source of contamination. Under the same environmental conditions, multiple bacterial species can interact closely with one another and thereby enhance their drug-resistance capability, ultimately increasing the multi-drug-resistant (MDR) character of the species. The present study examined whether mixed-species biofilms have any impact on the enhancement of the antibiotic resistance of the planktonic or single-cell bacterial isolates present in fish samples. To this end, Cyprinus rubrofuscus (Koi), Heteropneustes fossilis (Shing) and Mystus vittatus (Tengra) fish were collected and used to form in vitro biofilms under shaking conditions in a water bath. The drug-resistance pattern was determined by the Kirby-Bauer technique. All the samples exhibited a huge array (up to 10^7 cfu/ml or cfu/g) of bacteria such as E. coli, Klebsiella spp., Vibrio spp., Salmonella spp., Proteus spp. and Staphylococcus spp. The isolates from both the bulk samples and their corresponding biofilms were subjected to an antibiogram assay using the antibiotics Ampicillin (10 µg), Erythromycin (15 µg), Streptomycin (STP, 10 µg), Oxacillin (10 µg) and Nalidixic acid (30 µg). Before biofilm formation, some of the isolates were found to be sensitive and some resistant to these antibiotics; when the species were isolated from the biofilm, however, the sensitive isolates had acquired drug resistance and the resistant strains showed even greater resistance to the same antibiotics. The present study revealed extensive bacterial contamination in the fish samples, some of which were resistant to the supplied drugs.
After the formation of multi-species biofilms, the isolates became more resistant to the same drugs, which is alarming for consumers and a major obstacle to maintaining sustainable health.
Abstract:Accomplishing the perception of spoken words has turned out to be an extremely difficult task. The way spoken words are produced, whether separately or in the different contexts of a larger utterance, relies on a large involvement of the neural network. Attempts at the automatic processing of texts - for example, automatic translation or machine summarization - demand that the coherence and readability of the output text be enhanced in a post-processing phase. On the other hand, among other computational models, evolutionary computing, behaving similarly to natural processes, transforms (evolves) a collection of candidate solutions toward an acceptable solution for a given problem. The present paper intends to draw the reader's attention to the role of computer applications in processing natural language, which interacts, changes and continuously evolves in various directions. The article examines the possibilities of engaging computer applications, such as genetic algorithms, in natural language processing.
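The idea of evolving a collection of candidates toward an acceptable solution can be illustrated with a toy genetic algorithm. This is a generic sketch, not the paper's method; the target string, fitness function and mutation rate are all illustrative assumptions.

```python
import random

TARGET = "natural language"          # toy goal string (illustrative only)
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(candidate):
    # number of characters matching the target at the same position
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    # randomly replace each character with a small probability
    return "".join(random.choice(ALPHABET) if random.random() < rate else ch
                   for ch in candidate)

def evolve(pop_size=100, generations=200, seed=0):
    random.seed(seed)
    population = ["".join(random.choice(ALPHABET) for _ in TARGET)
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 5]      # selection: keep fittest 20%
        population = parents + [mutate(random.choice(parents))
                                for _ in range(pop_size - len(parents))]
    return max(population, key=fitness)

best = evolve()
```

Because the fittest individuals are carried over unchanged (elitism), the best fitness in the population never decreases from one generation to the next.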
Abstract:Orally disintegrating tablets have become very popular recently owing to their easy manufacture and high patient compliance. In the preparation of orally disintegrating tablets, excipients suitable for direct compression are preferred. Mannitol is one of the most widely used excipients because of its directly compressible forms and lower cost. The powder flow and tabletting properties of the excipients used are significant for final product quality. Thirty-four different formulations were prepared, and the powder properties and tablet characteristics of two different particle sizes of mannitol (Parteck M 100 and Parteck M 200) were compared. The powder properties of the formulations - bulk density, tapped density, compressibility index (%) and Hausner ratio - were analysed. Parteck M 200 formulations, which have a larger particle size, showed better powder flow properties than Parteck M 100. Tablet characterization tests - disintegration time, friability (%), hardness and dissolution - were performed. Parteck M 100 formulations, with a smaller particle size, provided tablets with a shorter disintegration time and a higher dissolution rate. Statistical analysis of the findings shows how excipient particle size affects powder flow properties and tablet characteristics.
Abstract:The introduction of machine learning (ML) has enabled financial institutions to use historical credit card data to learn patterns that distinguish fraudulent from legitimate transactions. ML methods use large volumes of data to train their models. In real life, the number of legitimate transactions recorded far outweighs the number of fraudulent transactions, a situation known as an imbalanced distribution. Failure to handle the imbalanced transactions compromises the integrity and predictive ability of a machine learning system, resulting in potentially high financial loss. Therefore, the aim of this paper was to outline the key differences in fraud-detection performance when using a data-point technique. An experiment was conducted using ML fraud detection models and over-sampling with SMOTE. Performance was evaluated using the standard metrics and the Area Under the Precision-Recall Curve (AUPRC) to assess whether accuracy in detecting the positive class improved. The results showed that the precision score for the positive class improved for Support Vector Machines (SVM), Logistic Regression, Decision Tree and Random Forest after over-sampling was applied to the imbalanced dataset. The findings motivate further research on the data-point approach as a solution to the misclassification problem, in order to improve the accuracy of machine learning fraud detection models.
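The SMOTE idea referred to above can be sketched minimally: synthetic minority samples are created by interpolating each minority point toward one of its nearest minority neighbours. This is a generic illustration in plain numpy, not the paper's experimental setup; the feature data and class sizes are toy assumptions.

```python
import numpy as np

def smote_oversample(X_min, n_new, k=5, seed=0):
    """Generate n_new synthetic minority samples by interpolating each chosen
    sample toward one of its k nearest minority neighbours (SMOTE-style)."""
    rng = np.random.default_rng(seed)
    X_min = np.asarray(X_min, dtype=float)
    # pairwise distances within the minority class
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                 # exclude self as a neighbour
    neighbours = np.argsort(d, axis=1)[:, :k]
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        j = neighbours[i, rng.integers(min(k, len(X_min) - 1))]
        lam = rng.random()                      # interpolation factor in [0, 1)
        synthetic.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(synthetic)

# toy imbalance: 10 fraudulent samples to be brought up to 100 via 90 synthetics
rng = np.random.default_rng(1)
X_fraud = rng.normal(5.0, 1.0, size=(10, 2))
X_new = smote_oversample(X_fraud, n_new=90)
```

Because each synthetic point lies on a segment between two existing minority points, the new samples stay inside the minority class's region of feature space rather than duplicating existing rows.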
Abstract:Differential equations describing the motor's dynamic performance are considered in the development of the transfer function equations, and their Laplace transforms are obtained for further investigation. For the sake of reliability, typical data for two DC motor models are incorporated into a Matlab/Simulink program to demonstrate the elementary time-domain performance of the system. A lead-lag compensator is applied as the control method to control the speed of the DC motor. The nonlinear programming optimization algorithm fmincon, which "computes a constrained minimum of an objective function of a number of variables starting at a preliminary estimate", is used in the optimization routine to find the best locations for the added pole and zero and thereby obtain the optimal parameters of the designed lead-lag compensator. A standard step test signal is used as the desired input speed to study the effectiveness of the proposed controller based on the performance of the system. To validate the controller's tracking of speed variations, a reference signal with unit-step speed changes is included in the simulation studies, and the results obtained for the performance of the optimized lead-lag compensator in tracking speed changes are presented in figures and tables. Matlab and Simulink are used to carry out the simulation runs; the achieved results were scrutinized and discussed, and it is concluded that good outcomes were reached with the optimal controller parameters.
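The paper tunes the compensator with Matlab's fmincon; an analogous routine can be sketched in Python with scipy. The DC motor parameters, the compensator starting point and the ISE cost below are illustrative assumptions, not the paper's values.

```python
import numpy as np
from scipy import optimize, signal

# Assumed (illustrative) DC motor parameters -- not the paper's values.
J, b, K, R, L = 0.01, 0.1, 0.01, 1.0, 0.5
num_g = [K]                                               # G(s) numerator
den_g = np.polyadd(np.polymul([J, b], [L, R]), [K * K])   # (Js+b)(Ls+R)+K^2

t = np.linspace(0, 5, 500)

def step_cost(params):
    """ISE of the closed-loop unit-step response for the compensator
    C(s) = Kc (s + z) / (s + p) in a unity-feedback loop with G(s)."""
    Kc, z, p = params
    if Kc <= 0 or z <= 0 or p <= 0:
        return 1e6                                # crude positivity constraint
    num_c, den_c = [Kc, Kc * z], [1.0, p]
    num_ol = np.polymul(num_c, num_g)
    den_ol = np.polymul(den_c, den_g)
    den_cl = np.polyadd(den_ol, num_ol)           # closed-loop denominator
    _, y = signal.step((num_ol, den_cl), T=t)
    return float(np.sum((1.0 - y) ** 2) * (t[1] - t[0]))

# search for the best gain, zero and pole from a preliminary estimate
x0 = np.array([1.0, 1.0, 10.0])
res = optimize.minimize(step_cost, x0, method="Nelder-Mead")
```

Nelder-Mead is a derivative-free stand-in for fmincon here; the optimized cost can never exceed the cost at the starting point, which mirrors the paper's idea of refining a preliminary pole/zero placement.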
Abstract:The purpose of the current study is to examine and understand consumers' decision-making when purchasing environmentally friendly electric vehicles (EVs). We employ the Theory of Planned Behavior (TPB) as our base model and include additional significant determinants - price perception, range anxiety, interpersonal trust, and government incentives - chosen to suit the purpose of the current research. Moreover, the role of product knowledge as a moderator is also explored on the relationships between attitude and intention, subjective norm and intention, and perceived behavioral control and consumers' intention to buy EVs, since the level of consumers' product knowledge can influence their intention to buy an environmentally friendly EV. We expect to collect a substantial number of responses (around 300-350) to examine the proposed research framework and hypotheses. We are confident that the findings of the current study will contribute to the literature on the adoption of innovative products such as EVs. Additionally, we hope to bridge the gap in the literature by exploring product knowledge as a moderating factor in the TPB model.
Abstract:Traffic congestion and road fatalities have become an intrinsic problem in modern society, despite the global road safety sustainable development goal of stabilizing the rising level of road traffic fatalities by the year 2020. The challenges that road traffic accidents pose for a developing nation stem from rapid changes in socio-economic conditions, culture and resource management. The main objective of this article is to analyse the car accident rate in South Africa using multiple correspondence analysis and to determine the major factors that contributed to the occurrence of South African road accidents in 2016 and 2017. Multiple correspondence analysis, together with dimension and qualitative variable analysis, identified the major contributing factors based on the coefficient of determination and affirmed that, with n = 360, the variables that contributed most to road accident occurrence in South Africa in the years considered were speed, location and occasion. For validation, a 95% confidence ellipse was established for each level of the variables used. This study provides useful and informative analysis that can aid informed decision-making by the private and public authorities, stakeholders and insurance companies concerned, in order to reduce road accident occurrence and fatalities in South Africa.
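The core of multiple correspondence analysis can be sketched as a singular value decomposition of the standardized residuals of a one-hot indicator matrix. This is a minimal generic sketch, not the study's analysis; the toy variables (speed, location) and their coding are illustrative assumptions.

```python
import numpy as np

def mca_coordinates(indicator, n_dims=2):
    """Row coordinates from multiple correspondence analysis: an SVD of the
    standardized residuals of the indicator (one-hot) matrix."""
    Z = np.asarray(indicator, dtype=float)
    P = Z / Z.sum()                          # correspondence matrix
    r = P.sum(axis=1)                        # row masses
    c = P.sum(axis=0)                        # column masses
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))  # standardized residuals
    U, sing, _ = np.linalg.svd(S, full_matrices=False)
    # principal row coordinates on the first n_dims axes
    return (U[:, :n_dims] * sing[:n_dims]) / np.sqrt(r)[:, None]

# toy data: 6 accidents coded on two categorical variables,
# speed = {low, high} and location = {urban, rural}, one-hot encoded
Z = np.array([
    [1, 0, 1, 0],   # low speed,  urban
    [1, 0, 1, 0],
    [0, 1, 0, 1],   # high speed, rural
    [0, 1, 0, 1],
    [1, 0, 0, 1],   # low speed,  rural
    [0, 1, 1, 0],   # high speed, urban
])
coords = mca_coordinates(Z)
```

Accidents with identical category profiles land at identical coordinates, which is what lets the analysis group observations by level and support the confidence ellipses mentioned above.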
Abstract:Channel zapping delays are inconveniences experienced by subscribers, a major challenge in IPTV channel-switching systems and an inherent item on users' quality of experience (QoE) lists. A range of techniques attempting to combat this challenge has been proposed and implemented; this study presents a meta-analysis of the best of the various current methods. The extraction of articles was designed following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). The included articles were retrieved from the following databases: Google Scholar and Web of Science. All statistical analyses were performed with STATA version 15 using the random-effects model. The overall pooled estimated delay component is presented in forest plots. In all, thirteen studies were included in the meta-analysis, and the overall pooled estimate was 51.0% (95% CI: 11.88% to 220.8%). In addition, the subgroup analysis affirmed that the estimated delay component found in studies conducted using 'all' delay components was 221144.86% (95% CI: 36806.32% to 405483.41%), while in those using 'network' components it was 20.17% (95% CI: -27.14% to 67.49%). Experimental studies have shown that virtual elimination of IPTV zapping delay is possible for a relevant share of channel-switching requests; emphatically, our analysis identifies the studies that captured 'network' delay components as the best techniques. They are capable of proper delay management and reduce channel-switching delay more than studies with 'all' delay components.
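The random-effects pooling performed in STATA can be sketched with the DerSimonian-Laird estimator: per-study effects are reweighted by their within-study variance plus an estimated between-study variance. This is a generic sketch, not a reproduction of the study's output; the per-study effects and variances below are toy numbers.

```python
import math

def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects meta-analysis: pooled estimate and
    its 95% confidence interval from per-study effects and variances."""
    k = len(effects)
    w = [1.0 / v for v in variances]                 # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sw
    # Cochran's Q and the between-study variance tau^2
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)
    w_re = [1.0 / (v + tau2) for v in variances]     # random-effects weights
    sw_re = sum(w_re)
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sw_re
    se = math.sqrt(1.0 / sw_re)
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# toy per-study delay estimates (%) and their variances -- illustrative only
pooled, ci = random_effects_pool([40.0, 55.0, 60.0, 48.0], [9.0, 16.0, 25.0, 12.0])
```

Adding tau^2 to each study's variance widens the confidence interval relative to a fixed-effect analysis, reflecting the heterogeneity that motivates the random-effects model in this kind of review.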
Abstract:This paper shows the influence of histogram segmentation on the quality of data discretization. The possibility of classifying histograms into types has been researched, as well as the influence of a given histogram type on the discretization. An entropy algorithm is presented, as well as the MD algorithm. A reduct of the data set, obtained on the basis of rough set theory, was examined with respect to histogram type. The reduct contains the attributes that enable a description of the entire database and generate the decision rules. The positions of the reduct attributes' cuts were examined in relation to multimodal histogram segmentation. The precision of the classification rules obtained from the reduct can be estimated on the basis of consistency. The interaction between the data histograms, the reduct cuts and the consistency of the classification rules has been researched. The reduct attributes have more irregular histograms than the attributes outside the reduct, and the histograms of the reduct attributes have a direct impact on the consistency of the classification rules. This article presents a model for determining the segmentation threshold based on the entropy algorithm. A closely related FixedPoints algorithm enabling cut selection is constructed. Application to the selected database shows the benefits of selecting cuts on the basis of histogram segmentation.
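The entropy-based choice of a discretization cut can be illustrated with a minimal supervised split: among the candidate boundaries of a numeric attribute, pick the one that minimizes the weighted class entropy of the two resulting intervals. This is a generic sketch of entropy-based cut selection, not the paper's exact entropy, MD or FixedPoints algorithms; the attribute values and labels are toy data.

```python
import math

def entropy(labels):
    # Shannon entropy of a list of class labels
    n = len(labels)
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def best_entropy_cut(values, labels):
    """Pick the cut on a numeric attribute that minimizes the weighted
    class entropy of the two resulting intervals."""
    pairs = sorted(zip(values, labels))
    xs = [v for v, _ in pairs]
    ys = [y for _, y in pairs]
    best_cut, best_h = None, float("inf")
    for i in range(1, len(xs)):
        if xs[i] == xs[i - 1]:
            continue                      # no cut between equal values
        cut = (xs[i] + xs[i - 1]) / 2     # midpoint candidate boundary
        left, right = ys[:i], ys[i:]
        h = (len(left) * entropy(left) + len(right) * entropy(right)) / len(ys)
        if h < best_h:
            best_cut, best_h = cut, h
    return best_cut

# two well-separated value clusters with distinct decision classes
values = [1.0, 1.2, 1.4, 5.0, 5.2, 5.5]
labels = [0, 0, 0, 1, 1, 1]
cut = best_entropy_cut(values, labels)
```

On this toy attribute the histogram is clearly bimodal, and the entropy criterion places the cut in the gap between the two modes, which is exactly the behaviour the segmentation-based cut selection above aims for.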