A New Outlier Detection Algorithm Based on Fast Density Peak Clustering Outlier Factor

Outlier detection is an important field in data mining, with applications in fraud detection, fault detection, and other areas. This article addresses two shortcomings of the density peak clustering algorithm: its need for manually set parameters and its high time complexity. First, the density estimate of density peak clustering is replaced with a k-nearest-neighbor estimate, where the k nearest neighbors of each data object are computed with a KD-Tree index structure. Cluster centers are then selected automatically using the product of density and distance. In addition, the central relative distance and the fast density peak clustering outlier factor are defined to characterize the outlier degree of data objects, and an outlier detection algorithm based on this factor is devised. Experiments on artificial and real data sets validate the effectiveness and time efficiency of the proposed algorithm in comparison with several conventional and recent algorithms.
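
As a rough illustration of the pipeline described above, the following snippet estimates density from the k nearest neighbors retrieved through a KD-Tree and ranks candidate cluster centers by the product of density and distance. It is a minimal sketch, not the authors' implementation: the function names, the fixed number of centers, and the parameter values are all illustrative.

```python
# A minimal sketch, assuming Euclidean data in a NumPy array; names and
# parameters are illustrative, not the authors' implementation.
import numpy as np
from sklearn.neighbors import KDTree

def knn_density(X, k=10):
    """Density as the inverse mean distance to the k nearest neighbors."""
    tree = KDTree(X)                          # KD-Tree index over the data
    dist, _ = tree.query(X, k=k + 1)          # first neighbor is the point itself
    return 1.0 / (dist[:, 1:].mean(axis=1) + 1e-12)

def delta_distance(X, rho):
    """For each point, the distance to the nearest point of higher density."""
    n = len(X)
    delta = np.full(n, np.inf)
    order = np.argsort(-rho)                  # indices by descending density
    for i in range(1, n):
        p = order[i]
        d = np.linalg.norm(X[order[:i]] - X[p], axis=1)
        delta[p] = d.min()
    delta[order[0]] = delta[order[1:]].max()  # densest point: usual convention
    return delta

def select_centers(X, k=10, n_centers=3):
    """Rank candidate centers by gamma = density * distance. The paper selects
    centers automatically from this product; fixing their number here only
    keeps the sketch short."""
    rho = knn_density(X, k)
    gamma = rho * delta_distance(X, rho)
    return np.argsort(-gamma)[:n_centers]
```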

RETAD

Due to the rapid advancement of wireless sensor and location technologies, a large amount of mobile agent trajectory data has become available. Trajectory anomaly detection benefits applications such as intelligent city systems and video surveillance. The authors propose an unsupervised reconstruction error-based trajectory anomaly detection (RETAD) method for vehicles to address the shortcomings of conventional anomaly detection: difficulty in extracting features, susceptibility to overfitting, and poor detection performance. RETAD reconstructs the original vehicle trajectories with an autoencoder based on recurrent neural networks; by minimizing the gap between the reconstruction results and the initial inputs, the model learns the movement patterns of normal trajectories. Trajectories whose reconstruction error exceeds the anomaly threshold are flagged as anomalous. Experimental results demonstrate that RETAD outperforms traditional distance-based, density-based, and machine learning classification algorithms on multiple metrics.
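
The following sketch shows the general recipe RETAD builds on: a recurrent autoencoder is trained to reconstruct trajectories, and the mean squared reconstruction error serves as the anomaly score. The architecture, dimensions, and thresholding rule are assumptions for illustration, not the authors' exact model.

```python
# A minimal sketch, assuming fixed-length trajectories of (x, y) points in a
# tensor of shape (batch, seq_len, 2); architecture sizes are illustrative.
import torch
import torch.nn as nn

class TrajectoryAutoencoder(nn.Module):
    def __init__(self, input_dim=2, hidden_dim=32):
        super().__init__()
        self.encoder = nn.GRU(input_dim, hidden_dim, batch_first=True)
        self.decoder = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, input_dim)

    def forward(self, x):
        _, h = self.encoder(x)                  # h: (1, batch, hidden)
        # Feed the latent state to the decoder at every time step.
        z = h.transpose(0, 1).repeat(1, x.size(1), 1)
        y, _ = self.decoder(z)
        return self.out(y)                      # reconstructed trajectory

def anomaly_scores(model, trajectories):
    """Mean squared reconstruction error per trajectory (higher = more anomalous)."""
    with torch.no_grad():
        recon = model(trajectories)
        return ((recon - trajectories) ** 2).mean(dim=(1, 2))

# Trajectories whose score exceeds a threshold (e.g., a high quantile of the
# scores on normal training data) are flagged as anomalous.
```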

An Efficient Association Rule Mining-Based Spatial Keyword Index

Spatial keyword queries have attracted the attention of many researchers. Most existing spatial keyword indexes do not consider differences in keyword distribution, so their efficiency suffers when the data are skewed. To this end, this paper proposes a novel association rule mining-based spatial keyword index, ARM-SQ, whose inverted lists are materialized from the frequent item sets mined by association rules; intersections of long lists can thus be avoided. To prevent the excessive space costs caused by materialization, a depth-based materialization strategy is introduced, which maintains a good balance between query and space costs. To select the right frequent item sets for answering a query, the authors further implement a benefit-based greedy frequent item set selection algorithm, BGF-Selection. The experimental results show that this algorithm significantly outperforms existing algorithms, and its efficiency can be an order of magnitude higher than that of SFC-Quad.
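
For intuition, the toy sketch below materializes inverted lists for frequent keyword sets so that a multi-keyword query can be answered from one precomputed list instead of intersecting several long ones. The exhaustive enumeration of keyword combinations stands in for the paper's association rule mining, and the depth-based strategy and BGF-Selection are omitted; all names and thresholds are assumptions.

```python
# A toy sketch: exhaustive enumeration stands in for association rule mining;
# all names and thresholds are assumptions.
from collections import defaultdict
from itertools import combinations

def build_index(objects, min_support=2, max_set_size=2):
    """objects: iterable of (object_id, keyword_set)."""
    single = defaultdict(set)                   # keyword -> inverted list
    for oid, kws in objects:
        for kw in kws:
            single[kw].add(oid)
    materialized = {}                           # frequent set -> inverted list
    for size in range(2, max_set_size + 1):
        for combo in combinations(sorted(single), size):
            ids = set.intersection(*(single[k] for k in combo))
            if len(ids) >= min_support:         # keep only frequent item sets
                materialized[combo] = ids
    return single, materialized

def query(keywords, single, materialized):
    """Answer a conjunctive keyword query, preferring a materialized list."""
    key = tuple(sorted(keywords))
    if key in materialized:                     # one lookup, no intersection
        return materialized[key]
    return set.intersection(*(single[k] for k in key))
```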

Combining BPSO and ELM Models for Inferring Novel lncRNA-Disease Associations

It is widely known that long non-coding RNA (lncRNA) plays an important role in gene expression and regulation. However, several characteristics of lncRNA data (e.g., huge volume, high dimensionality, lack of annotated samples) make identifying key lncRNAs closely related to a specific disease extremely difficult. In this paper, the authors propose a computational method to predict key lncRNAs closely related to a corresponding disease. The proposed solution uses a BPSO-based intelligent algorithm to select candidate optimal lncRNA subsets, and an ML-ELM-based deep learning model to evaluate each subset. A wrapper feature selection method is then used to select, from massive data, the lncRNAs most closely related to the pathophysiology of the disease. Experiments on three typical open datasets demonstrate the feasibility and efficiency of the proposed solution, which achieves above 93% accuracy, the best result among the compared methods.
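
The wrapper loop at the heart of such a method can be sketched as follows: binary PSO explores feature masks, and a classifier's accuracy serves as the fitness of each mask. Here a generic `evaluate` callback stands in for the authors' ML-ELM model, and all hyperparameters are illustrative.

```python
# A compact sketch, assuming a generic `evaluate` callback (e.g., classifier
# accuracy on the selected features) in place of the authors' ML-ELM model.
import numpy as np

def bpso_select(n_feat, evaluate, n_particles=20, n_iter=50, w=0.7, c1=1.5, c2=1.5):
    """evaluate(mask) -> fitness, where mask is a 0/1 feature vector
    (an all-zero mask should return a very low fitness)."""
    rng = np.random.default_rng(0)
    pos = (rng.random((n_particles, n_feat)) < 0.5).astype(float)
    vel = np.zeros((n_particles, n_feat))
    pbest = pos.copy()
    pbest_fit = np.array([evaluate(p) for p in pos])
    gbest = pbest[pbest_fit.argmax()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(vel.shape), rng.random(vel.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        # Sigmoid transfer function turns velocities into bit probabilities.
        pos = (rng.random(vel.shape) < 1.0 / (1.0 + np.exp(-vel))).astype(float)
        fit = np.array([evaluate(p) for p in pos])
        better = fit > pbest_fit
        pbest[better], pbest_fit[better] = pos[better], fit[better]
        gbest = pbest[pbest_fit.argmax()].copy()
    return gbest.astype(bool)                   # best feature mask found
```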

Iterative and Semi-Supervised Design of Chatbots Using Interactive Clustering

Chatbots represent a promising tool for automating the processing of requests in a business context. However, despite major progress in natural language processing technologies, constructing a dataset deemed relevant by business experts remains a manual, iterative, and error-prone process. To assist these experts during modelling and labelling, the authors propose an active learning methodology coined Interactive Clustering. It relies on interactions between computer-guided segmentation of the data into intents and response-driven human annotations that impose constraints on clusters to improve relevance. This article applies Interactive Clustering to a realistic dataset and measures the optimal settings required for a relevant segmentation with a minimal number of annotations. The usability of the method is discussed in terms of computation time and the compromise achieved between business relevance and classification performance during training. In this context, Interactive Clustering appears as a suitable methodology combining human and computer initiatives to efficiently develop a usable chatbot.
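
A minimal sketch of this loop is given below, assuming a simplified COP-KMeans-style step to honor pairwise must-link and cannot-link constraints; the `annotator` callback models the human expert, and every parameter is illustrative rather than taken from the paper.

```python
# A minimal sketch, assuming numeric feature vectors and a human `annotator`
# callback that returns new must-link / cannot-link index pairs each round.
import numpy as np

def violates(i, c, labels, must_link, cannot_link):
    """Would assigning point i to cluster c break a known constraint?"""
    for a, b in must_link:
        j = b if a == i else a if b == i else None
        if j is not None and labels[j] not in (-1, c):
            return True
    for a, b in cannot_link:
        j = b if a == i else a if b == i else None
        if j is not None and labels[j] == c:
            return True
    return False

def constrained_kmeans(X, k, must_link, cannot_link, n_iter=20, seed=0):
    """Simplified COP-KMeans: greedy constraint-respecting assignment.
    Points left at -1 could not be placed without a violation."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)].astype(float)
    labels = -np.ones(len(X), dtype=int)
    for _ in range(n_iter):
        labels[:] = -1
        for i in range(len(X)):
            for c in np.argsort(np.linalg.norm(centers - X[i], axis=1)):
                if not violates(i, c, labels, must_link, cannot_link):
                    labels[i] = c               # nearest feasible center
                    break
        for c in range(k):
            if (labels == c).any():
                centers[c] = X[labels == c].mean(axis=0)
    return labels

def interactive_clustering(X, annotator, k=5, rounds=3):
    """Alternate computer-guided clustering and human constraint annotation."""
    must_link, cannot_link = [], []
    labels = constrained_kmeans(X, k, must_link, cannot_link)
    for _ in range(rounds):
        ml, cl = annotator(X, labels)           # expert reviews sampled pairs
        must_link += ml
        cannot_link += cl
        labels = constrained_kmeans(X, k, must_link, cannot_link)
    return labels
```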

Boat Detection in Marina Using Time-Delay Analysis and Deep Learning

An autonomous acoustic system based on two bottom-moored hydrophones, a two-input audio board, and a small single-board computer was installed at the entrance of a marina to detect entering and exiting boats. The system calculates windowed time-lagged cross-correlations to find the consecutive time delays between the hydrophone signals and to compute a signal that is a function of the boats' angular trajectories. Since its installation, the single-board computer has performed online prediction with a signal processing-based algorithm that achieved an accuracy of 80%. To improve system performance, a convolutional neural network (CNN) was trained on the acquired data to perform real-time detection. Two classification tasks (binary and multiclass) were considered, to detect both a boat's presence and its direction of navigation. Finally, the trained CNN was implemented on the single-board computer to ensure that prediction can be performed in real time.
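
The time-delay part of such a processing chain can be sketched as follows: each window of the two hydrophone signals is cross-correlated, and the lag of the correlation peak gives one delay estimate. The sample rate and window length below are assumptions for illustration.

```python
# A minimal sketch, assuming two synchronized mono signals as NumPy arrays;
# the sample rate and window length are illustrative.
import numpy as np
from scipy.signal import correlate, correlation_lags

def window_delays(sig_a, sig_b, fs=48_000, win_s=1.0):
    """One inter-hydrophone delay estimate (seconds) per non-overlapping window."""
    n = int(fs * win_s)
    delays = []
    for start in range(0, min(len(sig_a), len(sig_b)) - n + 1, n):
        a = sig_a[start:start + n] - sig_a[start:start + n].mean()
        b = sig_b[start:start + n] - sig_b[start:start + n].mean()
        xc = correlate(a, b, mode="full")       # windowed cross-correlation
        lags = correlation_lags(len(a), len(b), mode="full")
        delays.append(lags[np.argmax(xc)] / fs) # lag of the correlation peak
    return np.array(delays)

# The sign and evolution of successive delays trace the boat's angular
# trajectory past the two hydrophones (entering vs. exiting the marina).
```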

Efficient Open Domain Question Answering With Delayed Attention in Transformer-Based Models

Open Domain Question Answering (ODQA) on a large-scale corpus of documents (e.g., Wikipedia) is a key challenge in computer science. Although Transformer-based language models such as BERT have shown an ability to outperform humans at extracting answers from small pre-selected passages of text, their high computational complexity becomes prohibitive when the search space grows much larger. The most common way to deal with this problem is to add a preliminary information retrieval step that strongly filters the corpus and keeps only the relevant passages. In this article, the authors consider a more direct and complementary solution, which consists in restricting the attention mechanism in Transformer-based models to allow more efficient management of computations. The resulting variants are competitive with the original models on the extractive task and, in the ODQA setting, allow a significant acceleration of predictions and sometimes even an improvement in answer quality.
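
The restriction can be pictured as a layer-dependent attention mask: in early layers, question and passage tokens attend only within their own segment (so passage representations are question-independent and can be precomputed), and full cross-attention is enabled only from some layer onward. The sketch below shows the mask construction; the split point `cross_from_layer` is an assumed hyperparameter, not the authors' configuration.

```python
# A minimal sketch of the masking idea; `cross_from_layer` is an assumed
# hyperparameter, not the authors' configuration.
import torch

def delayed_attention_mask(seg_ids, layer, cross_from_layer=9):
    """seg_ids: (seq_len,) tensor with 0 for question tokens, 1 for passage
    tokens. Returns a (seq_len, seq_len) boolean mask, True = attend."""
    if layer >= cross_from_layer:
        # Late layers: full attention between question and passage.
        return torch.ones(len(seg_ids), len(seg_ids), dtype=torch.bool)
    # Early layers: block-diagonal mask, each segment attends only to itself,
    # so passage encodings can be computed once and cached.
    return seg_ids.unsqueeze(0) == seg_ids.unsqueeze(1)
```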

A Method for Generating Comparison Tables From the Semantic Web

This paper presents Versus, the first automatic method for generating comparison tables from knowledge bases of the Semantic Web. For this purpose, it introduces the contextual reference level, which evaluates whether a feature is relevant for comparing a set of entities. This measure relies on contexts, i.e., sets of entities similar to the compared entities; its principle is to favor features whose values for the compared entities are reference (or frequent) in these contexts. The proposal efficiently evaluates the contextual reference level from a public SPARQL endpoint limited by a fair-use policy. Using a new benchmark based on Wikidata, the experiments show the interest of the contextual reference level for identifying the features deemed relevant by users, with high precision and recall. In addition, the proposed optimizations significantly reduce the number of required queries, for properties as well as for inverse relations. Interestingly, this experimental study also shows that inverse relations bring out a large number of numerical comparison features.
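
As a hypothetical illustration of querying such a measure against a fair-use endpoint, the snippet below computes how frequent a given property value is among a set of context entities on Wikidata. The query shape and the scoring are simplified assumptions, not the Versus algorithm itself.

```python
# A hypothetical sketch using the SPARQLWrapper library against the public
# Wikidata endpoint; identifiers and scoring are illustrative simplifications.
from SPARQLWrapper import SPARQLWrapper, JSON

def value_frequency(endpoint, prop, value, context_entities):
    """Fraction of context entities whose value for `prop` equals `value`."""
    values = " ".join(f"wd:{e}" for e in context_entities)
    query = f"""
    PREFIX wd: <http://www.wikidata.org/entity/>
    PREFIX wdt: <http://www.wikidata.org/prop/direct/>
    SELECT (COUNT(?e) AS ?n) WHERE {{
      VALUES ?e {{ {values} }}
      ?e wdt:{prop} wd:{value} .
    }}"""
    sparql = SPARQLWrapper(endpoint, agent="versus-sketch/0.1")  # fair use
    sparql.setQuery(query)
    sparql.setReturnFormat(JSON)
    result = sparql.query().convert()
    n = int(result["results"]["bindings"][0]["n"]["value"])
    return n / len(context_entities)

# Example: value_frequency("https://query.wikidata.org/sparql", "P17", "Q142",
# ["Q90", "Q456"]) asks how many of the context entities have country France.
```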

Concept of Temporal Pretopology for the Analysis of Structural Changes

Pretopology is a mathematical model developed by weakening the topological axiomatic. It was initially used in the economic, social, and biological sciences, then in pattern recognition and image analysis; more recently, it has been applied to the analysis of complex networks. Pretopology makes it possible to work in a mathematical framework with weak properties, and its non-idempotent operator, called pseudo-closure, supports iterative algorithms. It offers a formalism that generalizes graph theory concepts and allows problems to be modeled in a universal way. In this paper, the authors extend this mathematical model to analyze complex data with spatiotemporal dimensions. They define the notion of temporal pretopology based on a temporal function, give an example of a temporal function based on a binary relation, and construct a temporal pretopology from it. They define two new notions of temporal substructures that aim at representing the evolution of substructures, and propose algorithms to extract these substructures. They evaluate the proposal on two datasets and two real economic datasets.
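
The non-idempotent pseudo-closure can be illustrated on a binary relation as follows: one application adds the direct successors of a set, and iterating it to a fixed point yields the closure. This toy sketch only mirrors the definitions above; the universe and relation are illustrative.

```python
# A toy sketch of the pseudo-closure on a binary relation; the universe and
# relation are illustrative.
def pseudo_closure(A, relation, universe):
    """a(A): A plus every element related to some element of A."""
    return A | {x for x in universe if any((y, x) in relation for y in A)}

def closure(A, relation, universe):
    """Iterate the pseudo-closure until a fixed point is reached."""
    current = set(A)
    while True:
        nxt = pseudo_closure(current, relation, universe)
        if nxt == current:
            return current
        current = nxt

U = {1, 2, 3, 4, 5}
R = {(1, 2), (2, 3), (4, 5)}
print(pseudo_closure({1}, R, U))  # {1, 2}: one application only
print(closure({1}, R, U))         # {1, 2, 3}: non-idempotence needs iteration
```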

A New Approach for Fairness Increment of Consensus-Driven Group Recommender Systems Based on Choquet Integral

Recent years have witnessed the rise of group recommender systems (GRSs) in e-commerce and tourism applications such as Booking.com, Traveloka.com, and Amazon. One of the most pressing problems in GRSs, addressed by so-called consensus-driven group recommender systems, is to guarantee fairness between the users in a group. This paper proposes a new flexible alternative that embeds a fuzzy measure into the aggregation operators of the consensus process, to improve the fairness of group recommendations and to deal with group member interaction. The Choquet integral is used to build a fuzzy measure based on group member interactions and to seek fairer recommendations. Empirical results on benchmark datasets show the advances of the proposal in dealing with group member interactions and with the fairness issue in consensus-driven GRSs.
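
For reference, the discrete Choquet integral used for this kind of aggregation can be sketched as below; the capacity is a toy fuzzy measure standing in for one built from group member interactions as the paper proposes.

```python
# A minimal sketch of the discrete Choquet integral; the capacity `mu` is a
# toy fuzzy measure, not one learned from member interactions.
def choquet(scores, mu):
    """scores: dict member -> rating; mu: dict frozenset -> capacity in [0, 1],
    monotone, with mu(empty set) = 0 and mu(all members) = 1."""
    members = sorted(scores, key=scores.get)    # ratings in ascending order
    result, prev = 0.0, 0.0
    for i, m in enumerate(members):
        coalition = frozenset(members[i:])      # members rating >= scores[m]
        result += (scores[m] - prev) * mu[coalition]
        prev = scores[m]
    return result

# Two members, with a capacity that rewards their agreement:
mu = {frozenset(): 0.0, frozenset({"a"}): 0.4,
      frozenset({"b"}): 0.4, frozenset({"a", "b"}): 1.0}
print(choquet({"a": 3.0, "b": 5.0}, mu))        # 3.0 * 1.0 + 2.0 * 0.4 = 3.8
```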