The hippocampus pre-orders movements for skilled action sequences
Hierarchical cortical entrainment orchestrates the multisensory processing of biological motion
A theory of temporal self-supervised learning in neocortical layers
BDCC, Vol. 8, Pages 44: Topic Modelling: Going beyond Token Outputs
Big Data and Cognitive Computing doi: 10.3390/bdcc8050044
Authors: Lowri Williams, Eirini Anthi, Laura Arman, Pete Burnap
Topic modelling is a text mining technique for identifying salient themes across a collection of documents. The output is commonly a set of topics, each consisting of isolated tokens that frequently co-occur in those documents. Interpreting a topic's meaning from such tokens typically requires manual effort, and from a human's perspective the outputs often do not provide enough information to infer what a topic is about, so topics are easily misinterpreted. Although several studies have attempted to automatically extend topic descriptions as a means of enhancing the interpretation of topic models, they rely on external language sources that may become unavailable, must be kept up to date to produce relevant results, and raise privacy issues when training on or processing data. This paper presents a novel approach to extending the output of traditional topic modelling methods beyond a list of isolated tokens. The approach removes the dependence on external sources by using the textual data themselves: high-scoring keywords are extracted from the documents and mapped to the topic model's token outputs. To benchmark the proposed method against the state of the art, a comparative analysis against results produced by Large Language Models (LLMs) is presented. The results show that the proposed method matches the thematic coverage found in LLM outputs and often surpasses it by bridging the gap between broad thematic elements and granular details. In addition, to demonstrate the generalisability of the proposed method, the approach was further evaluated using two other topic modelling methods as the underlying models and on a heterogeneous unseen dataset.
To measure the interpretability of the proposed outputs against those of a traditional topic modelling approach, independent annotators manually scored each output for quality and usefulness, and the efficiency of the annotation task was recorded. The proposed approach yielded higher quality and usefulness scores, as well as a more efficient annotation task, than the outputs of the traditional topic modelling method, demonstrating an increase in interpretability.
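The abstract describes the core idea (corpus-internal keyword extraction mapped onto a topic's token output) without giving the exact algorithm. The sketch below is a hypothetical minimal version, not the authors' method: it scores candidate keywords with a hand-rolled TF-IDF over the corpus itself, then extends a topic's token list with high-scoring keywords that co-occur with the topic's tokens in some document. All function names and the co-occurrence criterion are assumptions for illustration.

```python
import math
from collections import Counter

def tfidf_keywords(docs, top_k=5):
    """Score every term by its maximum TF-IDF across the corpus.
    Corpus-internal only: no external language resources are consulted."""
    tokenized = [d.lower().split() for d in docs]
    df = Counter(t for doc in tokenized for t in set(doc))
    n = len(tokenized)
    scores = {}
    for doc in tokenized:
        tf = Counter(doc)
        for term, count in tf.items():
            s = (count / len(doc)) * math.log((1 + n) / (1 + df[term]))
            scores[term] = max(scores.get(term, 0.0), s)
    return [t for t, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]]

def extend_topic(topic_tokens, docs, top_k=5):
    """Extend a topic's isolated token list with high-scoring corpus
    keywords that co-occur with a topic token in at least one document
    (assumed mapping rule, for illustration only)."""
    keywords = tfidf_keywords(docs, top_k=top_k * 3)
    token_set = set(topic_tokens)
    extended = list(topic_tokens)
    for kw in keywords:
        if kw in token_set:
            continue
        for doc in docs:
            terms = set(doc.lower().split())
            if kw in terms and token_set & terms:
                extended.append(kw)
                break
        if len(extended) >= len(topic_tokens) + top_k:
            break
    return extended
```

For example, given a topic `["solar", "energy"]` over a small corpus about renewable power, the extended output adds co-occurring descriptive terms (e.g. "panels", "turbines") rather than leaving the topic as two isolated tokens.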
Characterising time-on-task effects on oscillatory and aperiodic EEG components and their co-variation with visual task performance.
Anatomical circuits for flexible spatial mapping by single neurons in posterior parietal cortex
A supervised contrastive learning-based model for image emotion classification
Abstract
Images play a vital role on social media platforms, where they can vividly reflect people’s inner emotions and preferences, so visual sentiment analysis has become an important research topic. In this paper, we propose a Supervised Contrastive Learning-based model for image emotion classification, which consists of two modules, low-level feature extraction and deep emotional feature extraction, with feature fusion used to enhance the overall perception of image emotions. In the low-level feature extraction module, the LBP-U (Local Binary Patterns with Uniform Patterns) algorithm is employed to extract texture features from the images, effectively capturing texture information that helps differentiate images belonging to different emotion categories. In the deep emotional feature extraction module, we introduce a Supervised Contrastive Learning approach that improves the extraction of deep emotional features by narrowing the intra-class distance among images of the same emotion category while expanding the inter-class distance between images of different emotion categories. By fusing the low-level and deep emotional features, our model comprehensively utilizes features at different levels, thereby enhancing overall emotion classification performance. To assess the classification performance and generalization capability of the proposed model, we conduct experiments on the publicly available FI (Flickr and Instagram) Emotion dataset. Comparative analysis of the experimental results demonstrates that our proposed model performs well for image emotion classification. Additionally, we conduct ablation experiments to analyze the impact of different levels of features and various loss functions on the model’s performance, thereby validating the superiority of our proposed approach.
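The supervised contrastive objective described above (pulling same-class embeddings together, pushing different-class embeddings apart) can be sketched as follows. This is a minimal NumPy implementation of the standard SupCon loss, not the paper's exact training code; the function name and temperature value are illustrative.

```python
import numpy as np

def supcon_loss(features, labels, tau=0.1):
    """Supervised contrastive loss: for each anchor, maximise the
    log-probability of its same-label positives relative to all other
    samples in the batch.
    features: (n, d) embedding array; labels: (n,) integer class ids."""
    z = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = z @ z.T / tau                       # temperature-scaled cosine sims
    n = len(labels)
    self_mask = np.eye(n, dtype=bool)
    # denominator excludes the anchor itself
    sim_no_self = np.where(self_mask, -np.inf, sim)
    log_prob = sim - np.log(np.exp(sim_no_self).sum(axis=1, keepdims=True))
    positives = (labels[:, None] == labels[None, :]) & ~self_mask
    # average negative log-probability over each anchor's positives
    per_anchor = -(log_prob * positives).sum(axis=1) / np.maximum(
        positives.sum(axis=1), 1)
    return per_anchor.mean()
```

The key property the abstract relies on: when embeddings of the same emotion class cluster tightly, the loss is small; when class labels are scattered across clusters, it grows, which is what drives intra-class compaction and inter-class separation during training.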
Multiple hypergraph convolutional network social recommendation using dual contrastive learning
Abstract
Due to the strong representational capability of graph structures for social networks, social relationships are often used to improve recommendation quality. Most existing social recommendation models exploit pairwise relations to mine latent user preferences. However, since user interactions are relatively complex and may involve higher-order relationships, the performance of such models in real-world applications is limited. Furthermore, user behavior data in many practical recommendation scenarios tend to be noisy and sparse, which may lead to suboptimal representations. To address these issues, we propose a dual objective contrastive learning multiple hypergraph convolution model for social recommendation (DCMHS). Specifically, our model first constructs hypergraphs from different social relationships. We then build hypergraph encoders to obtain higher-order user representations through hypergraph convolution. To avoid the information loss caused by collapsing user embeddings from different views into a single representation, we construct neighbor-identification and semantic-identification contrastive learning objectives that iteratively refine the user representation. In addition, we optimize the negative sampling process using the global embedding of items. The results of experiments conducted on real-world datasets demonstrate the effectiveness of the proposed DCMHS, and the ablation study validates the rationality of the model's different components.
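The hypergraph convolution mentioned above generalises graph convolution to hyperedges that connect more than two nodes (e.g. a whole social group as one edge). A minimal sketch using the standard HGNN-style normalisation is below; it is an assumed, generic formulation, not the DCMHS encoder itself, and the hyperedge weights are taken as identity for simplicity.

```python
import numpy as np

def hypergraph_conv(X, H, Theta):
    """One hypergraph convolution layer with symmetric normalisation:
        X' = ReLU( Dv^{-1/2} H De^{-1} H^T Dv^{-1/2} X Theta )
    X:     (n_nodes, d_in) node features
    H:     (n_nodes, n_edges) incidence matrix; column j marks the
           members of hyperedge j (e.g. one social circle)
    Theta: (d_in, d_out) learnable weight matrix"""
    Dv = H.sum(axis=1)                     # node degrees
    De = H.sum(axis=0)                     # hyperedge degrees
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(Dv, 1e-12)))
    De_inv = np.diag(1.0 / np.maximum(De, 1e-12))
    # propagate features through hyperedges, normalised by both degrees
    A = Dv_inv_sqrt @ H @ De_inv @ H.T @ Dv_inv_sqrt
    return np.maximum(A @ X @ Theta, 0.0)  # ReLU activation
```

Because `H @ H.T` mixes every pair of nodes that share a hyperedge, a single layer already captures the higher-order (group-level) relationships that pairwise graph edges cannot express.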