Research on Multi-Parameter Prediction of Rabbit Housing Environment Based on Transformer

The rabbit breeding industry exhibits vast economic potential and growth opportunities. Nevertheless, ineffective prediction of environmental conditions in rabbit houses often leads to the spread of infectious diseases, causing illness and death among rabbits. This paper presents a multi-parameter predictive model for environmental conditions in rabbit houses, including temperature, humidity, illumination, CO2 concentration, NH3 concentration, and dust. The model distinguishes between day and night forecasts, improving its adaptive adjustment to trends in the environmental data. Importantly, the model forecasts the environmental parameters jointly to heighten precision, since the parameters are highly interrelated. The model's performance is assessed with RMSE, MAE, and MAPE, yielding values of 0.018, 0.031, and 6.31%, respectively, when predicting rabbit house environmental factors. In experimental comparisons with BERT, Seq2seq, and conventional Transformer models, the method demonstrates superior performance.
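
A minimal sketch of what joint multi-parameter forecasting with a Transformer encoder can look like is shown below. It is not the authors' exact architecture: the window length, layer sizes, and the handling of the day/night signal as an extra input feature are illustrative assumptions.

```python
# Sketch of a multi-parameter environment forecaster (illustrative, not the paper's exact model).
# Assumptions: 6 input parameters (temperature, humidity, illumination, CO2, NH3, dust),
# a binary day/night flag appended as a 7th feature, and single-step-ahead joint prediction.
import torch
import torch.nn as nn

class EnvTransformer(nn.Module):
    def __init__(self, n_params=6, d_model=64, nhead=4, num_layers=2, window=48):
        super().__init__()
        self.input_proj = nn.Linear(n_params + 1, d_model)   # +1 for the day/night flag
        self.pos_embed = nn.Parameter(torch.zeros(1, window, d_model))
        encoder_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers)
        self.head = nn.Linear(d_model, n_params)              # joint multi-parameter output

    def forward(self, x):
        # x: (batch, window, n_params + 1) -- past readings plus the day/night flag
        h = self.input_proj(x) + self.pos_embed[:, : x.size(1)]
        h = self.encoder(h)
        return self.head(h[:, -1])                            # next-step values for all parameters

# toy usage: batch of 8 windows, 48 time steps, 6 parameters + day/night flag
model = EnvTransformer()
x = torch.randn(8, 48, 7)
print(model(x).shape)  # torch.Size([8, 6])
```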

Deep Transfer Learning Based on LSTM Model for Reservoir Flood Forecasting

In recent years, deep learning has been widely used as an efficient prediction approach. However, such algorithms place strict requirements on the size of the training sample: without enough samples to train the network, it is difficult to achieve the desired effect. To address the lack of training samples, this article proposes a deep learning prediction model that integrates transfer learning and applies it to flood forecasting. The model uses a random forest algorithm to extract flood characteristics, then uses a transfer learning strategy to fine-tune the parameters of a model pretrained on data from a similar reservoir, and applies the result to flood prediction for the target reservoir. An autoregressive algorithm is then used to intelligently correct the errors of the prediction results. A series of experiments shows that the proposed method is significantly superior to other classical methods in prediction accuracy.
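
The sketch below illustrates the transfer-learning and error-correction steps only; it is an assumption-laden stand-in, not the authors' pipeline. It assumes an LSTM pretrained on a data-rich source reservoir, fine-tuning of the output head on a small target dataset with the LSTM body frozen, and a simple AR(1) residual correction in place of the paper's autoregressive correction (the random-forest feature extraction step is omitted).

```python
# Sketch of transfer learning for flood forecasting (illustrative assumptions throughout).
import numpy as np
import torch
import torch.nn as nn

class FloodLSTM(nn.Module):
    def __init__(self, n_features=8, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                      # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1]).squeeze(-1)

def fine_tune(pretrained, target_x, target_y, epochs=50, lr=1e-3):
    """Freeze the LSTM body and adapt only the head to the target reservoir."""
    for p in pretrained.lstm.parameters():
        p.requires_grad = False
    opt = torch.optim.Adam(pretrained.head.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(pretrained(target_x), target_y)
        loss.backward()
        opt.step()
    return pretrained

def ar1_correction(residuals):
    """Estimate an AR(1) coefficient from past residuals and predict the next one."""
    r = np.asarray(residuals, dtype=float)
    phi = np.dot(r[1:], r[:-1]) / np.dot(r[:-1], r[:-1])
    return phi * r[-1]

# toy usage with random tensors standing in for real flood records
model = FloodLSTM()                                    # assume pretrained on the source reservoir
x_t, y_t = torch.randn(16, 24, 8), torch.randn(16)
model = fine_tune(model, x_t, y_t)
corrected = model(x_t).detach().numpy() + ar1_correction(np.random.randn(30))
```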

A Fuzzy Portfolio Model With Cardinality Constraints Based on Differential Evolution Algorithms

Uncertain information in the securities market exhibits fuzziness. In this article, expected returns and liquidity are modeled as trapezoidal fuzzy numbers. The possibility mean and mean absolute deviation of expected returns represent the returns and risks of securities assets, while the possibility mean of expected turnover represents their liquidity. Taking into account practical constraints such as cardinality and transaction costs, the article establishes a fuzzy portfolio model with cardinality constraints and solves it using the differential evolution algorithm. Finally, the fuzzy c-means clustering algorithm is used to select 12 stocks as empirical samples for numerical examples. The same algorithm is also used to cluster the stock yield data and analyze the stock data comprehensively and accurately, providing a reference for establishing an effective portfolio.
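
A minimal sketch of the optimization idea appears below. It is not the authors' formulation: it assumes the common Carlsson-Fullér possibility mean (a + 2b + 2c + d)/6 for a trapezoidal fuzzy number (a, b, c, d), handles the cardinality limit with a simple penalty, ignores transaction costs and the risk term, and uses scipy's differential_evolution as the solver; the fuzzy returns are synthetic.

```python
# Sketch of a cardinality-constrained fuzzy portfolio solved with differential evolution.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
n_assets, max_holdings = 12, 5

# toy trapezoidal fuzzy returns per asset: columns are (a, b, c, d)
base = rng.uniform(0.00, 0.10, size=(n_assets, 1))
trap = base + np.array([[-0.02, -0.01, 0.01, 0.02]])

def possibility_mean(t):
    a, b, c, d = t.T
    return (a + 2 * b + 2 * c + d) / 6.0        # assumed Carlsson-Fullér definition

mu = possibility_mean(trap)

def objective(w):
    w = np.clip(w, 0, None)
    if w.sum() == 0:
        return 1e6
    w = w / w.sum()                              # normalize to a fully invested portfolio
    ret = w @ mu
    cardinality = np.sum(w > 1e-3)
    penalty = 10.0 * max(0, cardinality - max_holdings)   # penalize exceeding the holding limit
    return -ret + penalty                        # maximize return => minimize its negative

result = differential_evolution(objective, bounds=[(0, 1)] * n_assets, seed=1, tol=1e-7)
weights = np.clip(result.x, 0, None)
weights /= weights.sum()
print(np.round(weights, 3))
```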

An Evaluation of the Financial Impact on Business Performance of the Adoption of E-Business via Blockchain Technology

Investors can learn a lot about the health of a firm by looking at its financial performance (FP). FP offers a glimpse into the company's financial health and performance, as well as a forecast of the stock's future performance. Certain criteria, including liquidity, ownership, maturity, and size, have been linked to financial success. Blockchain provides several benefits in the logistics business, including increased trust in the system owing to improved transparency and traceability, and cost savings from removing manual and paper-based administration. The study uses the FP-BCT technique, a new approach to measuring company performance. E-business also expands data exchange, its scope, and data volume. Improved processing capabilities affect the macroeconomic and financial environments, ensuring timely use of information and decreasing costs.

Accountancy for E-Business Enterprises Based on Cyber Security

E-businesses (EBEs) may commit legal offenses as a result of cybercrime perpetrated in the course of commercial activity. According to the findings, various obstacles can deter cybercrime throughout accounting. The study examined the present laws governing accounting policy elements and determined which aspects should be included in the administrative document governing e-business enterprise accounting policies. To ensure their success, e-businesses must avoid cybercrime (CC), which damages the company's brand and diminishes client loyalty. According to the study's findings, the information and control functions of accounting can help prevent cybercrime in the bookkeeping system by enriching the content of individual internal rules. The authors intended to make online payments for EBE-CC as safe, easy, and fast as possible. However, the internet is known for making its users feel anonymous, and e-commerce (EC) transactions are vulnerable to cybercrime, resulting in considerable losses of money and personal information.

Effect Analysis of Nursing Intervention on Lower Extremity Deep Venous Thrombosis in Patients

In the modern era, nursing intervention reflects an increased commitment to patient quality and protection that allows nurses to make evidence-based healthcare decisions. Challenging patient conditions such as deep venous thrombosis (DVT) and respiratory embolisms (RE) are significant health problems that lead to severe post-operative injury and death. In this article, hybrid machine learning (HML) is applied to elderly patients with lower extremity fractures during the perioperative period, examining the clinical effectiveness of an early-stage nursing protocol for deep venous thrombosis for patients and nurses. A three-dimensional shape model in the user interface shows the examined vessels, with compression measurements mapped to the surface as colors, along with a virtual image plane representation of DVT. The measures of comprehension were validated against HML model segmentation experts and compared using paired f-tests, with the aim of reducing the incidence of lower extremity deep venous thrombosis in patients and nurses.

Clustering of COVID-19 Multi-Time Series-Based K-Means and PCA With Forecasting

The COVID-19 pandemic is one of the current universal threats to humanity. The entire world is cooperating persistently to find ways to decrease its effect. The time series is one of the basic tools for developing an accurate prediction model for future estimates of the expansion of this virus, given its infective nature. The authors discuss the goals of the study, the problems, definitions, and previous studies. They also address the theoretical aspects of clustering multiple time series using both K-means and time-series clustering. They then apply these techniques, with ARIMA used to build a prototype that gives specific predictions of the impact of the COVID-19 pandemic over 90 to 140 days. The modeling and prediction are based on the dataset made available by the Saudi Ministry of Health for Riyadh, Jeddah, Makkah, and Dammam over the previous four months, and the model is implemented and evaluated in Python. Based on this proposed method, the authors present their conclusions.
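
A minimal sketch of the clustering-plus-forecasting idea is given below. It is not the authors' pipeline: synthetic daily case curves stand in for the Ministry of Health data, PCA reduces each city's series before K-means groups similar cities, and a generic ARIMA(1, 1, 1) from statsmodels forecasts 90 days ahead for a single city.

```python
# Sketch of K-means clustering of city-level case curves plus an ARIMA forecast.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
days, cities = 120, ["Riyadh", "Jeddah", "Makkah", "Dammam"]
# synthetic cumulative case curves: one row per city, rising trend plus noise
series = np.cumsum(rng.poisson(lam=[[30], [25], [20], [15]], size=(4, days)), axis=1)

# PCA then K-means: each city's whole curve is treated as one sample
reduced = PCA(n_components=2).fit_transform(series)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(reduced)
print(dict(zip(cities, labels)))

# ARIMA forecast 90 days ahead for the first city
fit = ARIMA(series[0].astype(float), order=(1, 1, 1)).fit()
forecast = fit.forecast(steps=90)
print(forecast[:5])
```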

Top-K Pseudo Labeling for Semi-Supervised Image Classification

In this paper, a top-k pseudo labeling method for semi-supervised self-learning is proposed. Pseudo labeling is a key technology in semi-supervised self-learning: the quality of the generated pseudo labels largely determines the convergence of the neural network and the accuracy obtained. The authors use a method called top-k pseudo labeling to generate pseudo labels during the training of a semi-supervised neural network model. The proposed labeling method helps considerably in learning features from unlabeled data. It is easy to implement and relies only on the neural network's predictions and the hyper-parameter k. Experimental results show that the proposed method works well for semi-supervised learning on the CIFAR-10 and CIFAR-100 datasets. A variant of top-k labeling for supervised learning, named top-k regulation, is also proposed; experiments show that various models achieve higher test-set accuracy when trained with top-k regulation.
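
The sketch below shows one plausible reading of top-k pseudo labeling; the paper's exact labeling rule and loss may differ. The assumption here is that the pseudo label spreads its probability mass over the k most confident classes and that unlabeled samples are trained against this soft target with a cross-entropy-style loss.

```python
# Sketch of top-k pseudo labels as soft targets (illustrative interpretation).
import torch
import torch.nn.functional as F

def topk_pseudo_labels(logits, k=3):
    """Build soft pseudo labels from the k highest-probability classes."""
    probs = F.softmax(logits, dim=1)
    topk_vals, topk_idx = probs.topk(k, dim=1)
    targets = torch.zeros_like(probs)
    # renormalize the top-k probabilities so each pseudo label sums to 1
    targets.scatter_(1, topk_idx, topk_vals / topk_vals.sum(dim=1, keepdim=True))
    return targets

def pseudo_label_loss(logits, targets):
    """Soft cross entropy between model predictions and the top-k pseudo labels."""
    log_probs = F.log_softmax(logits, dim=1)
    return -(targets * log_probs).sum(dim=1).mean()

# toy usage: 5 unlabeled samples, 10 classes (e.g., CIFAR-10)
unlabeled_logits = torch.randn(5, 10)
targets = topk_pseudo_labels(unlabeled_logits.detach(), k=3)
loss = pseudo_label_loss(unlabeled_logits, targets)
print(loss.item())
```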

Spatiotemporal Data Prediction Model Based on a Multi-Layer Attention Mechanism

Spatiotemporal data prediction is of great significance in the fields of smart cities and smart manufacturing. Current spatiotemporal prediction models rely heavily on traditional spatial views or a single temporal granularity, and therefore miss important knowledge such as dynamic spatial correlations, periodicity, and mutability. This paper addresses these challenges by proposing a multi-layer attention-based predictive model. The key idea is to use a multi-layer attention mechanism to model the dynamic spatial correlation of different features, and then fuse multi-granularity historical features to predict future spatiotemporal data. Experiments on real-world data show that the proposed model outperforms six state-of-the-art benchmark methods.
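
A minimal sketch of the multi-layer attention idea follows; it is not the authors' architecture. The assumptions are that one attention layer weights spatial correlations across nodes dynamically, a second attends over three historical granularities (recent, daily-periodic, weekly-periodic), and a linear head predicts the next value for every node.

```python
# Sketch of a two-layer attention predictor over spatial nodes and temporal granularities.
import torch
import torch.nn as nn

class MultiLayerAttentionPredictor(nn.Module):
    def __init__(self, n_nodes=20, n_features=4, d_model=32):
        super().__init__()
        self.embed = nn.Linear(n_features, d_model)
        self.spatial_attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        self.granularity_attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        self.head = nn.Linear(d_model, 1)

    def forward(self, recent, daily, weekly):
        # each input: (batch, n_nodes, n_features) -- a snapshot at one granularity
        fused = []
        for x in (recent, daily, weekly):
            h = self.embed(x)
            h, _ = self.spatial_attn(h, h, h)        # dynamic spatial correlation across nodes
            fused.append(h)
        g = torch.stack(fused, dim=2)                # (batch, n_nodes, 3, d_model)
        b, n, k, d = g.shape
        g = g.reshape(b * n, k, d)
        g, _ = self.granularity_attn(g, g, g)        # weigh the three granularities per node
        g = g.mean(dim=1).reshape(b, n, d)
        return self.head(g).squeeze(-1)              # (batch, n_nodes) next-step prediction

model = MultiLayerAttentionPredictor()
x_recent = x_daily = x_weekly = torch.randn(8, 20, 4)
print(model(x_recent, x_daily, x_weekly).shape)      # torch.Size([8, 20])
```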

Estimating the Number of Clusters in High-Dimensional Large Datasets

Clustering is a basic primitive of exploratory tasks. To obtain valuable results, the parameters of the clustering algorithm, in particular the number of clusters, must be set appropriately. Existing methods for determining the number of clusters perform well on small low-dimensional datasets, but effectively determining the optimal number of clusters on large high-dimensional datasets remains a challenging problem. In this paper, the authors design a method for estimating the optimal number of clusters on large-scale high-dimensional datasets that overcomes the shortcomings of existing estimation methods and does so accurately and quickly. Extensive experiments show that the method (1) outperforms existing estimation methods in accuracy and efficiency, (2) generalizes across different datasets, and (3) is suitable for high-dimensional large datasets.
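
For context, the sketch below shows a common baseline for choosing the number of clusters, of the kind such estimation methods are compared against; the authors' own method is different and more scalable. The assumptions are a synthetic high-dimensional dataset, a random subsample to keep the scan cheap, and the silhouette score as the selection criterion.

```python
# Baseline sketch: scan candidate k values on a subsample and pick the best silhouette score.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

# synthetic stand-in for a large high-dimensional dataset: 20,000 points, 100 dimensions
X, _ = make_blobs(n_samples=20000, n_features=100, centers=6, random_state=0)

rng = np.random.default_rng(0)
sample = X[rng.choice(len(X), size=2000, replace=False)]   # subsample to keep the scan cheap

scores = {}
for k in range(2, 11):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(sample)
    scores[k] = silhouette_score(sample, labels)

best_k = max(scores, key=scores.get)
print(best_k, round(scores[best_k], 3))
```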