Research on Multi-Parameter Prediction of Rabbit Housing Environment Based on Transformer

The rabbit breeding industry exhibits vast economic potential and growth opportunities. Nevertheless, ineffective prediction of environmental conditions in rabbit houses often allows infectious diseases to spread, causing illness and death among rabbits. This paper presents a multi-parameter predictive model for environmental conditions in rabbit houses, including temperature, humidity, illumination, CO2 concentration, NH3 concentration, and dust. The model distinguishes between day and night forecasts, thereby improving its adaptive adjustment to environmental data trends. Importantly, because the parameters are highly interrelated, the model forecasts them jointly to improve precision. The model's performance is assessed through RMSE, MAE, and MAPE, yielding values of 0.018, 0.031, and 6.31%, respectively, in predicting rabbit house environmental factors. In experimental comparisons with BERT, Seq2seq, and conventional Transformer models, the proposed method demonstrates superior performance.
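As a point of reference, the sketch below shows how a joint multi-parameter forecast of this kind can be set up with a small Transformer encoder in PyTorch; the layer sizes, window length, and learned positional embedding are illustrative assumptions rather than the paper's exact architecture.

```python
# Minimal sketch (not the paper's exact architecture): a Transformer encoder takes a
# window of past readings for the six rabbit-house parameters and predicts all six
# values at the next time step jointly.
import torch
import torch.nn as nn

class MultiParamTransformer(nn.Module):
    def __init__(self, n_params=6, d_model=64, nhead=4, num_layers=2, seq_len=48):
        super().__init__()
        self.input_proj = nn.Linear(n_params, d_model)                  # embed the 6 parameters
        self.pos_emb = nn.Parameter(torch.zeros(1, seq_len, d_model))   # learned positions
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(d_model, n_params)                        # joint multi-parameter output

    def forward(self, x):                          # x: (batch, seq_len, 6)
        h = self.input_proj(x) + self.pos_emb[:, : x.size(1)]
        h = self.encoder(h)
        return self.head(h[:, -1])                 # next-step prediction for all 6 parameters

# Example: 48 past readings of temperature, humidity, illumination, CO2, NH3, dust.
model = MultiParamTransformer()
window = torch.randn(8, 48, 6)                     # batch of 8 normalized windows
next_step = model(window)                          # shape (8, 6)
```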

Deep Transfer Learning Based on LSTM Model for Reservoir Flood Forecasting

In recent years, deep learning has been widely used as an efficient prediction approach. However, it places strict requirements on the size of the training sample: without enough samples to train the network, it is difficult to achieve the desired effect. To address the lack of training samples, this article proposes a deep learning prediction model that integrates transfer learning and applies it to flood forecasting. The model uses a random forest algorithm to extract flood characteristics, then uses a transfer learning strategy to fine-tune the parameters of a model pre-trained on data from a similar reservoir, and applies the result to flood prediction for the target reservoir. Based on the calculated results, an autoregressive algorithm is used to intelligently correct the error of the predictions. A series of experiments shows that the proposed method is significantly superior to other classical methods in prediction accuracy.
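The transfer-learning step can be illustrated as follows; the network size, frozen layers, and feature dimensions are assumptions for the sketch, not the article's exact configuration: an LSTM forecaster pre-trained on a data-rich similar reservoir is fine-tuned on the small sample available for the target reservoir.

```python
# Illustrative sketch of the transfer-learning step (names and shapes are assumptions):
# the recurrent layers learned on the source reservoir are frozen and only the output
# head is re-fitted on the scarce target-reservoir samples.
import torch
import torch.nn as nn

class FloodLSTM(nn.Module):
    def __init__(self, n_features=8, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, 1)           # predicted inflow/level at the next step

    def forward(self, x):                          # x: (batch, seq_len, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])

def fine_tune_on_target(pretrained, target_x, target_y, epochs=50):
    """Freeze the source-trained recurrent layers and re-fit the head on target data."""
    for p in pretrained.lstm.parameters():
        p.requires_grad = False
    opt = torch.optim.Adam(pretrained.head.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(pretrained(target_x), target_y)
        loss.backward()
        opt.step()
    return pretrained

# Toy tensors standing in for flood features selected by the random-forest step.
source_model = FloodLSTM()                         # assume this was trained on the similar reservoir
x_t, y_t = torch.randn(64, 24, 8), torch.randn(64, 1)
target_model = fine_tune_on_target(source_model, x_t, y_t)
```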

A Fuzzy Portfolio Model With Cardinality Constraints Based on Differential Evolution Algorithms

Uncertain information in the securities market exhibits fuzziness. In this article, expected returns and liquidity are treated as trapezoidal fuzzy numbers. The possibility mean and mean absolute deviation of expected returns represent the return and risk of securities assets, while the possibility mean of expected turnover represents their liquidity. Taking into account practical constraints such as cardinality and transaction costs, the article establishes a fuzzy portfolio model with cardinality constraints and solves it using a differential evolution algorithm. Finally, the fuzzy c-means clustering algorithm is used to select 12 stocks as empirical samples for numerical examples. The same algorithm is also used to cluster the stock yield data and analyze the stock data comprehensively and accurately, providing a reference for constructing an effective portfolio.
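A hedged sketch of the optimization step is given below: trapezoidal expected returns are reduced to crisp values through their possibility (possibilistic) means, and SciPy's differential evolution maximizes portfolio return under a penalized cardinality limit. The toy fuzzy parameters, penalty form, and omission of the risk and liquidity terms are simplifying assumptions, not the article's full model.

```python
# Simplified sketch: possibilistic means of trapezoidal fuzzy returns as crisp returns,
# differential evolution with a penalty enforcing a cardinality limit.
import numpy as np
from scipy.optimize import differential_evolution

def possibilistic_mean(a, b, c, d):
    """Possibilistic mean of a trapezoidal fuzzy number with support [a, d] and core [b, c]."""
    return (a + d) / 6.0 + (b + c) / 3.0

# Toy trapezoidal expected returns for 5 assets: rows are (a, b, c, d).
trapezoids = np.array([
    [0.01, 0.03, 0.05, 0.08],
    [0.00, 0.02, 0.04, 0.06],
    [0.02, 0.04, 0.06, 0.09],
    [-0.01, 0.01, 0.03, 0.05],
    [0.01, 0.02, 0.03, 0.04],
])
mu = np.array([possibilistic_mean(*row) for row in trapezoids])
K = 3  # cardinality limit: at most 3 assets held

def objective(w):
    w = np.clip(w, 0, None)
    w = w / w.sum() if w.sum() > 0 else w
    cardinality_violation = max(0, np.count_nonzero(w > 1e-3) - K)
    return -(mu @ w) + 10.0 * cardinality_violation   # maximize return, penalize violations

result = differential_evolution(objective, bounds=[(0, 1)] * len(mu), seed=0, maxiter=200)
weights = np.clip(result.x, 0, None)
weights /= weights.sum()
print(np.round(weights, 3))
```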

An Evaluation of the Financial Impact on Business Performance of the Adoption of E-Business via Blockchain Technology

Investors can learn a great deal about the health of a firm from its financial performance (FP). FP offers investors a view of the company's financial health and performance, as well as a basis for forecasting the stock's future performance. Certain criteria, including liquidity, ownership, maturity, and size, have been linked to financial success. Blockchain offers several benefits in the logistics business, including increased trust in the system owing to improved transparency and traceability, and cost savings from removing manual, paper-based administration. The study uses the FP-BCT technique, a new approach to measuring company performance. E-business, in turn, helps expand the scope, variety, and volume of data exchange. Improved processing capabilities affect the macroeconomic and financial environment, streamlining economic activity, ensuring the timely use of information, and decreasing costs.

Accountancy for E-Business Enterprises Based on Cyber Security

E-business enterprises (EBEs) may commit legal offenses through cybercrime perpetrated in the course of commercial activity. According to the findings, various obstacles can deter cybercrime throughout the accounting process. The study examined existing regulations governing accounting policy elements and identified the aspects that should be included in the administrative document for e-business enterprise accounting policies. To ensure their success, e-businesses must avoid cybercrime (CC), which damages the company's brand and diminishes client loyalty. According to the study's findings, using the information and control functions of accounting can help prevent cybercrime in the bookkeeping system by enriching the content of individual internal rules. The authors aimed to make online payments for EBEs as safe, easy, and fast as possible. However, the internet is known for making its users feel anonymous, and e-commerce (EC) transactions are vulnerable to cybercrime, resulting in considerable losses of money and personal information.

Effect Analysis of Nursing Intervention on Lower Extremity Deep Venous Thrombosis in Patients

In the modern era, nursing intervention reflects an increased commitment to patient quality and protection that allows nurses to make evidence-based healthcare decisions. Challenging patient conditions such as deep venous thrombosis (DVT) and respiratory embolisms (RE) are significant health problems that lead to severe post-operative injury and death. In this article, hybrid machine learning (HML) is applied to elderly patients with lower extremity fractures during the perioperative period to evaluate the clinical effectiveness of an early-stage nursing protocol for deep venous thrombosis for patients and nurses. The user interface displays a three-dimensional shape model of the examined vessels, with compression measurements mapped onto the surface as colors, together with a virtual image-plane representation of the DVT. The comprehension measures were validated by HML model segmentation experts and compared using paired f-tests, with the aim of reducing the incidence of lower extremity deep venous thrombosis among patients and nurses.

Clustering of COVID-19 Multi-Time Series-Based K-Means and PCA With Forecasting

The COVID-19 pandemic is one of the current universal threats to humanity, and the entire world is cooperating persistently to find ways to decrease its effect. Time series analysis plays a fundamental part in developing accurate models for predicting the future spread of this highly infectious virus. The authors discuss the goals of the study, the problem, definitions, and previous studies. They also address the theoretical aspects of clustering multiple time series using K-means and time-series clustering. Finally, they apply these methods, using ARIMA to build a prototype that gives specific predictions about the impact of the COVID-19 pandemic over horizons of 90 to 140 days. The modeling and prediction process uses the data set available from the Saudi Ministry of Health for Riyadh, Jeddah, Makkah, and Dammam over the previous four months, and the model is implemented and evaluated in Python. Based on this proposed method, the authors present their conclusions.
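A minimal sketch of such a pipeline is shown below, using placeholder data: PCA compresses each city's daily-case series, K-means groups similar cities, and ARIMA produces forecasts out to the 90-to-140-day horizon. The array shapes, ARIMA order, and synthetic counts are assumptions for illustration only.

```python
# Sketch: cluster multiple COVID-19 time series with PCA + K-means, then forecast with ARIMA.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
cities = ["Riyadh", "Jeddah", "Makkah", "Dammam"]
series = rng.poisson(lam=50, size=(4, 120)).astype(float)    # 4 cities x 120 days of cases

# Cluster the multi-time-series: reduce each series to a few principal components first.
components = PCA(n_components=2).fit_transform(series)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(components)
print(dict(zip(cities, labels)))

# Fit ARIMA on one city's series and forecast 90 to 140 days ahead.
model = ARIMA(series[0], order=(2, 1, 2)).fit()
forecast = model.forecast(steps=140)
print(forecast[89:140].round(1))                              # days 90 through 140
```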

Estimating the Number of Clusters in High-Dimensional Large Datasets

Clustering is a basic primitive of exploratory data analysis. To obtain valuable results, a key parameter of the clustering algorithm, the number of clusters, must be set appropriately. Existing methods for determining the number of clusters perform well on small low-dimensional datasets, but effectively determining the optimal number of clusters on large high-dimensional datasets remains a challenging problem. In this paper, the authors design a method for estimating the optimal number of clusters on large-scale high-dimensional datasets that overcomes the shortcomings of existing estimation methods and produces accurate estimates quickly. Extensive experiments show that it (1) outperforms existing estimation methods in accuracy and efficiency, (2) generalizes across different datasets, and (3) is suitable for high-dimensional large datasets.
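The abstract does not spell out the proposed estimator, so the sketch below shows only a standard baseline for comparison: scanning candidate cluster counts with silhouette scores on a random sample, which keeps the search tractable on large high-dimensional data. The dataset, sample size, and candidate range are illustrative assumptions.

```python
# Baseline (not the authors' method): choose the candidate k with the best silhouette
# score, computed on a random sample of a large high-dimensional dataset.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=20000, n_features=50, centers=6, random_state=0)
sample = X[np.random.default_rng(0).choice(len(X), size=2000, replace=False)]

scores = {}
for k in range(2, 11):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(sample)
    scores[k] = silhouette_score(sample, labels)

best_k = max(scores, key=scores.get)
print(best_k, round(scores[best_k], 3))
```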

A New Outlier Detection Algorithm Based on Fast Density Peak Clustering Outlier Factor

Outlier detection is an important field in data mining, with applications in fraud detection, fault detection, and other areas. This article addresses two problems of the density peak clustering algorithm: it requires manual parameter setting, and its time complexity is high. First, k-nearest-neighbor density estimation replaces the density estimate of density peak clustering, using a KD-tree index structure to compute each data object's k nearest neighbors. Cluster centers are then selected automatically using the product of density and distance. In addition, the central relative distance and the fast density peak clustering outlier factor are defined to characterize the degree to which a data object is an outlier, and an outlier detection algorithm is devised based on this factor. Experiments on artificial and real data sets validate the algorithm, confirming its effectiveness and time efficiency compared with several conventional and recent algorithms.
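The main ingredients can be sketched as follows; the outlier score used here is illustrative and not the article's exact outlier-factor formula: a KD-tree-based k-nearest-neighbor density estimate, the distance to the nearest denser point, automatic center selection by the density-distance product, and a simple score that grows for sparse, isolated points.

```python
# Illustrative density-peak-style computation with a KD-tree kNN density estimate.
import numpy as np
from sklearn.neighbors import KDTree

def density_peak_scores(X, k=10, n_centers=3):
    tree = KDTree(X)
    dist, _ = tree.query(X, k=k + 1)                 # column 0 is the point itself
    rho = 1.0 / (dist[:, 1:].mean(axis=1) + 1e-12)   # kNN density estimate

    n = len(X)
    delta = np.zeros(n)
    order = np.argsort(-rho)                          # indices from densest to sparsest
    pairwise = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    delta[order[0]] = pairwise[order[0]].max()
    for i, idx in enumerate(order[1:], start=1):
        delta[idx] = pairwise[idx, order[:i]].min()   # distance to the nearest denser point

    gamma = rho * delta                               # centers: large density AND large distance
    centers = np.argsort(-gamma)[:n_centers]
    outlier_factor = delta / (rho + 1e-12)            # crude score: sparse and isolated points
    return centers, outlier_factor

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(8, 1, (100, 2)), [[20.0, 20.0]]])
centers, scores = density_peak_scores(X)
print(centers, np.argsort(-scores)[:3])               # the injected point should score highest
```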

RETAD

Due to the rapid advancement of wireless sensor and location technologies, a large amount of mobile agent trajectory data has become available. Intelligent city systems and video surveillance both benefit from trajectory anomaly detection. The authors propose an unsupervised reconstruction error-based trajectory anomaly detection (RETAD) method for vehicles to address the issues of conventional anomaly detection methods, which struggle to extract features, are susceptible to overfitting, and detect anomalies poorly. RETAD reconstructs the original vehicle trajectories with an autoencoder based on recurrent neural networks; by minimizing the gap between the reconstruction results and the original inputs, the model learns the movement patterns of normal trajectories. Trajectories whose reconstruction error exceeds the anomaly threshold are flagged as anomalous. Experimental results demonstrate that RETAD outperforms traditional distance-based, density-based, and machine learning classification algorithms on multiple metrics.
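A minimal sketch of the idea, under assumed layer sizes, trajectory encoding, and threshold rule, is an LSTM autoencoder whose per-trajectory reconstruction error is compared against a high percentile of the errors observed on normal data.

```python
# Sketch of reconstruction-error-based trajectory anomaly detection (assumptions:
# fixed-length (x, y) trajectories, LSTM autoencoder, percentile threshold).
import torch
import torch.nn as nn

class TrajectoryAutoencoder(nn.Module):
    def __init__(self, n_features=2, hidden=32):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_features)

    def forward(self, x):                              # x: (batch, seq_len, 2) normalized points
        _, (h, _) = self.encoder(x)
        z = h[-1].unsqueeze(1).repeat(1, x.size(1), 1)  # repeat latent code per time step
        dec, _ = self.decoder(z)
        return self.out(dec)

def reconstruction_errors(model, trajs):
    with torch.no_grad():
        return ((model(trajs) - trajs) ** 2).mean(dim=(1, 2))

# Usage: after fitting on normal trajectories (training loop omitted), set the threshold
# as a high percentile of their errors and flag anything above it.
model = TrajectoryAutoencoder()
normal = torch.randn(128, 50, 2)
threshold = reconstruction_errors(model, normal).quantile(0.99)
candidate = torch.randn(4, 50, 2)
is_anomaly = reconstruction_errors(model, candidate) > threshold
print(is_anomaly)
```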