July 19 - 20, 2025, Toronto, Canada
Roy Hisar Martahan Simanungkalit, Sapto Jumono, Muhammad Fachruddin Arrozi Adhikara, Agus Munandar & Jaka Suharna
Department of Economics and Business, Esa Unggul University, Jakarta, Indonesia
Green Digital Finance (GDF) combines digital technology with green finance goals, leveraging technologies such as blockchain and artificial intelligence to support sustainable investment and financial practices. This article presents a systematic review of the literature on GDF, focusing on recent developments, challenges, and opportunities. The research method uses a Systematic Literature Review (SLR) with PRISMA guidance, including identification, screening, eligibility, and inclusion of articles from various academic databases. The results of the SLR show a significant increase in research on GDF over the past five years, with digital technologies such as blockchain, big data, fintech, and AI being the most frequently applied. The study also highlights challenges such as the lack of adequate regulation, resistance to technological change, and security concerns. In conclusion, GDF has great potential to improve ecological sustainability, but it requires the establishment of an appropriate regulatory framework, comprehensive education initiatives, and multi-sectoral collaboration.
Green Digital Finance (GDF), Digital Technology, Green Finance, Blockchain, Big Data, Artificial Intelligence (AI), Green Investment, Environmental Sustainability, Systematic Literature Review (SLR)
Fuyao Ling1, Andrew Park2, 1No.2 High School of East China Normal University, 555 Chenhui Rd, Pudong, Shanghai, China, 201203, 2California State Polytechnic University, Pomona, CA, 91768
BridgeGap is a mobile platform designed to connect Bridge players based on real-time behavioral insights. The project addresses the challenge of partner matching and community-building in modern Bridge by leveraging in-game data such as bidding frequency, collaboration scores, and player performance [1]. The system includes three major components: matchmaking, gameplay session management, and a visual profile with radar chart analytics. Firebase services are used for user authentication, live data updates, and stat storage. Two experiments were conducted: one to validate behavioral preference detection, and another to test engagement in competitive scenarios. Results showed high accuracy in classifying player types and valuable insights for user retention. Compared to traditional studies using surveys and interviews, BridgeGap’s method is more scalable and immediate, offering real-time personalization and data-driven matchmaking [2]. Despite limitations in subjective metric definition and small sample size, BridgeGap proves to be a powerful step toward a smarter, more connected Bridge community.
Bridge Game, Matchmaking System, Gameplay Analytics, Real-Time Data
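As a purely illustrative sketch of behavior-based matchmaking of the kind described above, the snippet below compares two players by cosine similarity over normalized in-game metrics. The metric names and the similarity measure are assumptions for illustration, not BridgeGap's actual schema or algorithm.

# Hypothetical matchmaking score: players compared by cosine similarity over
# normalized behavioral metrics (field names are illustrative only).
import numpy as np

def behavior_vector(stats):
    # stats: dict with illustrative keys, each scaled to [0, 1]
    return np.array([stats["bidding_freq"], stats["collab_score"], stats["win_rate"]])

def match_score(a, b):
    va, vb = behavior_vector(a), behavior_vector(b)
    return float(np.dot(va, vb) / (np.linalg.norm(va) * np.linalg.norm(vb) + 1e-9))

alice = {"bidding_freq": 0.7, "collab_score": 0.9, "win_rate": 0.55}
bob   = {"bidding_freq": 0.6, "collab_score": 0.8, "win_rate": 0.50}
print(round(match_score(alice, bob), 3))  # higher score = more compatible partners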
Zhen Zhang, Bradley Department of Electrical and Computer Engineering, Virginia Polytechnic Institute and State University, Blacksburg, Virginia, USA
This study proposes a behavior-specific filtering method to improve behavior classification accuracy in Precision Livestock Farming (PLF). While traditional filtering methods, such as wavelet denoising, achieved an accuracy of 91.58%, they apply uniform processing to all behaviors. In contrast, the proposed behavior-specific filtering method combines wavelet denoising with a low-pass filter, tailored to active and inactive pig behaviors, and achieved a peak accuracy of 94.73%. These results highlight the effectiveness of behavior-specific filtering in enhancing animal behavior monitoring, supporting better health management and farm efficiency.
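The sketch below illustrates the general idea of behavior-specific filtering: wavelet denoising applied to all signals, plus an additional low-pass stage for segments labeled as inactive behavior. The wavelet family, threshold rule, and cutoff frequency are placeholder choices, not the study's exact parameters.

# Illustrative behavior-specific filtering: wavelet denoising everywhere,
# extra low-pass smoothing only for inactive behavior segments.
import numpy as np
import pywt
from scipy.signal import butter, filtfilt

def wavelet_denoise(x, wavelet="db4", level=3):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745           # noise estimate
    thr = sigma * np.sqrt(2 * np.log(len(x)))                 # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]

def low_pass(x, cutoff_hz, fs_hz, order=4):
    b, a = butter(order, cutoff_hz / (fs_hz / 2), btype="low")
    return filtfilt(b, a, x)

def behavior_specific_filter(x, behavior, fs_hz=50.0):
    y = wavelet_denoise(x)
    if behavior == "inactive":             # inactive behaviors get stronger smoothing
        y = low_pass(y, cutoff_hz=2.0, fs_hz=fs_hz)
    return y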
Meriç Demirörs, Ahmet Murat Özbayoğlu, and Toygar Akgün, TOBB University of Economics and Technology, Ankara, Turkey
The proliferation of 5G technologies and the vast deployment of Internet of Things (IoT) devices have heightened the demand for optimal spectrum utilization, necessitating robust spectrum management strategies. In this context, an efficient energy detector employing wideband spectrum sensing within a 5G environment is essential for identifying underutilized frequency bands suitable for cognitive radio applications across multiple sub-bands. While cooperative spectrum sensing (CSS) can enhance the detection capabilities of energy detectors amidst noise uncertainty, its performance often deteriorates under low signal-to-noise ratio (SNR) conditions. This study proposes an improved CSS framework that combines Maximal Ratio Combining (MRC) with the K-out-of-N fusion rule to address noise uncertainty in a complex Gaussian environment across multiple sub-bands in cooperative wideband spectrum sensing. Comparative performance analysis confirms that this integrated approach enhances detection probability and maintains a low false alarm rate across various low SNR scenarios, significantly outperforming traditional cooperative and non-cooperative wideband spectrum sensing methods. These results highlight the potential for advancing cognitive radio technologies by optimizing detection algorithms to improve performance under challenging conditions.
Signal-to-Noise Ratio, Maximal Ratio Combining, Wideband Spectrum Sensing, Energy Detection, K-out-of-N Fusion Rule
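The following minimal numeric sketch (not the paper's exact signal model) shows how the two building blocks fit together: each secondary user forms an energy statistic after Maximal Ratio Combining over its branches, compares it to a threshold, and the fusion center applies the K-out-of-N rule over the local decisions. SNR, threshold, and user counts are placeholder values.

# Toy cooperative sensing: MRC energy detection per user, K-out-of-N fusion.
import numpy as np

rng = np.random.default_rng(0)

def mrc_energy(branch_samples, branch_snrs):
    # MRC weights each branch by its SNR before energy detection
    w = np.sqrt(branch_snrs) / np.sqrt(branch_snrs.sum())
    combined = (w[:, None] * branch_samples).sum(axis=0)
    return np.mean(np.abs(combined) ** 2)

def local_decision(signal_present, n=256, branches=4, snr_db=-10.0, thr=1.2):
    snr = 10 ** (snr_db / 10)
    noise = (rng.normal(size=(branches, n)) + 1j * rng.normal(size=(branches, n))) / np.sqrt(2)
    s = np.sqrt(snr) * np.exp(1j * 2 * np.pi * 0.1 * np.arange(n)) if signal_present else 0
    return mrc_energy(s + noise, np.full(branches, snr)) > thr

def k_out_of_n(decisions, k):
    return sum(decisions) >= k

decisions = [local_decision(signal_present=True) for _ in range(10)]   # N = 10 users
print("Primary user detected:", k_out_of_n(decisions, k=6))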
Blessing C. Dike and Cajetan M. Akujuobi, Center of Excellence for Communication Systems Technology Research, ECE Dept., Prairie View A&M University, Prairie View, Texas, USA
Developments in the field of generative AI have made it extremely difficult to distinguish artificially generated content from real content. As a result, reliable detection of such content has become more important. The topic of this research is detecting speech generated by future generative-AI models in unknown languages. It focuses on answering the question: with what information does a model distinguish fake audio from real audio, does it learn how spoken languages sound, or does it learn a specific trait of generated speech waves? Multiple models are trained on various datasets to detect synthetic audio signals generated by generative-AI models. After multiple training and testing sessions, the best test accuracy scores are 94.92% for a known language from an unknown generative-AI model, 98.44% for an unknown language from a known generative-AI model, and 95.18% for an unknown language from an unknown generative-AI model.
CNN, Bispectrum
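As a rough sketch of the kind of two-dimensional input feature a CNN could consume for this task, the snippet below computes a direct bispectrum estimate of an audio frame. The segment length, window, and averaging scheme are illustrative; the study's exact feature extraction may differ.

# Direct bispectrum estimate B(f1, f2) = E[X(f1) X(f2) X*(f1+f2)] over segments.
import numpy as np

def bispectrum(x, nfft=128):
    segs = x[: len(x) // nfft * nfft].reshape(-1, nfft)
    B = np.zeros((nfft, nfft), dtype=complex)
    idx = np.arange(nfft)
    f1, f2 = np.meshgrid(idx, idx)
    for seg in segs:
        X = np.fft.fft(seg * np.hanning(nfft))
        B += X[f1] * X[f2] * np.conj(X[(f1 + f2) % nfft])
    return np.abs(B) / len(segs)

audio = np.random.randn(16000)          # stand-in for one second of 16 kHz audio
feat = np.log1p(bispectrum(audio))      # log-scaled 128x128 map fed to the CNN
print(feat.shape)                        # (128, 128)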
Xinyuan Qi1, Garret Washburn2, 1Aquinas International Academy, 3200 Guasti Rd, Suite 100, Ontario, CA 91761, 2California State Polytechnic University, Pomona, CA, 91768
As long as online clothes shopping has existed, the issue of seeing how a clothing item looks on a potential customer has always followed. The method proposed in this paper is the FlashFit mobile application and smart mirror [1]. FlashFit is a system which allows users to upload images of their own clothing items, or ones they have found online, and have them dynamically resized and fitted onto their image via a recording from the mobile app or in real time with the smart mirror. The major technologies utilized to create the FlashFit system include MediaPipe for human landmark recognition, the Flutter framework for cross-platform mobile app creation, and a Raspberry Pi for the smart mirror [2]. During the development process, the major challenges we faced included resizing the clothing item given the position of the user in the camera view, as well as creating PNG images from user-uploaded clothing item images. Additionally, we ran multiple experiments to verify the FlashFit system's consistency, in which the app performed very well. As a result, the FlashFit system is a modern solution to the online clothing market's lack of fitting rooms and is reliable compared to other solutions currently available.
Virtual Fitting Room, Human Landmark Recognition, Smart Mirror, Online Clothing Retail.
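A hedged sketch of the resizing idea follows: estimate the user's shoulder width from MediaPipe pose landmarks and scale the garment PNG to match. The widening factor and overall approach are assumptions for illustration rather than FlashFit's exact fitting logic.

# Scale a garment overlay from MediaPipe shoulder landmarks.
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

def shoulder_width_px(bgr_image):
    with mp_pose.Pose(static_image_mode=True) as pose:
        res = pose.process(cv2.cvtColor(bgr_image, cv2.COLOR_BGR2RGB))
    if not res.pose_landmarks:
        return None
    lm = res.pose_landmarks.landmark
    l, r = lm[mp_pose.PoseLandmark.LEFT_SHOULDER], lm[mp_pose.PoseLandmark.RIGHT_SHOULDER]
    h, w = bgr_image.shape[:2]
    return abs(l.x - r.x) * w                 # normalized x coordinates to pixels

def fit_garment(garment_rgba, user_bgr, widen=1.4):
    width = shoulder_width_px(user_bgr)
    if width is None:
        return None
    scale = (width * widen) / garment_rgba.shape[1]   # garment slightly wider than shoulders
    new_size = (int(garment_rgba.shape[1] * scale), int(garment_rgba.shape[0] * scale))
    return cv2.resize(garment_rgba, new_size, interpolation=cv2.INTER_AREA)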
Matthew Zhang1, Carlos Gonzalez2, 1Northwood High School, 4515 Portola Pkwy, Irvine, CA 92620, 2California State Polytechnic University, Pomona, CA, 91768
Prolonged computer use fuels a growing epidemic of poor posture and related musculoskeletal issues, impacting quality of life and productivity. Addressing this, we propose a lightweight, real-time posture monitoring system designed for continuous background operation [1]. Utilizing Google's MediaPipe for pose detection and a heuristic-based scoring algorithm, our program analyzes key metrics like neck and torso angles [2]. The core challenge was objectively defining "good" vs. "bad" posture, which we addressed empirically with weighted metrics and an optimal threshold of 60.0. Experiments using a 10,000-pose dataset demonstrated 83.33% accuracy, with torso and neck angles proving most influential. This tool provides personalized end-of-day reports, leveraging AI (e.g., OpenAI's ChatCompletion API) to offer evidence-based recommendations [3]. Unlike specialized hardware or exercise-specific solutions, our camera-based application offers an accessible, continuous, and preventive approach for all prolonged computer users, fostering healthier digital habits.
Posture Monitoring, MediaPipe Pose Detection, Musculoskeletal Health, Heuristic Scoring Algorithm.
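The sketch below illustrates the heuristic scoring idea: compute neck and torso angles from pose landmarks and combine them into a weighted score compared against the 60.0 threshold mentioned in the abstract. The weights, reference points, and score mapping are assumptions, not the authors' exact formula.

# Weighted posture score from neck and torso angles (image coordinates, y down).
import numpy as np

def angle_deg(a, b):
    # angle between the vector a->b and the vertical (upright) axis, in degrees
    v = np.array(b) - np.array(a)
    vertical = np.array([0.0, -1.0])
    cosang = np.dot(v, vertical) / (np.linalg.norm(v) + 1e-9)
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

def posture_score(ear, shoulder, hip, w_neck=0.6, w_torso=0.4):
    neck_angle = angle_deg(shoulder, ear)    # forward head tilt
    torso_angle = angle_deg(hip, shoulder)   # slouching of the upper body
    # smaller angles mean closer to upright; map to a 0-100 score
    return 100 - (w_neck * neck_angle + w_torso * torso_angle)

score = posture_score(ear=(0.52, 0.30), shoulder=(0.50, 0.45), hip=(0.50, 0.70))
print("good posture" if score >= 60.0 else "bad posture", round(score, 1))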
Adrian E. Conway, Assured Networking Solutions, Weston, MA 02493, U.S.A
A physical unclonable function (PUF)-based method is presented for continuously authenticating the physical hardware provenance of data that a sensor streams over time to a receiving device. In contrast to existing PUF-based authenticated remote sensing techniques, the method does not make use of any PUF challenge-response pair (CRP) databases or PUF models. The new method, which we call a Sensor Ratchet, is based on the previously developed PUF-based CRP Ratchet protocol for continuously mutually authenticating a pair of devices over time. As such, the Sensor Ratchet inherits the lightweight computational requirements and secure properties of the CRP Ratchet. Three variations of the Sensor Ratchet are developed: a simplex form of authenticated data transfer in which the receiver is the initiator, a simplex form in which the sensor is the initiator, and a half-duplex form that additionally transmits physically authenticated information, such as control signals, to a sensor.
Sensor data physical provenance, physical authentication, ratchet protocol, physical unclonable function, physical unclonable protocol
Krishnageetha Karuppasamy, Abinash Borah, Anirudh Paranjothi, and Johnson P Thomas
The Internet of Vehicles (IoV) offers various services for road safety and user comfort. However, they face security vulnerabilities such as false data injections which need to be mitigated for public safety. The security solutions for IoV should have minimal processing delay and be scalable to deal with the large-scale IoV. While classical machine learning techniques have been adopted for malicious node detection in IoV, these solutions face computational challenges and scalability limitations. To deal with these challenges, in this paper, we propose a novel quantum-based MaxCut graph detection mechanism for identifying malicious nodes transmitting false messages in IoV. As validated by the performance evaluation results, the proposed quantum-based detection approach offers significantly lower data processing delay compared to the classical approach, especially as the data size increases.
Internet of vehicles, security, false message detection, quantum
Zexian Yang1, Jonathan Thamrun2, 1Rutgers Preparatory School, 1345 Easton Ave, Somerset, NJ 08873, 2California State Polytechnic University, Pomona, CA, 91768
My project is DDoS analysis software. The problem it tries to solve is providing a means of defense against DDoS attacks [1], since DDoS attacks are simple to launch while being highly destructive to websites or servers that do not utilize any protection methods. My software can detect whether a PCAP file (internet packet capture file) is malicious or benign based on a trained AI model [2]. The AI algorithm is called Random Forest. The logic behind it is that it builds multiple decision trees made of different branches. Each branch results in a true or false output, and based on the outputs of its branches, a decision tree ends with a result of whether the connection is malicious or benign. The model contains multiple decision trees that vote on the result, with the majority deciding [3]. Another part of this software is the file selection methods, which detect the PCAP file that the user has chosen and send it to the algorithm to determine the result. The last part is the user interface, which provides a graphical and beginner-friendly way of using the software. I also ran an experiment with multiple PCAP files whose results I already knew, to see the overall accuracy of the algorithm. Based on the experiment, I found that the software has issues determining whether a connection is malicious if it uses VPNs or evasion techniques [4]. My idea is something people should start using because it can protect users' websites in a cost-efficient and simple way.
DDoS Detection, PCAP File Analysis, Random Forest Algorithm, Network Security Tool.
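The snippet below is a minimal sketch of the classification stage: a Random Forest trained on flow-level features that might be extracted from PCAP files. The CSV name and feature columns are placeholders, since the abstract does not specify its exact feature set.

# Random Forest over per-flow features (placeholder file and column names).
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

df = pd.read_csv("pcap_flow_features.csv")            # one row per capture/flow
X = df[["packet_rate", "byte_rate", "unique_src_ips", "syn_ratio"]]
y = df["label"]                                       # 1 = malicious, 0 = benign

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
clf = RandomForestClassifier(n_estimators=200, random_state=42)  # trees vote by majority
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))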
Mohammad Shadeed1 and Majdi Owda2, 1Department of Computer Engineering, University of Fairfax, VA, USA, 2Department of Computer Engineering, Arab American University, West Bank, Palestine
Timestamps have proven to be an important and, to some extent, appropriate source of guidance for investigators in discovering computer crimes, and computer crimes can be detected in several ways. Criminals always try to hide their identity and the place and date of the crime they committed. Previous research focused on the importance of timestamps; in this research, we introduce new elements ($LogFile, $MFT, $MFTMirr, and Windows event logs) that contain artifacts about timestamps in the NTFS file system and provide valuable information during investigations for detecting anomalies in NTFS file system timestamps, which is very important, as such anomalies can be used as strong proof of a computer crime. This paper deals with the reliability check of artifacts used to detect anomalies in NTFS file system timestamps and the ability of these artifacts to keep records for users.
NTFS File System, Anomaly Detection, $LogFile, Timestamp, $MFT.
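For illustration, the sketch below applies two widely used timestomping heuristics on values parsed from an $MFT record; these are standard forensic checks and not necessarily the exact tests proposed in the paper: a $STANDARD_INFORMATION creation time earlier than the $FILE_NAME creation time, or fractional seconds truncated to zero, both suggest timestamp manipulation.

# Flag common NTFS timestamp anomalies from $SI and $FN creation times.
from datetime import datetime

def timestamp_anomalies(si_created: datetime, fn_created: datetime):
    findings = []
    if si_created < fn_created:
        findings.append("$SI created before $FN created (possible timestomping)")
    if si_created.microsecond == 0:
        findings.append("$SI creation time has zeroed sub-second precision")
    return findings

si = datetime(2024, 1, 5, 10, 0, 0, 0)           # values parsed from an $MFT record
fn = datetime(2024, 3, 9, 14, 22, 31, 501200)
print(timestamp_anomalies(si, fn))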
Yiqun Zhao1, Jonathan Sahagun2, 1Lexington High School, 251 Waltham St, Lexington, MA 02421, 2California State Polytechnic University, Pomona, CA, 91768
In hospital settings, efficient wheelchair management is crucial for optimizing patient transport. Traditional methods, including manual tracking and GPS, often fall short because of indoor interference and high maintenance costs. This research presents a solution to this challenge utilizing Bluetooth beacon technology to provide real-time localization of wheelchairs within hospital environments. By integrating Bluetooth beacons with a centralized mobile application, our system offers precise and continuous updates on wheelchair availability and location. This approach addresses common drawbacks of existing systems, such as the range limitations of RFID and infrared systems and the inaccuracy of GPS indoors [1]. Through analysis and testing in simulated hospital conditions, the proposed system demonstrates significant improvements in efficiency, accuracy, and user experience. The test results suggest a cost-effective alternative for improving resource management and patient care in healthcare facilities.
Healthcare, Wheelchair, Real-Time Tracking
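The sketch below shows one common way beacon RSSI readings are turned into an approximate distance and a nearest-zone estimate, using the log-distance path-loss model. The calibration values (measured power at 1 m, path-loss exponent) are placeholders and would be fitted per environment; this is an illustration, not the system's exact localization method.

# RSSI to approximate distance, plus a strongest-beacon zone estimate.
def rssi_to_distance(rssi_dbm, measured_power=-59.0, path_loss_exponent=2.0):
    # measured_power: expected RSSI at 1 m from the beacon
    return 10 ** ((measured_power - rssi_dbm) / (10 * path_loss_exponent))

def nearest_beacon(readings):
    # readings: {beacon_id: rssi_dbm}; the strongest beacon marks the wheelchair's zone
    return max(readings, key=readings.get)

readings = {"ward_A_entrance": -68, "ward_A_corridor": -75, "elevator_lobby": -82}
print(nearest_beacon(readings), round(rssi_to_distance(readings["ward_A_entrance"]), 1), "m")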
Jiajin Li1, Rodrigo Onate2, 1Marlborough School, 250 South Rossmore Avenue, Los Angeles, CA 90004, 2California State Polytechnic University, Pomona, CA, 91768
Barn management often struggles with poor communication, outdated scheduling, and scattered information, leading to confusion and missed lessons. My app, HorsiTask, solves this problem by offering a clear, all-in-one platform for riders, trainers, and grooms to manage schedules, tasks, and horse care in real time [6]. The app uses Firebase for data syncing and Flutter for the interface. Two key challenges were preventing double bookings and ensuring clean data input [7]. We addressed these through experiments: one tested the scheduling system under stress, and another tested form validation by submitting flawed entries. Both showed that HorsiTask can block invalid inputs and maintain accurate booking records. The most important result was that our system held up under pressure and caught most errors. HorsiTask improves on past barn solutions by reducing miscommunication and confusion through automation and clear design. It's a tool that makes barn life easier and more efficient for everyone involved.
Horses, Equestrian, Management Application, All-in-One
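A simple sketch of the double-booking check follows: a new lesson is rejected if its time range overlaps any existing booking for the same resource. The data layout is illustrative; in the app, bookings are stored and synced through Firebase.

# Reject bookings that overlap an existing reservation.
from datetime import datetime

def overlaps(start_a, end_a, start_b, end_b):
    return start_a < end_b and start_b < end_a

def can_book(existing, new_start, new_end):
    return not any(overlaps(b["start"], b["end"], new_start, new_end) for b in existing)

bookings = [{"start": datetime(2025, 7, 19, 9, 0), "end": datetime(2025, 7, 19, 10, 0)}]
print(can_book(bookings, datetime(2025, 7, 19, 9, 30), datetime(2025, 7, 19, 10, 30)))  # False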
Shah Mehmood Wagan, Xinli Zhang and Sidra Sidra, Business School, Sichuan University, Chengdu, China
The study focuses on how artificial intelligence innovations in the workplace affect staff productivity and overall business performance. It attempts to uncover the mechanism behind technology's impact on business operations and labor productivity. A quantitative research technique was used in this study with SmartPLS, based on a sample of 350 small and medium-sized companies, of which the first two had the highest adoption rates. The study finds that perceived usefulness, as well as ease of use, has a major effect on the adoption of AI solutions. Technology adoption is one mechanism through which raising the quality of employees' work increases the company's productivity. Positive user experience increases business performance; accordingly, management's understanding of employees and customer satisfaction are positively related, and these points are illustrated in this paper. Because the study is mainly based on self-reported data, it may be subject to bias. Future studies could thoroughly investigate the long-term impacts of technology adoption on productivity in various sectors of the economy. A company can foster the productivity of its workforce and boost performance by making user-friendly AI tools available to employees and by providing them with training. This investigation contributes to understanding how AI technology can significantly increase organizational performance, drawing on theoretical frameworks such as the Technology Acceptance Model (TAM) and the Resource-Based View (RBV).
Artificial Intelligence; Employee productivity; Business Performance; Perceived Usefulness; Customer Satisfaction
Mohamed Yacine DJEMA, Hacene FOUCHAL, Olivier FLAUZAC, LAB-I*, University of Reims Champagne-Ardenne, France
Large language models (LLMs) remain vulnerable to adversarial prompting, yet state-of-the-art certified defenses such as Erase-and-Check (EC) are too slow for real-time use because they must re-evaluate hundreds of prompt variants. We investigate whether a single, attribution-guided deletion can approximate EC’s robustness at a fraction of the cost. Two variants are proposed. Method A keeps an external safety filter but replaces EC’s exhaustive search with one SHAP/feature-ablation pass, erasing the k most influential tokens before a single re-check. Method B removes the filter entirely: we compute SHAP scores inside the generator (Vicuna-7B), excise the top-r% tokens once, and re-generate. On the AdvBench suite with Greedy-Coordinate-Gradient suffixes (|α| ≤ 20), Method A detects up to 75% of attacks when 55% of tokens are removed—two forward passes instead of EC’s linear-to-combinatorial explosion—while SHAP consistently outperforms feature ablation. Method B, guided solely by SHAP, cuts harmful completions from 100% to 5% after deleting the top-20% tokens and sustains single-digit harm rates for 15–45% deletion budgets, narrowing EC’s safety gap yet adding negligible latency. An explainer comparison shows SHAP recovers nearly every adversarial token within the top-5% importance ranks, whereas LIME is slightly noisier and feature ablation trails far behind. These findings expose a tunable speed–safety trade-off: attribution-guided, single-pass excision delivers large latency gains with a bounded drop in worst-case guarantees. Careful explainer choice and deletion budgeting are critical, but attribution can transform explainability from a diagnostic tool into the backbone of practical, low-latency LLM defenses.
Large Language Models, LLMs, Adversarial Prompting, Jailbreak Attacks, Explainable AI, Greedy Coordinate Gradient, Safety Certification and Robustness.
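The following schematic sketch captures the spirit of Method A, using leave-one-out feature ablation over an external safety filter as the attribution signal (the paper's preferred explainer is SHAP): rank tokens by how much their removal lowers the filter's harmfulness score, excise the top fraction once, and re-check a single time. safety_score() is a placeholder for the external filter; the deletion budget mirrors the 55% figure above but is otherwise arbitrary here.

# Single-pass attribution-guided excision followed by one re-check.
def safety_score(tokens):                 # placeholder: higher = more likely harmful
    raise NotImplementedError

def ablation_importance(tokens):
    base = safety_score(tokens)
    # importance = how much removing each token lowers the harmfulness score
    return [base - safety_score(tokens[:i] + tokens[i + 1:]) for i in range(len(tokens))]

def erase_and_recheck(prompt, k_fraction=0.55, threshold=0.5):
    tokens = prompt.split()
    scores = ablation_importance(tokens)
    k = max(1, int(k_fraction * len(tokens)))
    keep = sorted(range(len(tokens)), key=lambda i: scores[i], reverse=True)[k:]
    cleaned = " ".join(tokens[i] for i in sorted(keep))
    return safety_score(cleaned.split()) < threshold, cleaned   # single re-check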
Serena Pei, School of Engineering, MIT, Cambridge, MA, USA
We present a lightweight pipeline using Stable Diffusion v1.5 [7] for generating anatomically accurate brain MRI images depicting tumors. Using a public dataset of 1,426 glioma MRI slices from 233 patients [5,2] we condition image generation on both descriptive text prompts (text input) and visually transformed grayscale MRI slices (visual input). We explore three visual transforms: Gaussian-blurring, checkerboard-masked, and edge-mapped. Inspired by ControlNet [9], our method supports dual conditioning during both training and inference but avoids duplicating the U-Net architecture—significantly reducing memory overhead. This enables training on standard GPUs such as a single 15GB T4 in Google Colab. To assess image realism on synthesized images, we use both qualitative inspection and Fréchet Inception Distance (FID). This model is an important step towards building more flexible, privacy-preserving methods for creating high-quality medical images in low-data, low-memory settings— with potential applications in rare disease research and AI-driven healthcare.
Stable Diffusion, ControlNet, Healthcare, Medical Imaging.
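The sketch below illustrates the three visual conditioning transforms named in the abstract, applied to a grayscale MRI slice with OpenCV. The kernel size, tile size, Canny thresholds, and file name are illustrative choices rather than the paper's exact settings.

# Gaussian-blurred, checkerboard-masked, and edge-mapped conditioning inputs.
import cv2
import numpy as np

def gaussian_blurred(slice_gray, ksize=15):
    return cv2.GaussianBlur(slice_gray, (ksize, ksize), 0)

def checkerboard_masked(slice_gray, tile=32):
    h, w = slice_gray.shape
    yy, xx = np.indices((h, w))
    mask = ((yy // tile + xx // tile) % 2).astype(slice_gray.dtype)
    return slice_gray * mask              # hide alternating tiles

def edge_mapped(slice_gray, low=50, high=150):
    return cv2.Canny(slice_gray, low, high)

mri = cv2.imread("glioma_slice.png", cv2.IMREAD_GRAYSCALE)   # placeholder file name
conditions = [gaussian_blurred(mri), checkerboard_masked(mri), edge_mapped(mri)]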
L. De Grandis1, 2, F. Granata1, 2, A. Lanza2, D. Costa2, E. Oro3, and M. Ruffolo2, 1University of Modena, 2Altilia.ai, 3National Research Council of Italy
The development of Large Language Models (LLMs) has opened impressive possibilities in text summarization, enabling the automatic generation of increasingly human-like synopses. However, modern documents are increasingly complex in both structure and content, with the inclusion of numerous semantically complex charts and tables. This complexity is particularly pronounced in financial documents, which pose challenges in multimodal fusion, alignment, and coherence, with users expecting information to be integrated from and presented across multiple modalities. This work proposes a pipeline that integrates document understanding (DU) models and prompting strategies as a solution for long financial document summarization. Moreover, through a markdown-structured document representation and the use of carefully designed prompt templates, we summarize the document’s content and augment it with images and tables, effectively achieving multimodal summarization with multimodal outputs (MSMO). Experimental results highlight improved factual accuracy and effective inclusion of multimodal information, advancing MSMO.
Long Document Summarization, Multimodal Document Summarization, Large Language Models, Financial Document Analysis.
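As a purely illustrative example of the pipeline's prompting stage, the snippet below builds a summarization prompt over a markdown-structured document in which tables are preserved and figures appear as placeholders to be referenced in the output. The template wording is an assumption, not the paper's actual prompt.

# Prompt construction over a markdown document with table/figure placeholders.
SUMMARY_TEMPLATE = """You are a financial analyst. Summarize the report below.
Keep markdown tables that support key figures and reference charts by their
placeholder id (e.g. [FIGURE 3]) where they belong in the summary.

# Document (markdown)
{document_markdown}

# Summary
"""

def build_prompt(document_markdown: str) -> str:
    return SUMMARY_TEMPLATE.format(document_markdown=document_markdown)

doc = "## Q4 Results\n| Revenue | 2023 | 2024 |\n|---|---|---|\n| Total | 1.2B | 1.5B |\n\n[FIGURE 3: revenue by segment]"
print(build_prompt(doc))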
F. M. Granata1, 2, L. De Grandis1, 2, A. Lanza2, D. Costa2, E. Oro3, and M. Ruffolo2, 1University of Modena, 2Altilia.ai, 3National Research Council of Italy
We propose an approach for leveraging large language models (LLMs) to answer questions over tables. Our work investigates both direct question answering and semantic parsing paradigms by converting natural language queries into SQL queries, Pandas code, and vanilla Python functions. Evaluations conducted on multiple benchmark datasets, including FinTabNetQA, VWTQ, VTabFact, and a proprietary financial dataset (FinTab-It), reveal that HTML table representations enhance model performance, with GPT-4o exhibiting consistent accuracy and Llama 3.1 8B demonstrating sensitivity to input format. Furthermore, fine-tuning Llama 3.1 using QLoRA in low-resource settings yields modest performance improvements. These findings highlight the potential of LLMs to simplify and improve table-based question answering, and they open avenues for future research on optimized fine-tuning and alternative intermediate representations.
Large Language Models, Question Answering, Tables, Semantic Parsing, HTML.
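A minimal sketch of the semantic-parsing route follows: the HTML table is loaded into a DataFrame, the model is asked to emit a single Pandas expression, and the expression is evaluated against the table. llm() is a placeholder for any chat-completion call (e.g., to GPT-4o or Llama 3.1); in practice the generated code should be sandboxed rather than passed to eval directly.

# Question answering over an HTML table via generated Pandas code.
import io
import pandas as pd

def llm(prompt: str) -> str:              # placeholder for the model call
    raise NotImplementedError

def answer_over_table(html_table: str, question: str):
    df = pd.read_html(io.StringIO(html_table))[0]
    prompt = (
        "Given a DataFrame `df` with columns "
        f"{list(df.columns)}, write one Pandas expression answering: {question}\n"
        "Return only the expression."
    )
    code = llm(prompt).strip()
    return eval(code, {"df": df, "pd": pd})   # evaluate the generated expression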
Praveen Jesudhas, Raghuveera T, and Shiney Jeyaraj, Department of Computer Science & Engineering, Anna University, Guindy, Chennai, India
Existing pre-impact fall detection systems have high accuracy; however, they are either intrusive to the subject or require heavy computational resources for fall detection, resulting in prohibitive deployment costs. These factors limit the global adoption of existing fall detection systems. In this work we present a pre-impact fall detection system that is both non-intrusive and computationally efficient at deployment. Our system utilizes video data of the locality available through cameras, thereby requiring no specialized equipment to be worn by the subject. Further, the fall detection system utilizes minimal fall-specific features and simple neural network models, designed to reduce the computational cost of the system. A minimal set of fall-specific features is derived from the skeletal data after observing the relative position of the human skeleton during a fall. These features are shown to have different distributions for fall and non-fall scenarios, demonstrating their discriminative capability. A Long Short-Term Memory (LSTM) based network is selected, and the network architecture and training parameters are designed after evaluating performance on standard datasets. In the pre-impact fall detection system the computation requirement is about 18 times lower than that of existing modules, with a comparable accuracy of 88%. Given the low computation requirements and high accuracy levels, the proposed system is suitable for wider adoption in engineering systems related to industrial and residential safety.
Fall detection systems, Computer vision, Action recognition, Sequence models, Neural net.
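For illustration, a small LSTM classifier over windows of per-frame skeletal feature vectors could look like the sketch below. The window length, feature count, and layer sizes are placeholders; the paper derives its own minimal set of fall-specific features and tunes its architecture on standard datasets.

# Compact LSTM classifier over skeletal feature windows.
import tensorflow as tf

WINDOW, N_FEATURES = 30, 6                 # 30 frames, 6 fall-specific features

model = tf.keras.Sequential([
    tf.keras.Input(shape=(WINDOW, N_FEATURES)),
    tf.keras.layers.LSTM(32),              # small recurrent layer keeps compute low
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(fall)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
# model.fit(X_train, y_train, ...) with X_train shaped (samples, WINDOW, N_FEATURES)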
Salman Bader Hazza, Ibrahim ALhajouj, Abdullah Nader Aldossary, Moteb Abdullah Aldossary, and Saud Alhajaj Aldossari, Department of Electrical Engineering, Prince Sattam bin Abdulaziz University, Wadi Aldawaser 11913, Saudi Arabia
With the upcoming farming revolution, this paper presents the development of an AI-powered agricultural monitoring system that integrates IoT devices with machine learning algorithms for real-time soil data analysis and nutrient prediction. A custom-built sensor-based device was designed to collect environmental data, including temperature, humidity, and essential soil nutrients (Nitrogen, Phosphorus, and Potassium). The collected data was preprocessed and used to train various supervised learning models, including Neural Networks, Random Forests, and CatBoost. These models were evaluated using key regression metrics such as MSE, MAE, and R² to determine their predictive accuracy. The results demonstrate that AI techniques can significantly enhance nutrient estimation and decision support in precision agriculture. This study contributes to the growing field of smart farming by offering a low-cost, sensor-integrated solution for sustainable agricultural monitoring.
IoT, AI, CatBoost, Random Forest, Neural Networks, Farming Technologies
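A minimal sketch of the evaluation loop is shown below: predict one nutrient (here nitrogen) from sensor readings with a Random Forest regressor and score it with MSE, MAE, and R². The CSV name and column names are placeholders for the device's logged data.

# Nutrient regression and the three evaluation metrics named above.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

df = pd.read_csv("soil_readings.csv")
X = df[["temperature", "humidity", "soil_moisture"]]
y = df["nitrogen"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
print("MSE:", mean_squared_error(y_te, pred),
      "MAE:", mean_absolute_error(y_te, pred),
      "R2:", r2_score(y_te, pred))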
Maher Rebai1 and Taha Houda2, 1De Vinci Higher Education, Av. Léonard de Vinci, 92400 Courbevoie, France, 2Prince Mohammad Bin Fahd University, 617 Al Jawharah, Khobar, Dhahran 34754, Saudi Arabia
Efficient path planning for mobile sensors is crucial in Wireless Sensor Networks (WSNs) to ensure optimal monitoring of coverage holes while considering real-world constraints. This work addresses the problem of determining an optimal trajectory for a mobile sensor that monitors coverage holes while efficiently navigating the sensing field. We introduce a novel Binary Integer Linear Programming (BILP) model that formulates the trajectory planning problem as a discrete optimization task, allowing for fine-grained control over sensor movement and coverage quality. The performance of the proposed approach is thoroughly evaluated through comparative experiments against both exact and heuristic methods from the literature. The obtained results confirm that the proposed approach outperforms recent existing methods.
Wireless Sensor Network (WSN), Linear programming.
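The sketch below is a greatly simplified binary integer program, not the paper's full trajectory model: binary variables pick candidate stops for the mobile sensor so that every coverage hole is monitored by at least one selected stop, while minimizing the number of stops. The hole/stop data is invented for illustration.

# Tiny coverage-hole BILP with PuLP.
import pulp

holes = ["h1", "h2", "h3"]
stops = ["s1", "s2", "s3", "s4"]
covers = {"s1": {"h1"}, "s2": {"h1", "h2"}, "s3": {"h3"}, "s4": {"h2", "h3"}}

prob = pulp.LpProblem("coverage_hole_monitoring", pulp.LpMinimize)
x = pulp.LpVariable.dicts("visit", stops, cat="Binary")

prob += pulp.lpSum(x[s] for s in stops)                          # objective: fewest stops
for h in holes:                                                  # each hole must be covered
    prob += pulp.lpSum(x[s] for s in stops if h in covers[s]) >= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([s for s in stops if x[s].value() == 1])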
Ahmed Khan Leghari1, Mads Johansen1, and Andreas Lyndrup Jensen2, 1Digital & Sustainable Innovation, FORCE Technology, Denmark, 2Liquid Flow & Type Approval, FORCE Technology, Denmark
The exponential growth of IoT applications and related hardware has dramatically changed the manual way of doing things. Metrology is one field witnessing widespread changes driven by the digitalization and connectivity of metrological hardware. Metrological devices need calibration at regular intervals to maintain their metrological accuracy. After calibration, each device receives a calibration certificate that documents the findings of the calibration and is valid until the next calibration. These calibration certificates are issued on physical paper or as PDF documents to be human readable. However, efforts are under way to establish a globally acceptable machine-readable calibration certificate called the DCC (Digital Calibration Certificate). The growing demand for digitalization and connectivity of metrological devices means that paper- and PDF-based calibration certificates will soon be a thing of the past and will eventually be replaced by machine-readable Digital Calibration Certificates (DCCs). DCCs transferred over the network could be exposed to security threats such as man-in-the-middle (MITM) attacks. This paper proposes a DCC exchange model that ensures the integrity, confidentiality, non-repudiation, and authenticity of DCCs sent over the network from a calibration lab to the organization that requested the equipment calibration.
Digital Calibration Certificate, DCC, man-in-the-middle (MITM), Calibration, IoT, Security, Integrity, Confidentiality.
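As a generic sign-then-encrypt sketch of the properties the exchange model targets, the snippet below signs the DCC with the calibration lab's RSA key (integrity, authenticity, non-repudiation) and encrypts it for transport (confidentiality). It is an illustration under the assumption of a pre-shared symmetric session key, not the paper's actual protocol.

# Sign a DCC with RSA-PSS, then encrypt certificate + signature for transit.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.fernet import Fernet

lab_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
dcc_xml = b"<dcc:digitalCalibrationCertificate>...</dcc:digitalCalibrationCertificate>"

signature = lab_key.sign(
    dcc_xml,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

session_key = Fernet.generate_key()                 # assumed shared with the customer out of band
ciphertext = Fernet(session_key).encrypt(dcc_xml + signature)

# Receiver side: decrypt, split off the 256-byte RSA-2048 signature, then verify.
plaintext = Fernet(session_key).decrypt(ciphertext)
received_dcc, received_sig = plaintext[:-256], plaintext[-256:]
lab_key.public_key().verify(
    received_sig, received_dcc,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
print("DCC verified")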
Amit Saxena, College of Science and Technology, Bellevue University, Bellevue, NE 68005, USA.
The application of predictive and prescriptive maintenance procedures in industry is revolutionizing mainstream manufacturing by cutting down on lost time and wasted resources. Traditional maintenance management techniques, such as reactive maintenance and preventive strategies, tend to cause system inefficiency, high operational costs, and various forms of failure. This paper uses data resulting from breakdown analysis to develop predictive maintenance models and prescriptive decision systems. The methodology incorporates machine-learning-based predictive analytics with knowledge of failure patterns. The analysis of historical breakdown records allows predictive models to achieve higher accuracy in forecasting potential failures by identifying key failure trends. The prescriptive maintenance program provides information regarding the best course of action to be taken for the equipment, minimizing operational disruptions and downtime. To test the efficiency of the proposed concept, experiments were conducted on real-world industrial datasets. The implications are fewer unplanned maintenance interventions, improved asset efficiency, and hence lower costs. This paper adds to the literature on predictive and prescriptive maintenance by highlighting how historical breakdown information can enhance predictive analysis while providing suggestions for industrial maintenance management. Future work will consider deep learning algorithms and real-time sensor integration for even better maintenance outcomes.
Predictive maintenance, prescriptive maintenance, historical breakdown data, machine learning, failure prediction.
Ziyi Chai1, Joshua Lai2, 1Santa Margarita Catholic High School, 22062 Antonio Parkway, Rancho Santa Margarita, CA 92688, 2California State Polytechnic University, Pomona, CA, 91768
We aimed to address the lack of accessibility in chess by developing a program that integrates text-to-speech and speech recognition [1]. The system allows users to input moves using voice commands and receive audio feedback on game states, making it helpful for players with visual or motor impairments. The design consists of a central controller to manage game logic, a speech recognizer for move input, and a history manager to track and undo moves. Designing for chess posed numerous edge cases, but we addressed them by building a well-constructed system using proper subclassing and modularity, ensuring flexibility without compromising core functionality. We tested speech recognition extensively across varied inputs, and despite occasional network issues and minor parsing errors, the system achieved very high accuracy and was able to successfully execute most commands on the first attempt, with the remaining commands taking 2-3 more attempts on average [2]. The program is cheap, accessible on most devices, and simple to use.
Accessible Gaming, Speech Recognition, Text-to-Speech, Assistive Technology in Chess.
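The sketch below illustrates the move-input path: a recognized phrase is normalized to standard algebraic notation and validated with the python-chess library before the board is updated and feedback is returned for text-to-speech. The phrase-to-SAN mapping is deliberately minimal and is an assumption, not the program's full parser.

# Validate spoken chess moves with python-chess before applying them.
import chess

PIECE_WORDS = {"knight": "N", "bishop": "B", "rook": "R", "queen": "Q", "king": "K"}

def phrase_to_san(phrase: str) -> str:
    words = phrase.lower().replace(" to ", " ").split()
    piece = PIECE_WORDS.get(words[0], "")    # pawns have no letter prefix in SAN
    return piece + words[-1]                 # e.g. "knight to f3" -> "Nf3"

def apply_spoken_move(board: chess.Board, phrase: str) -> str:
    try:
        board.push_san(phrase_to_san(phrase))
        return f"Played {phrase}."
    except ValueError:                       # illegal or unparsable move
        return "Sorry, that move is not legal. Please repeat."

board = chess.Board()
print(apply_spoken_move(board, "pawn to e4"))
print(apply_spoken_move(board, "knight to f6"))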