Publications
Here you can find a list of my academic publications, both published and forthcoming. I am passionate about sharing my work and collaborating with the research community.
DAgent: A Multi-Agent System for Device-Aware Assistance
- First Author
- Accepted at the 35th International Conference on Computer Theory and Applications (ICCTA 2025)
PDF GitHub
In this work, we proposed DAgent, a modular multi-agent system designed for personalized, device-aware assistance. The system architecture includes specialized agents for tracing, retrieval-augmented generation (RAG), and coding, all orchestrated by a multi-agent OS assistant. Our evaluations showed strong performance in correctness, completeness, and clarity. Ablation studies highlighted the critical role of the Coding Agent, demonstrating the power of multi-agent collaboration for complex, real-world tasks.
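
As a rough illustration of the orchestration pattern described above, here is a minimal Python sketch in which a central router dispatches a query to specialized tracing, RAG, and coding agents. The agent names, the keyword routing, and all function bodies are hypothetical stand-ins, not the paper's implementation:

```python
from dataclasses import dataclass
from typing import Callable, Dict

# Hypothetical sketch of the orchestration pattern: agent names and the
# routing heuristic are illustrative, not DAgent's actual implementation.

@dataclass
class Agent:
    name: str
    handle: Callable[[str], str]

def tracing_agent(query: str) -> str:
    return f"[trace] collected device context for: {query}"

def rag_agent(query: str) -> str:
    return f"[rag] retrieved documents relevant to: {query}"

def coding_agent(query: str) -> str:
    return f"[code] generated a script for: {query}"

class Orchestrator:
    """Routes a user query to the most suitable specialist agent."""

    def __init__(self) -> None:
        self.agents: Dict[str, Agent] = {
            "trace": Agent("tracing", tracing_agent),
            "rag": Agent("retrieval", rag_agent),
            "code": Agent("coding", coding_agent),
        }

    def route(self, query: str) -> str:
        # Toy keyword routing; a real system would use an LLM planner.
        if any(k in query.lower() for k in ("write", "script", "code")):
            return self.agents["code"].handle(query)
        if any(k in query.lower() for k in ("find", "docs", "explain")):
            return self.agents["rag"].handle(query)
        return self.agents["trace"].handle(query)

print(Orchestrator().route("write a script to clean my downloads folder"))
```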
Computational Analysis of Media Bias in The Guardian’s Coverage of the Israeli-Palestinian Conflict
- Co-First Author
- Awaiting Publication at the 9th International Conference on Computer-Human Interaction Research and Applications (CHIRA 2025)
PDF
This paper presents a multi-method NLP analysis of media framing in The Guardian’s coverage of the Israeli-Palestinian conflict over several decades. Using transformer-based models and word embeddings, we uncovered systematic biases in language use. For instance, our findings revealed that fear-related language was 13.5% more prevalent in contexts related to Israel, whereas trust-related language was 9.5% more prevalent in contexts related to Palestine, providing a quantitative look at sentiment patterns in journalism.
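
For a flavor of the lexicon side of such an analysis, here is a toy sketch that measures how often emotion-lexicon words appear in fixed windows around entity mentions. The tiny word lists and the `emotion_rate` helper are illustrative assumptions; the paper's actual pipeline relies on transformer-based models and word embeddings:

```python
import re

# Illustrative sketch of context-window emotion counting. The tiny word
# lists stand in for a full emotion lexicon (e.g. NRC EmoLex), and this
# simplifies the transformer-based analysis used in the paper.

FEAR_WORDS = {"attack", "threat", "fear", "violence", "danger"}
TRUST_WORDS = {"agreement", "support", "cooperation", "aid", "peace"}

def emotion_rate(articles, entity, lexicon, window=10):
    """Fraction of words near `entity` mentions that belong to `lexicon`."""
    hits, total = 0, 0
    for text in articles:
        tokens = re.findall(r"[a-z']+", text.lower())
        for i, tok in enumerate(tokens):
            if tok == entity:
                context = tokens[max(0, i - window): i + window + 1]
                hits += sum(1 for w in context if w in lexicon)
                total += len(context)
    return hits / total if total else 0.0

corpus = [
    "analysts said israel faced a renewed threat of violence this week.",
    "diplomats welcomed palestine aid cooperation and a peace agreement.",
]
print(emotion_rate(corpus, "israel", FEAR_WORDS))
print(emotion_rate(corpus, "palestine", TRUST_WORDS))
```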
ALEX-GYM-1: A Hybrid 3D Pose & Vision Model for Automated Exercise Evaluation
- First Author
- Awaiting Publication at the 22nd International Conference on Informatics in Control, Automation and Robotics (ICINCO 2025)
PDF GitHub
Current automated exercise evaluation systems often rely on a single modality, such as pose estimation. In ALEX-GYM-1, we developed a novel multi-modal architecture that integrates a 3D CNN vision pathway with a pose-based pathway. This hybrid approach allows the model to capture a richer set of features, leading to a 30% reduction in Hamming Loss compared to single-modality methods and a 79.5% improvement over pose-only models, setting a new standard for exercise form evaluation.
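
A minimal PyTorch sketch of the two-pathway idea follows; the layer sizes, backbone choices, and late-fusion strategy are assumptions rather than the published architecture:

```python
import torch
import torch.nn as nn

# Minimal sketch of a two-pathway fusion model; all dimensions and the
# fusion strategy are assumptions, not ALEX-GYM-1's exact architecture.

class HybridExerciseModel(nn.Module):
    def __init__(self, num_labels: int = 8, num_joints: int = 17):
        super().__init__()
        # Vision pathway: a small 3D CNN over RGB clips (B, 3, T, H, W).
        self.vision = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
            nn.Flatten(),                       # -> (B, 16)
        )
        # Pose pathway: an MLP over flattened 3D joint sequences.
        self.pose = nn.Sequential(
            nn.Flatten(),                       # (B, T, J, 3) -> (B, T*J*3)
            nn.Linear(16 * num_joints * 3, 64),
            nn.ReLU(),
        )
        # Late fusion of both feature vectors; one logit per form-error
        # label, matching a multi-label (Hamming-loss) evaluation.
        self.head = nn.Linear(16 + 64, num_labels)

    def forward(self, clip, joints):
        feats = torch.cat([self.vision(clip), self.pose(joints)], dim=1)
        return self.head(feats)  # train with BCEWithLogitsLoss

model = HybridExerciseModel()
clip = torch.randn(2, 3, 16, 64, 64)    # batch of 16-frame clips
joints = torch.randn(2, 16, 17, 3)      # matching 3D pose sequences
print(model(clip, joints).shape)        # torch.Size([2, 8])
```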
TypePlus: A Deep Learning Architecture for Keystroke Authentication
- Co-First Author
- Awaiting Publication at the 2nd International Conference on Intelligent Systems, Blockchain, and Communication Technologies (ISBCom 2025)
PDF GitHub
We designed TypePlus, a lightweight, non-transformer architecture for free-text keystroke authentication. The model leverages weighted attention pooling and keycode embeddings to effectively capture individual typing patterns. Our approach achieves a state-of-the-art 2.86% Equal Error Rate on the Aalto University Keystroke Dataset, demonstrating that highly accurate biometric security can be achieved with efficient, non-transformer models.
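
The sketch below illustrates the two named components, keycode embeddings and weighted attention pooling, in PyTorch; the dimensions, timing features, and matching head are assumptions, not the published model:

```python
import torch
import torch.nn as nn

# Sketch of keycode embeddings plus weighted attention pooling; layer
# sizes and the surrounding encoder are assumptions, not TypePlus itself.

class AttentionPooledTyping(nn.Module):
    def __init__(self, num_keycodes: int = 256, dim: int = 64):
        super().__init__()
        self.key_emb = nn.Embedding(num_keycodes, dim)
        self.time_proj = nn.Linear(2, dim)   # hold time + inter-key gap
        self.score = nn.Linear(dim, 1)       # attention score per keystroke
        self.out = nn.Linear(dim, dim)       # embedding used for matching

    def forward(self, keycodes, timings):
        # keycodes: (B, L) ints; timings: (B, L, 2) floats
        x = self.key_emb(keycodes) + self.time_proj(timings)
        weights = torch.softmax(self.score(x), dim=1)  # (B, L, 1)
        pooled = (weights * x).sum(dim=1)              # weighted pooling
        return torch.nn.functional.normalize(self.out(pooled), dim=-1)

model = AttentionPooledTyping()
emb = model(torch.randint(0, 256, (4, 50)), torch.rand(4, 50, 2))
print(emb.shape)  # torch.Size([4, 64]); compare users by cosine similarity
```

In a verification setup like this, the Equal Error Rate is computed by sweeping a threshold over those cosine similarities until the false-accept and false-reject rates coincide.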
PPE-Det: Evaluating Lightweight Object Detection Models for Edge-Based Safety Monitoring
- Co-First Author
- Awaiting Publication at the 2nd International Conference on Intelligent Systems, Blockchain, and Communication Technologies (ISBCom 2025)
PDF GitHub
Real-time safety monitoring in industrial environments requires efficient models that can run on edge devices. For this project, we first created a novel dataset of 5,000 images of personal protective equipment (PPE). We then benchmarked several lightweight (sub-3M parameter) object detection models on a Raspberry Pi 5. Our analysis found that YOLOv9t and YOLOv11n achieved the optimal trade-off between accuracy and efficiency, providing a clear recommendation for real-time edge deployment in industrial safety systems.
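
To show what the latency side of such a benchmark can look like, here is a rough sketch using the ultralytics API; the checkpoint names and the `ppe_sample.jpg` input are placeholders, and this does not reproduce the paper's Raspberry Pi 5 protocol or mAP evaluation:

```python
import time
from ultralytics import YOLO  # assumes the ultralytics package is installed

# Rough sketch of latency benchmarking only; weights and the test image
# are placeholders, not the paper's PPE dataset or full protocol.

CANDIDATES = ["yolov9t.pt", "yolo11n.pt"]  # sub-3M-parameter checkpoints

def mean_latency_ms(model, image, runs=50, warmup=5):
    """Average single-image inference time in milliseconds."""
    for _ in range(warmup):
        model.predict(image, verbose=False)
    start = time.perf_counter()
    for _ in range(runs):
        model.predict(image, verbose=False)
    return (time.perf_counter() - start) / runs * 1000

for name in CANDIDATES:
    model = YOLO(name)
    print(name, f"{mean_latency_ms(model, 'ppe_sample.jpg'):.1f} ms/img")
```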
DeepCat: A Deep Learning Approach to Understand Your Cat’s Body Language
- Co-First Author
- Published at the 11th International Japan-Africa Conference on Electronics, Communications, and Computations (JAC-ECC 2023)
PDF DOI GitHub
This project introduced a mobile-friendly deep learning system designed to interpret a cat’s emotional state by analyzing its body language from images. Trained and evaluated on over 10,000 images, the system uses key visual markers to recognize emotions. It achieved 97% accuracy for eye detection, 85% for tail positions, and 84% for mouth configurations, demonstrating a practical and accessible application of computer vision for animal welfare and human-pet interaction.
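
As a toy illustration of the per-part structure, the sketch below combines three stand-in body-part classifiers by majority vote; the label set, feature names, and combination rule are illustrative guesses, not the published system:

```python
from collections import Counter

# Toy sketch: separate eye, tail, and mouth classifiers whose outputs are
# combined into an emotion label. All functions are illustrative stand-ins
# for the trained per-part models described in the paper.

def classify_eyes(features):   # stand-in for a trained eye model
    return "relaxed" if features.get("eyes_half_closed") else "alert"

def classify_tail(features):   # stand-in for a trained tail model
    return "relaxed" if features.get("tail_low") else "agitated"

def classify_mouth(features):  # stand-in for a trained mouth model
    return "agitated" if features.get("mouth_open") else "relaxed"

def predict_emotion(features):
    """Majority vote across the three body-part classifiers."""
    votes = Counter([
        classify_eyes(features),
        classify_tail(features),
        classify_mouth(features),
    ])
    return votes.most_common(1)[0][0]

print(predict_emotion({"eyes_half_closed": True, "tail_low": True}))
```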
