
I'm an applied-AI researcher and software engineer focused on large-language-model tooling for software-security analysis and agentic code generation. My expertise blends modern web application development with hands-on experimentation with state-of-the-art LLMs.
I am currently completing my M.Sc. at York University, where my thesis investigates LLM-assisted vulnerability detection. Alongside my studies, I am a research intern at Huawei Canada, developing a repo-wide agentic test generation pipeline. Previously, I worked for three years as a software engineer at Dynamic Solution Innovators Ltd. and Synopsys Inc., contributing to large-scale Java and Spring systems.
When I'm not working, you can find me traveling 🌴, playing music 🎸, or experimenting with new technologies 🧑‍💻.
Building components for a repo-wide agentic test generation pipeline that transforms software requirements specification (SRS) documents into executable test scripts.
Here I worked on the HEMS project, a nationwide examination management system.
Worked remotely with Synopsys Inc. as an offshore employee from DSi on the Synopsys-Detect project (since renamed BlackDuck-Detect), a software composition analysis tool. It scans a software repository to find component, security, and license vulnerabilities introduced by project dependencies.
In this project, I constructed SecVulEval, a benchmark dataset of C/C++ vulnerabilities. It addresses the lack of statement-level granularity and contextual information in earlier datasets.
Static analysis tools are widely adopted in industry to detect bugs in software systems, so it is important that the tools themselves are reliable, i.e., that they neither miss bugs nor produce false positives. In this project, I created an LLM-based metamorphic testing framework to detect inconsistencies in Java static analysis tools.


Have a project in mind or just want to connect? Feel free to reach out to me using the form or through any of the channels below.