I build:
- WebGL/WebGPU and thoughtful frontend UI
- Full-stack websites & apps
- Pixel-perfect graphics that run on mobile
I work with:
- Modern web: TS/JS, React
- Graphics & shader programming
- APIs, databases, and client-side storage
I'm looking for:
- Interactive / Design-Eng roles
- Studios, startups, or product teams
- Internships or part-time work
- Cool conversations!


Mini Minecraft
OpenGL Minecraft simulation, with all components of the rendering pipeline built from scratch. Final team project for CIS4600.

My Contributions:

Terrain Rendering and Chunking:
- Designed a system to optimize infinite terrain rendering by dynamically loading and unloading visible chunks based on the player's position.
- Developed interleaved Vertex Buffer Objects (VBOs) to efficiently store and render chunk geometry, ensuring only visible faces were processed and reducing GPU load.

Texture Mapping and Animation:
- Mapped block textures with UV coordinates, including distinct faces for blocks like grass (top, sides, bottom).
- Implemented animated textures for water and lava using time-dependent transformations, creating smooth, looping motion.

Dynamic Sky:
- Built a GLSL sky shader featuring a procedurally animated day-night cycle with a moving sun and halo, and procedural clouds using Worley-noise-based fractional Brownian motion.

Fluid Surface Waves and Reflection:
- Dynamically displaced water geometry to create realistic wave motion.
- Recalculated normals in the vertex shader to accurately reflect light on moving surfaces.
- Enhanced light reflections with Blinn-Phong highlights.
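The position-based chunk loading can be sketched roughly as follows. This is an illustrative TypeScript sketch, not the project's actual C++ code; `CHUNK_SIZE` and `RENDER_RADIUS` are assumed values.

```typescript
// Hypothetical sketch of radius-based chunk selection around the player.
const CHUNK_SIZE = 16;   // blocks per chunk along x and z (assumed)
const RENDER_RADIUS = 2; // chunks kept loaded around the player (assumed)

// Convert a world-space coordinate to the index of the chunk containing it.
function toChunkIndex(coord: number): number {
  return Math.floor(coord / CHUNK_SIZE);
}

// Return the set of chunk keys ("x,z") that should be resident for a player
// at (x, z); chunks outside this set can have their VBOs destroyed.
function visibleChunks(x: number, z: number): Set<string> {
  const cx = toChunkIndex(x);
  const cz = toChunkIndex(z);
  const keys = new Set<string>();
  for (let dx = -RENDER_RADIUS; dx <= RENDER_RADIUS; dx++) {
    for (let dz = -RENDER_RADIUS; dz <= RENDER_RADIUS; dz++) {
      keys.add(`${cx + dx},${cz + dz}`);
    }
  }
  return keys;
}
```

Each frame, diffing this set against the currently loaded chunks tells the renderer which chunks to build VBOs for and which to unload.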

Advanced Raytracing in C++
Implemented real-time 3D rendering pipelines using modern OpenGL and GLSL in C++.
- Built a mesh viewer supporting OBJ parsing, normal visualization, interactive camera controls, and scene graph hierarchies.
- Developed a deferred shading renderer with G-buffer composition (albedo, normal, depth, material masks), screen-space reflection, and physically based lighting (Cook-Torrance BRDF).
- Integrated post-processing effects (e.g. Gaussian blur, tone mapping), matcap shading, and sky models (Hosek-Wilkie).
- Applied shader-based ray marching, subsurface scattering, and domain repetition using signed distance fields (SDFs).
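The SDF ray marching mentioned above follows the classic sphere-tracing loop. A toy sketch in TypeScript (the project itself used GLSL; the step limits and sphere scene are illustrative assumptions):

```typescript
// Illustrative sphere-tracing sketch; not the project's shader code.
type Vec3 = [number, number, number];

const add = (a: Vec3, b: Vec3): Vec3 => [a[0] + b[0], a[1] + b[1], a[2] + b[2]];
const scale = (a: Vec3, s: number): Vec3 => [a[0] * s, a[1] * s, a[2] * s];
const length = (a: Vec3) => Math.hypot(a[0], a[1], a[2]);

// Signed distance to a sphere of radius r centered at the origin.
const sphereSDF = (p: Vec3, r: number) => length(p) - r;

// March along origin + t*dir, stepping by the SDF value each iteration.
// Returns the hit distance t, or null if no surface was found.
function rayMarch(origin: Vec3, dir: Vec3, maxSteps = 100, eps = 1e-4): number | null {
  let t = 0;
  for (let i = 0; i < maxSteps; i++) {
    const p = add(origin, scale(dir, t));
    const d = sphereSDF(p, 1.0);
    if (d < eps) return t; // close enough: treat as a hit
    t += d;                // safe step: can never overshoot the surface
    if (t > 100) break;    // ray escaped the scene
  }
  return null;
}
```

Because the SDF gives the distance to the nearest surface, each step is guaranteed not to skip past geometry, which is what makes domain repetition and other SDF tricks cheap in a fragment shader.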

Into the Blue Museum Experience
Virtual experience built and maintained for a 9-month-long feature at the Penn Museum, delivered under an 8-week deadline.

My Contributions:

IndexedDB Data Storage:
- Proposed IndexedDB when faced with the problem of storing various forms of user data in the browser, letting the site function like an app with the convenience of a frontend-only website.

Sticker Generation and Sharing Pipeline:
- Designed and built the core feature: capture webcam input, clip it along varying SVG paths with an animated cutting effect, apply a sticker-style outline, and store one sticker image per object in IndexedDB.

Drag-&-Drop Stickerboard:
- Created a stickerboard interface with draggable stickers, modals, and rasterization of compositions into shareable PNGs.
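The one-sticker-per-object storage could look roughly like this. A hedged sketch against the browser IndexedDB API; the database name, store name, and key scheme below are made up for illustration.

```typescript
// Hypothetical sketch of per-object sticker storage in IndexedDB.

// One record per museum object: a stable key like "sticker:<objectId>".
function stickerKey(objectId: string): string {
  return `sticker:${objectId}`;
}

// Open (and lazily create) a database with a single "stickers" store.
// `indexedDB` is the browser global; this sketch assumes it exists.
function openStickerDB(): Promise<any> {
  return new Promise((resolve, reject) => {
    const req = (globalThis as any).indexedDB.open("into-the-blue", 1);
    req.onupgradeneeded = () => req.result.createObjectStore("stickers");
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => reject(req.error);
  });
}

// Persist a rendered sticker image (e.g. a Blob) under its object's key.
async function saveSticker(objectId: string, image: unknown): Promise<void> {
  const db = await openStickerDB();
  await new Promise<void>((resolve, reject) => {
    const tx = db.transaction("stickers", "readwrite");
    tx.objectStore("stickers").put(image, stickerKey(objectId));
    tx.oncomplete = () => resolve();
    tx.onerror = () => reject(tx.error);
  });
}
```

Keying by object ID means re-capturing a sticker simply overwrites the old one, which keeps storage bounded to one image per museum object.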

Internet Atlas
Interactive 3D graph interface designed to map digital pathways and explore web browsing behavior, built to empower Internet users with greater visibility into data flows and autonomy.

Frontend Development:
- Led design and engineering of a 3D force-directed graph using React Three Fiber with dynamic camera transitions, animated SVG overlays, and WebGL shader effects.
- Built interactive node/edge highlighting, camera zoom-to-node behavior, and a path-following mechanism for exploring browsing journeys.

Backend + Infrastructure:
- Led technical discussions and contributed core logic to an ML-backed pipeline involving LLM-optimized web scraping, Pinecone vector embeddings, and API querying via FastAPI.
- Designed architecture to combine textual and image data into a shared embedding space and support real-time semantic similarity queries through Supabase.
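The semantic similarity queries boil down to ranking embeddings by cosine similarity. A minimal in-memory sketch (a real query runs against the Pinecone/Supabase index, not arrays; names here are illustrative):

```typescript
// Cosine similarity between two embedding vectors of equal length.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank stored embeddings by similarity to a query vector, top k first.
function nearest(query: number[], corpus: { id: string; vec: number[] }[], k = 3) {
  return corpus
    .map(({ id, vec }) => ({ id, score: cosineSimilarity(query, vec) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}
```

Because both text and image embeddings live in the same space, the same `nearest` logic answers cross-modal queries.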

Better-Spelling-Bee
My friend and I remade our favorite mobile game (NYT Spelling Bee!), focusing on enhancing user engagement through dynamic interactions and personalized features.

Design:
- Minimal, responsive web interface incorporating subtle animations.
- Playful, tangible draggable objects, including letter blocks and avatars.

Development:
- Complex state management and caching during gameplay sessions.
- JWT token management for user auth and session persistence.
- Efficient system to source a puzzle's word subset from 7 letters, including at least one pangram, by processing, sorting, and indexing a 46,444-word dictionary.
- Drag-and-drop ducks that clone onto words and reorder dynamically with CSS gymnastics and npm libraries.

Next Steps:
- Deploy the site.
- Recover some animations gone MIA after restructuring the app (ducks in the pond randomly flapping; fly and sink animations upon entering a word).
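The word-sourcing step reduces to filtering the dictionary against the puzzle's 7 letters. A hypothetical sketch of that logic (the real system pre-indexed the dictionary; function names and the 4-letter minimum follow Spelling Bee's standard rules):

```typescript
// A word is valid if it is 4+ letters, uses only the 7 puzzle letters,
// and contains the required center letter.
function isValidWord(word: string, letters: Set<string>, center: string): boolean {
  if (word.length < 4 || !word.includes(center)) return false;
  return [...word].every((ch) => letters.has(ch));
}

// A pangram uses all 7 puzzle letters at least once.
function isPangram(word: string, letters: Set<string>): boolean {
  return [...letters].every((ch) => word.includes(ch));
}

// Filter the full dictionary down to this puzzle's answers and pangrams.
function puzzleWords(dictionary: string[], letters: string[], center: string) {
  const set = new Set(letters);
  const valid = dictionary.filter((w) => isValidWord(w, set, center));
  return { valid, pangrams: valid.filter((w) => isPangram(w, set)) };
}
```

A candidate letter set is only usable if `pangrams` is non-empty, so generation amounts to retrying letter sets against the indexed dictionary until at least one pangram appears.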

Eat or Plant?
Interactive art installation blending physical interaction and environmental data to raise awareness about deforestation in the Amazon through the metaphor of chocolate consumption. Worked as an engineer and contributed to design with a group of architecture master's students.

Hardware System:
- Used copper touch sensors and LED strips to detect tree removal and display real-time rainforest air quality.
- LED animations respond dynamically: removing a tree turns its lights off; thresholds trigger red flashing alerts.

Software Logic:
- Connected to the AirNow API to fetch Amazon rainforest AQI data every 5 minutes.
- Managed 3 LED matrix arrays for tree positions, AQI-based background gradients, and color behaviors.
- Programmed a real-time feedback loop between physical chocolate trees and LED visuals.

Physical Fabrication:
- 3D printed and laser-cut components using Rhino and Adobe Illustrator.
- Hand-made the chocolates (and taste-tested a good portion of them).

**Later slides show some of the work I designed and printed with Rhino!
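The AQI-to-LED color behavior can be sketched as a simple threshold map. The cutoffs below follow the standard US EPA AQI category breakpoints, but the exact colors and alert threshold used in the installation are assumptions:

```typescript
// Illustrative AQI → LED color mapping; not the installation's exact values.
type RGB = [number, number, number];

function aqiColor(aqi: number): RGB {
  if (aqi <= 50) return [0, 255, 0];     // Good: green
  if (aqi <= 100) return [255, 255, 0];  // Moderate: yellow
  if (aqi <= 150) return [255, 126, 0];  // Unhealthy for sensitive groups: orange
  return [255, 0, 0];                    // Unhealthy and worse: red
}

// Above this threshold the background gradient switches to a flashing alert
// (assumed cutoff for illustration).
const ALERT_THRESHOLD = 150;
const shouldFlash = (aqi: number) => aqi > ALERT_THRESHOLD;
```

On each 5-minute AirNow fetch, the background matrix is re-filled with `aqiColor(aqi)`, and `shouldFlash` toggles the red alert animation.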

Holiday Gift Box
As someone who loves writing handwritten letters but lives far from most of my loved ones, I wanted to capture the warmth, spontaneity, and joy of receiving a handwritten note through a simple, shareable website.
- Custom font made from my own handwriting, plus hand-drawn graphics.
- 2D physics engine that explodes letters out of an animated gift box to evoke a sense of physical space.
To create a seamless experience (since opening gifts should never be a task) while keeping each person's information secure, I stored unique IDs in URL routes sent directly to recipients, which retrieve their Firebase documents; all it takes to open one is clicking a secure link sent via text.
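The link → letter lookup hinges on parsing the recipient's unique ID out of the route. A minimal sketch, assuming a hypothetical `/gift/:id` route shape (in the real flow, that ID then keys the recipient's Firebase document):

```typescript
// Pull the recipient's unique ID out of a shared link's path, or null if
// the path doesn't match the expected (assumed) "/gift/:id" shape.
function letterIdFromPath(pathname: string): string | null {
  const match = pathname.match(/^\/gift\/([\w-]+)$/);
  return match ? match[1] : null;
}
```

Because the ID is long and unguessable, the link itself acts as the credential: no login, just a tap on a text message.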

Aristotle LLM
An attempt to build a LLaMA-style language model using as few predefined libraries and functions as possible. My friend and I researched the mathematical workings of LLMs, took notes and held discussions, then took a stab at processing a Kaggle dataset of Aristotle and Plato quotes and training a model to generate text from a seed string. The model worked after a lot of debugging and some AI assistance, but hardware limitations and a large parameter count meant I could only train it on a few thousand lines over the course of 10+ hours. Nevertheless, my overtrained Aristotle-bot did produce some wise-sounding lines before starting to repeat gibberish.
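The math we studied centers on scaled dot-product attention. A toy single-query sketch in TypeScript (our implementation was separate; shapes and names here are purely illustrative):

```typescript
// Numerically stable softmax over a vector of scores.
function softmax(xs: number[]): number[] {
  const m = Math.max(...xs); // subtract max so exp() can't overflow
  const exps = xs.map((x) => Math.exp(x - m));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}

const dot = (a: number[], b: number[]) => a.reduce((s, x, i) => s + x * b[i], 0);

// One attention head, one query: weight each value vector by how well its
// key matches the query, with scores scaled by sqrt(d_k).
function attention(query: number[], keys: number[][], values: number[][]): number[] {
  const scale = Math.sqrt(query.length);
  const weights = softmax(keys.map((k) => dot(query, k) / scale));
  return values[0].map((_, j) =>
    weights.reduce((s, w, i) => s + w * values[i][j], 0)
  );
}
```

Stacking many such heads with learned projections (plus LLaMA's RMSNorm and rotary embeddings) is where the parameter count, and our training time, exploded.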