ABOUT THE GAME
The Masker is a memory and painting game set in a Venetian mask shop, created during our university's 2026 Winter Game Jam, where our team placed 3rd. The jam shared its theme with the Global Game Jam, challenging us to create something memorable under tight constraints. Players take on the role of a mask craftsman serving customers who request specific decorated masks. The core gameplay loop challenges players to memorize a pattern, select the correct mask template, and recreate the design from memory using gesture-based painting, all under time pressure.
Following the jam, The Masker was showcased at the Guildford Games Festival 2026, where we had the opportunity to present our work to industry professionals and fellow developers.
What makes The Masker unique is its gesture recognition scoring system. Rather than simple pixel matching, the game evaluates player drawings using a combination of shape recognition and spatial color analysis. This creates a forgiving yet meaningful scoring system where close approximations are rewarded while still distinguishing between good and poor attempts. The three-tier feedback system (perfect, acceptable, rejected) gives players clear goals while keeping the experience accessible.
SYSTEMS I IMPLEMENTED
The Gesture Recognition Scoring System
The core technical challenge was creating a scoring system that could fairly evaluate hand-drawn patterns against reference images. This proved far more difficult than anticipated and went through several iterations before arriving at the final solution.
Attempt 1: Histogram Comparison — My lecturer initially suggested using histogram-based image comparison. I researched this approach and discovered Hu Moments, a set of seven values that describe shape characteristics regardless of scale, rotation, or position. While mathematically elegant, Hu Moments failed in practice. The values were too abstract to produce meaningful similarity scores for hand-drawn patterns, and small drawing variations caused disproportionately large score swings. The approach couldn't distinguish between "close but imperfect" and "completely wrong."
Attempt 2: Pixel Counting with Quadrant Analysis — I pivoted to a more direct approach: dividing both the reference pattern and player's drawing into a grid and comparing pixel counts per quadrant. This worked better for detecting whether paint was in roughly the right location, but it completely ignored shape. A player could scribble randomly in the correct quadrants and score well, which felt unfair and unsatisfying.
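The grid-based comparison can be sketched as follows. This is a minimal Python illustration of the idea, not code from the game (the project's actual implementation and data types are not shown in this write-up); painted pixels are represented as a set of coordinates, and each grid cell is scored by the ratio of player paint to reference paint:

```python
def quadrant_counts(pixels, grid=4, size=64):
    """Count painted pixels per grid cell. pixels: set of (x, y) coordinates."""
    cell = size // grid
    counts = [[0] * grid for _ in range(grid)]
    for x, y in pixels:
        counts[min(y // cell, grid - 1)][min(x // cell, grid - 1)] += 1
    return counts

def quadrant_score(player, reference, grid=4, size=64):
    """Average per-cell agreement between player and reference paint amounts."""
    p = quadrant_counts(player, grid, size)
    r = quadrant_counts(reference, grid, size)
    total, cells = 0.0, 0
    for row_p, row_r in zip(p, r):
        for cp, cr in zip(row_p, row_r):
            if cp == 0 and cr == 0:
                continue  # both empty: no information in this cell
            cells += 1
            total += min(cp, cr) / max(cp, cr)
    return total / cells if cells else 1.0
```

Note how this captures the flaw described above: the score only cares how much paint lands in each cell, so a random scribble with roughly the right pixel counts in the right cells scores as well as a careful drawing.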
Attempt 3: $P Point-Cloud Recognition — I discovered the $P (P-Dollar) algorithm, an academic gesture recognizer designed for comparing point-cloud representations of drawings. This excelled at shape matching as it could recognize that a wavy line was similar to another wavy line regardless of exact pixel placement. However, it struggled with color differentiation and spatial positioning.
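The core of $P can be illustrated with a simplified Python sketch (the real recognizer also resamples strokes to a fixed point count and searches rotations; here both clouds are assumed to already have the same number of points, and all names are illustrative):

```python
import math

def normalize(points):
    """Translate the cloud's centroid to the origin and scale to unit size,
    so position and size don't affect the comparison."""
    cx = sum(x for x, y in points) / len(points)
    cy = sum(y for x, y in points) / len(points)
    shifted = [(x - cx, y - cy) for x, y in points]
    scale = max(max(abs(x), abs(y)) for x, y in shifted) or 1.0
    return [(x / scale, y / scale) for x, y in shifted]

def greedy_cloud_distance(a, b):
    """Greedily match each point in a to its nearest unmatched point in b.
    Assumes both clouds were resampled to the same point count."""
    unmatched = list(b)
    total = 0.0
    for ax, ay in a:
        i = min(range(len(unmatched)),
                key=lambda k: (unmatched[k][0] - ax) ** 2 + (unmatched[k][1] - ay) ** 2)
        bx, by = unmatched.pop(i)
        total += math.hypot(ax - bx, ay - by)
    return total / len(a)

def shape_score(drawing, reference):
    """0..1 similarity: 1.0 means the normalized clouds coincide."""
    a, b = normalize(drawing), normalize(reference)
    d = min(greedy_cloud_distance(a, b), greedy_cloud_distance(b, a))
    return max(0.0, 1.0 - d)
```

Because matching happens on normalized point clouds, a wavy line drawn slightly shifted or scaled still scores close to the reference wavy line, which is exactly the shape tolerance described above; it also explains the weakness, since the point cloud carries no color information.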
Final Solution: Combined Weighted Scoring — The breakthrough came from combining approaches. The final system uses:
Shape Score (50% weight): $P algorithm comparing the gesture structure of the player's strokes against the reference pattern.
Quadrant Score (50% weight): Spatial analysis checking if paint appears in the correct grid locations, with a color accuracy multiplier.
The quadrant scoring includes color verification — painting the right shape in the wrong color applies a 0.5x multiplier to that portion of the score. This means correct shape with wrong colors scores around 75%, while correct shape with correct colors scores up to 100%.
I also implemented an empty quadrant penalty system. If players paint in areas where the reference has no pattern, they receive score deductions. This prevents the exploit of simply covering the entire mask with paint.
The weighting decision (50/50 split) emerged from playtesting. Earlier versions weighted shape at 70%, but this felt too punishing for players who got the general idea right but placed it slightly off center. The equal weighting creates a more forgiving experience that rewards both accuracy and effort.
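Putting the pieces together, the combined scoring can be sketched like this. The 50/50 weighting and the 0.5x wrong-color multiplier come from the description above; the empty-quadrant penalty value and the per-quadrant data layout are illustrative assumptions, not values from the game:

```python
def combined_score(shape_sim, quadrants):
    """shape_sim: 0..1 from the $P-style shape comparison.
    quadrants: list of (placement 0..1, color_ok, ref_has_paint, player_painted)."""
    SHAPE_W, QUAD_W = 0.5, 0.5     # equal weighting, per playtesting
    WRONG_COLOR_MULT = 0.5         # right shape, wrong color
    EMPTY_PENALTY = 0.1            # assumed value per wrongly painted empty cell

    quad_total, counted, penalty = 0.0, 0, 0.0
    for placement, color_ok, ref_paint, player_paint in quadrants:
        if ref_paint:
            counted += 1
            quad_total += placement * (1.0 if color_ok else WRONG_COLOR_MULT)
        elif player_paint:
            penalty += EMPTY_PENALTY  # deters covering the whole mask in paint
    quad = quad_total / counted if counted else 1.0
    return max(0.0, SHAPE_W * shape_sim + QUAD_W * quad - penalty)
```

With a perfect shape and perfect placement but every color wrong, this yields 0.5 + 0.5 × 0.5 = 0.75, matching the "around 75%" figure above; with correct colors it reaches 1.0.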
Game Loop Management
Beyond the scoring system, I built the GameSceneManager, which controls the complete game flow: customer entry animations, the memory phase with its countdown timer, mask template selection, the painting phase with color selection, score calculation, the customer reaction (three-tier responses based on score thresholds), and payment processing. The system handles state transitions cleanly and integrates with teammate-built systems such as customer animations and UI.
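The flow can be sketched as a simple linear state machine. This is an illustrative Python sketch, not the GameSceneManager itself; the phase names follow the description above, and the reaction thresholds are assumed values for the three tiers:

```python
from enum import Enum, auto

class Phase(Enum):
    CUSTOMER_ENTRY = auto()   # entry animation plays
    MEMORY = auto()           # pattern shown under a countdown timer
    TEMPLATE_SELECT = auto()  # player picks the base mask
    PAINTING = auto()         # gesture painting with color selection
    SCORING = auto()          # combined score is calculated
    REACTION = auto()         # customer reacts by score tier
    PAYMENT = auto()          # payment, then the next customer enters

# Linear flow; PAYMENT loops back to the next customer's entry.
NEXT = {p: n for p, n in zip(list(Phase), list(Phase)[1:] + [Phase.CUSTOMER_ENTRY])}

def reaction_tier(score, perfect=0.85, acceptable=0.5):
    """Map a 0..1 score to the three customer reactions (thresholds are guesses)."""
    if score >= perfect:
        return "perfect"
    if score >= acceptable:
        return "acceptable"
    return "rejected"
```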
Multi-Pattern Template System
I implemented a template system where multiple pattern variations can share the same base mask. This allows for greater variety in customer requests without requiring unique assets for every combination. The system compares base images rather than data objects, enabling flexible content creation.
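The idea can be sketched as a mapping from base masks to pattern variants, so customer requests are generated by combination rather than by authoring unique assets. All names here are hypothetical placeholders, not assets from the game:

```python
import random

# Hypothetical content layout: several pattern variants share one base mask.
MASK_TEMPLATES = {
    "base_mask_a": ["pattern_swirl", "pattern_stars", "pattern_waves"],
    "base_mask_b": ["pattern_stripes", "pattern_diamonds"],
}

def random_request(rng=random):
    """Pick a base mask, then one of its pattern variants, for a customer order."""
    base = rng.choice(sorted(MASK_TEMPLATES))
    return base, rng.choice(MASK_TEMPLATES[base])
```

Because correctness is judged against the base image rather than a specific data object, any variant sharing the same base counts as the right template choice, which is what lets new patterns be added without touching the selection logic.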
WHAT I LEARNED
Technical Skills Developed
Working on The Masker significantly developed my approach to algorithm selection and iteration. The journey from Hu Moments to the final combined system taught me that academic solutions don't always translate directly to game feel — sometimes you need to combine multiple imperfect approaches to create something that works. I gained practical experience implementing gesture recognition, manipulating textures for real-time painting, and building scoring systems that feel fair to players.
The project also strengthened my ability to debug visual systems. When color detection wasn't working, I added debug sampling that logged actual RGB values from pattern images, which revealed that my detection thresholds didn't match the colors actually being used. This diagnostic approach — making invisible problems visible — is something I'll carry forward.
Team and Game Jam Lessons
As lead programmer on a five-person team during a five-day jam, I learned valuable lessons about scope and communication. We were ambitious with our gesture recognition system, arguably too ambitious for a jam. While it worked and impressed the judges (who specifically highlighted the mechanic), we spent significant time debugging edge cases that could have gone toward polish.
In retrospect, I would establish clearer "good enough" thresholds earlier. The scoring system went through iterations during the jam itself, which created integration challenges with teammates who were building UI and feedback systems around score values that kept changing. For future jams, I'd lock core mechanics earlier and communicate expected value ranges upfront, even if the underlying implementation continues to evolve.
The judges noted our gesture recognition as an ambitious mechanic, which validated the technical risk we took. But I also recognize that ambition needs to be balanced with deliverable milestones — something I'll be more mindful of in future team projects.