My Projects

ResearchSpace

ResearchSpace is an open-source platform for the visualization, creation, and publication of linked open data, currently under development by the British Museum in London, England. It is a tool that the LINCS project has been extending and modifying to meet its specific requirements. I have been one of the main developers on this work for LINCS, and in doing so have contributed changes back to the main project repository maintained by the British Museum.

The platform is built with a Java Spring Boot backend and a Webpack-bundled React frontend. It uses a Blazegraph triplestore, LDP containers, Keycloak SSO, Pac4j security, Apache Shiro, and a Cantaloupe IIIF image server to drive data persistence, authentication, and dataset visualization. My contributions to the main repository include the Keycloak SSO login configuration, which lets platform hosts accept user authentication through the same Keycloak SSO that other LINCS tools use. I also designed a timeline visualization component and implemented platform-wide internationalization using standard i18n tags. My contributions to the LINCS repository are mainly UX and cosmetic changes.

NSSI

The NERVE Secure and Scalable Infrastructure (NSSI) is a web API for managing the data conversion workflows used by tools and apps developed by LINCS. NSSI is modular and built to be configurable, supporting multiple data conversion services accessible from one convenient endpoint. Essentially, NSSI is all about the organization, scaling, and queueing of large data processing tasks. Upon receiving a request, the service dispatches the data to the correct service queue, loads the data into the system, and sets up a URI from which the processed data can be retrieved.
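
NSSI itself is Java, but the dispatch step is easy to illustrate in miniature. Below is a rough Python sketch of the idea using the pika RabbitMQ client; the queue names, job fields, and result URI scheme are placeholders rather than NSSI's real ones.

```python
import json
import uuid

import pika  # RabbitMQ client

# Hypothetical mapping of conversion services to their queues.
SERVICE_QUEUES = {"ner": "nssi.ner", "tei2rdf": "nssi.tei2rdf"}

def dispatch(service: str, payload: dict) -> str:
    """Queue a job on the requested service and return a URI for the eventual result."""
    job_id = str(uuid.uuid4())
    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    queue = SERVICE_QUEUES[service]
    channel.queue_declare(queue=queue, durable=True)
    channel.basic_publish(
        exchange="",
        routing_key=queue,
        body=json.dumps({"jobId": job_id, "data": payload}),
    )
    connection.close()
    # The processed output would later be fetched from this (made-up) URI.
    return f"https://example.org/nssi/results/{job_id}"
```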

NSSI is built on Java Spring Boot, with Keycloak SSO security and a RabbitMQ queueing system. The scalability comes from a dedicated Kubernetes deployment setup, allowing us to scale the number of instances of each service to meet its individual demand. My contributions to NSSI include setting up a standardized JSON logging system for easier debugging, creating a calculation that estimates the time to completion for a job, and creating a push notification system to inform users when their jobs have finished.
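
The simplest version of such a time-to-completion estimate is queue position multiplied by the recent average job duration. The Python sketch below shows that naive calculation; the real NSSI estimate may weigh things differently.

```python
from collections import deque

# Rolling window of recent per-job processing times, in seconds (illustrative only).
recent_durations = deque(maxlen=50)

def record_completion(duration_seconds: float) -> None:
    """Called whenever a job finishes, so the average stays current."""
    recent_durations.append(duration_seconds)

def estimate_seconds_remaining(queue_position: int, workers: int = 1) -> float:
    """Naive ETA: jobs ahead of this one, divided by worker count, times the average duration."""
    if not recent_durations:
        return 0.0  # no history yet, so no meaningful estimate
    average = sum(recent_durations) / len(recent_durations)
    jobs_ahead = queue_position + 1  # include the job itself
    return (jobs_ahead / max(workers, 1)) * average
```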

Minecraft Plugin

At the start of my second year of university, I started working with a friend of mine on a plugin for one of our favourite games: Minecraft. We decided to use the Java Spigot API to build a plugin for a minigame to be run on a public server. The game is a recreation of an old minigame that I played a lot when I was younger. The plugin turned out to be far more complex than we originally thought it would be, but it works very well and is currently hosted on our own public server.

The plugin was written in Java and uses multiple APIs, most notably the Spigot API, which we used to interface with Minecraft and alter the game's behaviour. The other APIs we used were extensions of Spigot that allowed us to further modify the game mechanics; we had to learn all of them, as neither of us had used them before. The project is organized and built with Maven, with versioning and merges between the two of us handled through GitHub. The source files are available for viewing on the GitHub profile of my friend, Ethan Gervais.

Mobile Minigame

During my first year of university, I began a project with a high school friend in which we designed and built a mobile minigame. I was responsible for the coding and my friend created the graphics. We ended up building a simple 2D side-scroller about guiding a spaceship through a maze; it's called Cosmic Crash. The game includes a realistic physics engine as well as integrated ad slots.

This game was created using the Unity Engine and its C# scripting API. The Unity Engine handles level design, while C# scripts add behaviour to the game environment. The ads are handled through Unity's own ads service and are controlled by more C# scripts. Since it's built in Unity, the game can be published on both iOS and Android. Our early testing has been exclusively on iOS, giving me experience with Xcode and with testing applications on Apple products.

Keycloak Configuration TUI

The Keycloak Configuration TUI is a script I built for configuring the LINCS Keycloak instance. Keycloak is the SSO provider that the LINCS project has chosen to use across all of its tools and services. The script gives administrators an easy way to configure the service across all development environments and all projects that require authentication.

The TUI (Terminal User Interface) is built in Python using the Urwid module. Keycloak configuration is stored as JSON files, which the TUI generates from a predefined set of questions presented through the interface. This lets us enforce a degree of standardization across the multiple apps that use this authentication provider. Once generated, the JSON files are deployed to the service with a simple HTTP request.
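
The deploy step boils down to turning the collected answers into a client definition and POSTing it. Here is a stripped-down Python sketch of that idea using the requests library; the realm name, field values, endpoint, and token handling are placeholders, not the script's actual configuration.

```python
import requests

def build_client_config(answers: dict) -> dict:
    """Assemble a (simplified) Keycloak client representation from prompt answers."""
    return {
        "clientId": answers["client_id"],
        "protocol": "openid-connect",
        "publicClient": answers.get("public", False),
        "redirectUris": answers.get("redirect_uris", []),
    }

def deploy(base_url: str, realm: str, token: str, config: dict) -> None:
    """POST the generated JSON to a Keycloak admin endpoint (shown here for illustration)."""
    response = requests.post(
        f"{base_url}/admin/realms/{realm}/clients",
        headers={"Authorization": f"Bearer {token}"},
        json=config,
    )
    response.raise_for_status()

# Example: answers that the TUI prompts might have collected.
answers = {"client_id": "lincs-demo-app", "public": True,
           "redirect_uris": ["https://example.org/callback"]}
deploy("https://auth.example.org", "lincs", "ADMIN_TOKEN", build_client_config(answers))
```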

Website Design

This website was created out of boredom after finishing exams and realizing that I had nothing to do. I set off to learn something new by starting a project with the intention of challenging myself. While trying to find something to create that was different from my other projects, I realized that I had amassed a folder full of finished projects that I wasn't doing anything with. I figured it was time to publish some of my best work, and this website is the result.

This website is built and styled entirely with HTML and CSS, along with some JavaScript animations. Before this website, I had only a rudimentary knowledge of HTML from creating The Java Handbook, and even that amounted to simple bolding, font-size, and emphasis tags. While planning this website, I made it my mission not only to learn more HTML but also to learn CSS and JavaScript so I could make things more aesthetically pleasing. I could not be happier with the results. I bought the domain in 2018 and, being the perfectionist I am, rebuilt the entire site from the ground up in 2019 as I learned more advanced styling. The site is hosted through GitHub Pages, and the source files can be viewed on GitHub.

TicTacToe AI

During the 2020 Fall semester, I took a course called Data Structures, which explored many different ways of storing data within programming languages. For the final project of this course, we were asked to make use of the structures we had discussed to create a statistical AI that can play TicTacToe. I really enjoyed the content we learned, and by the end of the semester I had achieved a 100% average in the course.

This program is written entirely in C; the testing files were written by my professor, while the AI functions and their implementation were written by me. I generated all possible variants of a TicTacToe board and stored them in a tree structure by linking each board to the boards reachable from it. From there, I worked through each board and assigned a score for all of X's turns and O's turns. When playing the game, the AI generates the scores and uses them to select which move to make (which board to branch to). The AI is very effective and, in my testing, appears to be unbeatable.
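
The original is in C, but the scoring idea translates directly. The Python sketch below scores boards with plain minimax, which is one way to produce the kind of win/loss scores described above; it is an illustration, not the exact statistical scoring used in the assignment.

```python
def winner(board):
    """Return 'X', 'O', or None for a 3x3 board stored as a 9-character string."""
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for a, b, c in lines:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def score(board, player):
    """Score a board from `player`'s point of view: +1 win, -1 loss, 0 draw."""
    win = winner(board)
    if win:
        return 1 if win == player else -1
    if " " not in board:
        return 0
    turn = "X" if board.count("X") == board.count("O") else "O"
    children = [board[:i] + turn + board[i+1:] for i, c in enumerate(board) if c == " "]
    scores = [score(child, player) for child in children]
    return max(scores) if turn == player else min(scores)

def best_move(board, player):
    """Pick the empty cell whose resulting board scores best for `player`."""
    moves = [i for i, c in enumerate(board) if c == " "]
    return max(moves, key=lambda i: score(board[:i] + player + board[i+1:], player))
```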

Hashtable Database

This was an assignment for the Fall 2020 Data Structures course. Our task was to use hashtables to build a database of information from the IMDb movie dataset. These hashtables were then used to create a series of search functions relating to actors and movies. The programs take input from a specific file format created by my professor and generate hashtable files that map keys to values for faster lookups.

This program is written in C and makes use of a hashtable library from another assignment. Multiple programs are included: one is responsible for generating the hashtables and outputting them to files, while the rest perform searches on the generated files. Internally, each actor and movie is assigned a key, and the keys are mapped to each other, creating connections between actors and the movies they appeared in.
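
The C version stores these mappings in hashtable files; the small Python sketch below uses plain dictionaries just to illustrate the two-way key mapping, with made-up keys and titles.

```python
# Illustrative in-memory version of the actor/movie key mapping.
actors = {}            # actor key -> actor name
movies = {}            # movie key -> movie title
actor_to_movies = {}   # actor key -> set of movie keys
movie_to_actors = {}   # movie key -> set of actor keys

def add_credit(actor_key, actor_name, movie_key, movie_title):
    """Register an actor/movie pair and link their keys in both directions."""
    actors[actor_key] = actor_name
    movies[movie_key] = movie_title
    actor_to_movies.setdefault(actor_key, set()).add(movie_key)
    movie_to_actors.setdefault(movie_key, set()).add(actor_key)

def movies_for(actor_key):
    """Search: titles of every movie an actor appeared in."""
    return [movies[m] for m in actor_to_movies.get(actor_key, set())]

add_credit("a1", "Example Actor", "m1", "Example Movie")
print(movies_for("a1"))  # ['Example Movie']
```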

Data Structure Libraries

This is a collection of libraries that I wrote for the Fall 2020 Data Structures course. Each assignment in the course centred on writing a library of functions for manipulating a specific data structure. Across the five assignments, we covered pointer/byte/bit manipulation, arrays, linked lists, trees, and hashtables. The test files included were written by my professor, but the functions within the libraries were written by me.

All of the libraries were written in C, the language chosen for the course. Since C provides no data structures natively, these libraries are genuinely useful when programming C applications. Each library contains functions for creating and deleting a structure, adding and removing elements, sorting elements, and searching for elements.
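
The libraries themselves are C, but the shape of the interface is the same in any language. As a rough illustration, here is a Python sketch of the kind of operations each library exposes, using a singly linked list; the names and structure are my own, not the assignment's.

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

class LinkedList:
    """Minimal singly linked list with the create/add/remove/sort/search operations."""
    def __init__(self):
        self.head = None

    def add(self, value):                      # add an element at the front
        self.head = Node(value, self.head)

    def remove(self, value):                   # remove the first matching element
        prev, node = None, self.head
        while node:
            if node.value == value:
                if prev:
                    prev.next = node.next
                else:
                    self.head = node.next
                return True
            prev, node = node, node.next
        return False

    def search(self, value):                   # linear search for an element
        node = self.head
        while node:
            if node.value == value:
                return node
            node = node.next
        return None

    def sort(self):                            # rebuild the list in sorted order
        values = sorted(self.to_list())
        self.head = None
        for value in reversed(values):
            self.add(value)

    def to_list(self):
        node, out = self.head, []
        while node:
            out.append(node.value)
            node = node.next
        return out
```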

The LINCS Browser Extension

The LINCS browser extension is a Chromium-based extension designed as an access point for the datasets that LINCS stores. The basic function of the extension is to act as a webpage scanner for information related to the datasets of LINCS' research partners. After opening the extension on a website, the entire content of the page is run through an NER algorithm, and the results are queried against the LINCS triplestore. The related information is then displayed inline in the page content, with links that connect out to other LINCS tools.

The main purpose of this extension is to give researchers a sense of the kind of information LINCS can offer them. The extension is built with JS and HTML, using the Wink NLP library and SPARQL queries sent to an Apache Fuseki triplestore. I was the main developer working on this project for LINCS; my employers gave me a list of requirements and the freedom to build the extension from the ground up. Later in the development cycle, I worked directly with the project's UX team to design and implement an interface that is both visually appealing and functional.
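
The extension's lookup step is essentially a label match against the triplestore. The extension itself is JavaScript, but the query is easy to sketch in Python with SPARQLWrapper; the endpoint URL and the assumption that entities carry rdfs:label values are illustrative, not the real LINCS data model.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Hypothetical Fuseki endpoint; the real LINCS endpoint and vocabulary differ.
ENDPOINT = "https://fuseki.example.org/lincs/sparql"

QUERY = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?resource ?label WHERE {
  ?resource rdfs:label ?label .
  FILTER(LCASE(STR(?label)) = LCASE("%s"))
}
LIMIT 10
"""

def lookup(entity_label: str):
    """Query the triplestore for resources labelled with a name the NER step found."""
    sparql = SPARQLWrapper(ENDPOINT)
    sparql.setReturnFormat(JSON)
    sparql.setQuery(QUERY % entity_label.replace('"', '\\"'))
    results = sparql.query().convert()
    return [(row["resource"]["value"], row["label"]["value"])
            for row in results["results"]["bindings"]]

# e.g. lookup("Margaret Atwood") -> [(resource URI, label), ...]
```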

Schema App Highlighter Extension

Schema App's Highlighter Extension is one of the newest additions to its line of Schema markup creation tools. The project is a Chrome extension that lets users navigate through pages while creating highlighter templates, one of the most commonly used services at Schema App. Using XPaths, these templates let you create generalized markup for a webpage and then quickly deploy it on potentially millions of similar webpages. The markup itself is generated from the template every time the page loads, so it is always up to date. I worked solely on the frontend for this app.
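
Conceptually, a highlighter template is a mapping from schema.org fields to XPaths that get evaluated against the live page. The Python sketch below (using lxml) shows that idea; the template fields, XPaths, and product example are invented for illustration and are not Schema App's actual format.

```python
from lxml import html

# A made-up highlighter template: schema.org fields mapped to XPaths.
TEMPLATE = {
    "@type": "Product",
    "name": "//h1[@class='product-title']/text()",
    "description": "//div[@id='description']//p/text()",
}

def apply_template(page_html: str, template: dict) -> dict:
    """Build schema.org JSON-LD by evaluating each XPath against the page."""
    tree = html.fromstring(page_html)
    markup = {"@context": "https://schema.org", "@type": template["@type"]}
    for field, xpath in template.items():
        if field == "@type":
            continue
        values = tree.xpath(xpath)
        if values:
            markup[field] = values[0].strip()
    return markup
```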

The Highlighter Extension is built using Google's extension developer tooling, along with Vue. For the backend, it interfaces directly with the monolithic Laravel PHP app that already exists for the Schema App and Schema Admin App tools. Through this project, I learned a lot about building extensions and about Vue. On my first Chrome extension, the LINCS Browser Extension, I was given free rein to build things however I saw fit; now that I have a professionally built extension to compare it to, I can identify areas I would have built very differently. The other really interesting aspect of this project was learning about the suite of dev tools created solely for the development team. These are parts of the app that are hidden from normal users and exist solely to speed up development and reduce debugging time.

UUID Generator

The UUID generator is a very simple web service developed for the LINCS project. An essential part of linked open data generation is the need to mint new URIs, and this web service is responsible for creating those URIs according to LINCS specifications. Considering the number of universally unique identifiers we need, it made the most sense to create a service that would mint them for us and let us standardize the lengths and characters used. The service also prevents the rare case of two different resources ending up with the same URI.

The service consists of two endpoints, one for generating a single UUID and one for batch-generating multiple UUIDs. It is built on Node.js, with a library called NanoID for UUID generation and a Postgres database for storing the UUIDs that have already been issued. This project gave me my first experience setting up a Kubernetes deployment through GitLab Auto DevOps. I also learned a lot about setting up both Docker and Docker Compose for local development.
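
The core idea is NanoID-style generation from a fixed alphabet plus a uniqueness check against already-issued identifiers. The Python sketch below stands in a set for the Postgres table; the alphabet, length, and base URI are placeholders rather than the LINCS specification.

```python
import secrets

# Alphabet and length are illustrative; the real service standardizes its own.
ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyz"
LENGTH = 12

issued = set()  # stands in for the Postgres table of already-issued identifiers

def mint_uri(base="https://example.org/id/"):
    """Generate a NanoID-style identifier, retrying on the (very unlikely) collision."""
    while True:
        candidate = "".join(secrets.choice(ALPHABET) for _ in range(LENGTH))
        if candidate not in issued:
            issued.add(candidate)
            return base + candidate

def mint_batch(count, base="https://example.org/id/"):
    """Batch endpoint equivalent: mint several URIs in one call."""
    return [mint_uri(base) for _ in range(count)]
```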

Linked Open Data Visualization

The Linked Open Data Visualization (Lod Viz) is another project I worked on for LINCS. It is a visualization of linked open data originally designed by the Urban Complexity Lab (UCLAB) in Germany. The original version of the tool ran off CSV data rather than the linked open data it was meant to use. My contribution to the project was to create a form for generating true LOD, build an admin page for vetting the form submissions, and modify the visualization to run off the generated data.

Lod Viz is a PHP application, and for my modifications I had to learn the language. I used a Postgres database to hold form submissions, basic auth for the admin page, Vue.js for UX components, a PHP library called EasyRdf to generate the LOD, and an Apache Fuseki triplestore to hold the generated data. After finishing my changes, I created a Python script with the Pandas and rdflib modules to translate the existing CSV-formatted data into LOD.
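
That conversion script follows the usual rdflib pattern: read each CSV row, mint a subject URI, and add a few triples per row. Below is a simplified sketch of that pattern; the column names, vocabulary, and URI scheme are invented for illustration rather than taken from the real LINCS data.

```python
import pandas as pd
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, RDFS

# Hypothetical vocabulary; the real script follows LINCS' ontologies.
EX = Namespace("https://example.org/vocab/")

def csv_to_rdf(csv_path: str, out_path: str) -> None:
    """Translate each CSV row into a small cluster of triples and serialize as Turtle."""
    frame = pd.read_csv(csv_path)
    graph = Graph()
    graph.bind("ex", EX)
    for _, row in frame.iterrows():
        subject = URIRef(f"https://example.org/id/{row['id']}")
        graph.add((subject, RDF.type, EX.Dataset))
        graph.add((subject, RDFS.label, Literal(row["title"])))
        graph.add((subject, EX.year, Literal(int(row["year"]))))
    graph.serialize(destination=out_path, format="turtle")
```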

Schema App

Through this interface, Schema App subscribers are able to configure their markup and their deployment methods. There are many ways of generating markup, including a manual data creation editor for static pages, a web crawler that goes through sites to apply daily updates to markup, templates that generate markup on page load, and, in some cases, pulling information directly from third-party applications the subscriber already uses. For deployment, users can configure integrations with services such as Shopify, BigCommerce, WordPress, and other third-party tools. This is a massive repository built in monolithic Laravel PHP and Vue with Bootstrap components. My contributions include a variety of bug fixes and UX improvements.

Coming into this project, I had minimal experience with PHP. Coming out of it, I had used PHP in almost everything I'd done for four months, and it was a fantastic learning experience. I originally had a number of negative assumptions about PHP: that it was old, outdated, and not worth learning. In reality, I have come to enjoy the language a fair bit. My previous experience with EasyRdf turned out to be very useful, as Schema App uses a modified version of the same library to build an API called GraphWrangler, which simplifies programmatic access to RDF data. The main data source of the app is Amazon's Neptune triplestore, which turns out to be remarkably similar to a triplestore I've used before, Blazegraph; it was even developed by the same group of people. Working on this project was great for deepening my knowledge of technologies I'd already used.

Schema Admin App

The Schema Admin App is the administrative side of Schema App's main platform. It is where Customer Success Managers and Operations Managers access client accounts to configure markup, set up enterprise contract deals, make administrative changes to a customer's account, and view metrics, customer information, and more. The app essentially contains everything that has been deemed to require manual administrative configuration, and it is where most of the Customer Insights Engine team does its work. During my time on this project I made a variety of UX improvements and bug fixes, primarily around the customer info reports system and the data editor.

The Schema Admin App is actually part of the same repo as the main app and uses the same monolithic Laravel PHP and Vue with Bootstrap approach. By packaging the two in one repo, the company was able to reuse a lot of foundational components and increase development speed, since modifying one component affects both apps. There are some components unique to the admin app, however, including some really interesting data-centric models for building data reports and modifying data entry forms. The admin app is where I was able to practice my SPARQL and RDF knowledge.
