In this post I’m going to cover where the data for my site comes from, and I’ll also share how that will change over the next few months as I work towards a stable release (v1.0.0) of the website. This is the first post in a series tagged “meta”, where I’ll write about issues I work through while building my personal site.
How this project came to exist
When this website was first published circa 2012, it was an aggregation of social posts and links to projects I was working on. My intention was to have a space online where I could share projects using my own design and user experience.
Since then, this website has remained a digital sandbox where I explore and test new tools, technology, and designs. My experiments with tools and frameworks led me to build many of the older, archived projects in my GitHub account.
Here is a diagram showing how requests for data currently flow behind this site.
All of the components and services are open source projects available on GitHub.
Where this project is going
This site will continue to serve as my blog, journaling the things I create and the challenges I resolve. It’ll also remain a showcase of my personal code and creative projects. jQuery will be cut out when the site moves away from Zurb Foundation 5 to something more modern.
In order to iterate faster and focus on building cool features instead of maintaining old ones, some tech debt needs to be addressed, with the aim of making the architecture and code less complex and more consistent.
Here is a rough diagram showing how I plan to reorganize the data flow from above.
Starting at the top, the one-off stats “site” goes away and all content moves to a new universal client, which for now can continue to be served by Jekyll while I evaluate options. The Personal API will be the primary interface for all data sources, whereas my current setup has the client call a mix of microservices, the Personal API, and external APIs per page.
The Personal API will interact with data sources through a data abstraction layer, referred to as entities, which will facilitate database CRUD operations and abstract external API interaction. Entities are a concept rather than a separate service; they can live within the Personal API code.
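To make the entity concept concrete, here is a minimal TypeScript sketch of what such an abstraction layer could look like: one generic CRUD interface that both database-backed and external-API-backed data sources implement. All names here (`Entity`, `Post`, `PostEntity`) are hypothetical, not the actual code behind this site, and the in-memory `Map` stands in for a real database.

```typescript
// A generic CRUD contract that every entity implements, regardless of
// whether the data lives in a database or behind an external API.
interface Entity<T> {
  find(id: string): Promise<T | null>;
  list(): Promise<T[]>;
  create(data: T): Promise<T>;
  update(id: string, data: Partial<T>): Promise<T | null>;
  remove(id: string): Promise<boolean>;
}

interface Post {
  id: string;
  title: string;
  body: string;
}

// In-memory stand-in for a database-backed entity; a real implementation
// would issue queries instead of touching a Map.
class PostEntity implements Entity<Post> {
  private store = new Map<string, Post>();

  async find(id: string) {
    return this.store.get(id) ?? null;
  }
  async list() {
    return [...this.store.values()];
  }
  async create(data: Post) {
    this.store.set(data.id, data);
    return data;
  }
  async update(id: string, data: Partial<Post>) {
    const existing = this.store.get(id);
    if (!existing) return null;
    const updated = { ...existing, ...data };
    this.store.set(id, updated);
    return updated;
  }
  async remove(id: string) {
    return this.store.delete(id);
  }
}
```

Because callers only see the `Entity<T>` interface, swapping a cached external-provider source for a database table doesn't ripple out into the rest of the API code.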
- Personal Site – blog engine and client.
- Personal API – API for my personal data and metrics.
- Job Scheduler – job scheduler for internal tasks.
- Entities – data abstraction layer for API calls and database operations.
The main items I plan on addressing are:
- Reducing the number of network calls the client makes to render the site. The current website makes 7 client-side calls to render the home page. Once all data is available through the Personal API, I will expose a route that returns all data needed to render the page in one call.
- Reducing requests to external providers and gaining more control over my data by making scheduled jobs to fetch and store data.
- Improving code convention and consistency by creating an abstraction layer for interacting with data through entities, which will act like Models in an MVC architecture.
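The first item above, collapsing several client-side calls into one, could be sketched like this in TypeScript. The fetchers and the `HomePageData` shape are invented for illustration; the point is that the Personal API fans out to its data sources concurrently on the server, and the client makes a single request instead of seven.

```typescript
// Hypothetical shape of everything the home page needs in one response.
type HomePageData = {
  posts: string[];
  repos: string[];
  stats: Record<string, number>;
};

// Stand-ins for entity-backed lookups; in practice these would read from
// the database or from locally stored copies of external-provider data.
const fetchPosts = async (): Promise<string[]> => ["hello-world"];
const fetchRepos = async (): Promise<string[]> => ["personal-site"];
const fetchStats = async (): Promise<Record<string, number>> => ({ commits: 42 });

// One route handler gathers all home-page data concurrently, so the
// client pays for a single round trip instead of one per data source.
async function getHomePageData(): Promise<HomePageData> {
  const [posts, repos, stats] = await Promise.all([
    fetchPosts(),
    fetchRepos(),
    fetchStats(),
  ]);
  return { posts, repos, stats };
}
```

Pairing this with the scheduled jobs from the second item means the fan-out hits local data rather than external providers, keeping the single aggregated call fast.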