In part 1 we covered the non-functional checks in code reviews.
In part 2 we will focus on the functional aspects of code reviews.
Caching
Caching is a critical aspect of an application that's often ignored. The savings in network resources and response time from caching add up quickly. To cache resources well, every developer needs to understand the data pipeline throughout the application and use caching effectively at different levels: client, server and application. For a Gramex application, that would be nginx (static assets) and the Gramex application itself (queries, configurations etc.). Read more in the Gramex Guide.
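At the client level, for instance, the fetch API lets a request state how it should use the browser's HTTP cache. A minimal sketch (the endpoint name is made up):
// Reuse a cached copy of a rarely-changing resource when one exists;
// fall back to the network otherwise.
fetch('config/navigation.json', { cache: 'force-cache' })
  .then(function(response) { return response.json() })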
Using the Developer Tools in your favorite browser will speed up debugging of caching issues. Look at the Network tab (pressing F12 in the browser should open Developer Tools) and observe the HTTP status code of every request to determine whether caching is working as expected.
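To verify this programmatically, the Resource Timing API helps: a transferSize of 0 on a same-origin resource that still has a decoded body is a rough (not definitive) signal that it was served from the browser cache.
// Rough heuristic: list which resources were likely served from the browser cache.
performance.getEntriesByType('resource').forEach(function(entry) {
  var cached = entry.transferSize === 0 && entry.decodedBodySize > 0
  console.log(entry.name, cached ? 'from cache' : 'over the network')
})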
Trivia: I found out we weren't using the HTTP/2 protocol for our website, which lets browsers fetch resources in parallel over a single connection and improves the experience for users. We fixed it immediately. This isn't related to caching, but was a happy accident of looking for caching issues. Read more about the benefits of HTTP/2 in this Kinsta blog.
Error Handling
Client-side code that fetches data from server endpoints is frequently not implemented thoroughly. Developers often code for the happy path but skip handling the errors that show up in production. Ultimately, real users will use the application and need to be notified of the outcome of every action (successful or otherwise).
Consider the code below that retrieves a list of districts from an API.
fetch('districts')
  .then(function(response) {
    // response.json() parses the response body
  })
It uses fetch, a web standard API, but doesn't handle potential errors. A better version is below:
function fetch_error(response) {
  // response.ok is false for HTTP errors (4xx and 5xx status codes)
  if (!response.ok) show_custom_message('Error while fetching details. Retry in some time.')
  // response.json() returns a promise with the parsed body
  return response.json()
}
fetch('districts')
  .then(fetch_error)
  .then(function(data) {
    // data is the parsed output
  })
  .catch(function(error) {
    // catches network errors and any error thrown above
    show_custom_message('Custom error message.')
  })
All HTTP errors can be caught and addressed in an end-user-friendly way, for example by creating custom error pages. A few frequent errors that must be handled well are unauthenticated user (401), unauthorized user (403), resource not found (404), request timeout (408), internal server error (500) and bad gateway (502). Read about all HTTP status codes at MDN.
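As an illustration (the messages and the show_custom_message helper from the earlier snippet are placeholders), a small lookup can turn status codes into messages end users understand:
// Map common HTTP status codes to end-user-friendly messages.
var error_messages = {
  401: 'Please log in to continue.',
  403: 'You do not have access to this resource.',
  404: 'We could not find what you were looking for.',
  500: 'Something went wrong on our side. Please retry in some time.',
  502: 'The server is temporarily unreachable. Please retry in some time.'
}
function show_http_error(response) {
  show_custom_message(error_messages[response.status] || 'Unexpected error. Please retry.')
}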
Excessive database requests
Imagine you're rendering navigation filters with data from the database, after which you fetch more data from the database to render components (charts, tables etc.).
The database gets busy once several users access the application. To reduce the load on the database, we could serve the preliminary data from pre-processed JSON files.
Trivia
In a recent project we noticed that minimum and maximum values for numeric columns were being calculated every time the user performed a specific, frequently occurring action. One dataset had over 50 numeric columns, so one request per column was sent to fetch the minimum and maximum values.
Solution
Pre-calculating the minimum and maximum values for all numeric columns and saving them in JSON files improved the application's performance ten-fold.
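A rough sketch of that pre-computation (file names and columns are made up), run once as a build or deployment step rather than on every request:
// One-time script: compute min/max for every numeric column and save as JSON.
const fs = require('fs')

const rows = JSON.parse(fs.readFileSync('data/records.json', 'utf8'))
const ranges = {}
for (const row of rows) {
  for (const [column, value] of Object.entries(row)) {
    if (typeof value !== 'number') continue
    if (!(column in ranges)) ranges[column] = { min: value, max: value }
    ranges[column].min = Math.min(ranges[column].min, value)
    ranges[column].max = Math.max(ranges[column].max, value)
  }
}
// The application serves this file instead of querying the database each time.
fs.writeFileSync('data/column-ranges.json', JSON.stringify(ranges, null, 2))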
Reinventing the wheel
Popular libraries already handle frequently occurring use-cases such as converting strings to lower case (lodash helpers instead of a custom regex in JavaScript) and formatting date objects as strings (datetime in Python).
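For example, lowercasing or slugifying a title with a hand-written regex is a common sight in reviews; lodash (assuming it's already a dependency) covers these cases:
// Hand-rolled: a custom regex to turn a title into a lower-case slug
var slug = 'District Wise Report'.toLowerCase().replace(/[^a-z0-9]+/g, '-')

// With lodash: the same intent, already tested for edge cases
var title = _.toLower('District Wise Report')   // 'district wise report'
var slug2 = _.kebabCase('District Wise Report') // 'district-wise-report'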
Smaller functions
Increasingly, I'm a fan of function composition. Each function does a small unit of work and nothing more. This is harder than it sounds because of the mental models developers carry: it's easier to write everything as one continuous block than to break it into small functions. Deliberate training can help here, as in the sketch below.
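A small, made-up sketch of the difference:
// One function doing everything: harder to test and reuse
function render_summary(rows) {
  var active = rows.filter(function(row) { return row.status === 'active' })
  var total = active.reduce(function(sum, row) { return sum + row.value }, 0)
  return 'Active total: ' + total
}

// Composed from small functions, each doing one unit of work
function is_active(row) { return row.status === 'active' }
function sum_values(rows) { return rows.reduce(function(sum, row) { return sum + row.value }, 0) }
function format_total(total) { return 'Active total: ' + total }

function render_summary_composed(rows) {
  return format_total(sum_values(rows.filter(is_active)))
}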
Several other key functional checks include unresponsive endpoints (5XX HTTP status codes), long request times, slow DOM renders, unreachable and unused code, auto-deployment checks, an updated README with deployment steps, persistent state via URL parameters, and automated validation of eslint and pep8 rules.
How differently would you review code in your teams? Share your thoughts in the comments.