Here’s How Not to Suck at JavaScript

Let’s be honest. A lot of JavaScript code sucks. Change that!

JavaScript is a force to be reckoned with. It is the single most widely used programming language in the world. Its simplicity and the abundance of learning resources make it approachable for beginners. Large talent pools make JavaScript appealing to companies of all sizes. A large ecosystem of tools and libraries is a true boon to developer productivity. And having one language to rule both the frontend and the backend is a huge benefit: the same skill set can be utilized across the entire stack.

JavaScript’s power is nuclear

JavaScript provides lots of tools and options, which is good! The bitter truth, however, is that it imposes almost no limitations on the developer. Giving JavaScript to somebody inexperienced is akin to giving a box of matches to a 2-year old child along with a can of gasoline…

The power of JavaScript is nuclear — it can be used to supply cities with electricity, or it can be used to destroy. Building something that works with JavaScript is easy. Building software that is both reliable and maintainable is not.

Code Reliability


When building a dam, engineers first and foremost concern themselves with reliability. Building a dam without any planning or safety measures in place is dangerous. The same applies to building bridges, airplanes, cars… you name it. None of the fancy features matter if a car is unsafe and unreliable — not the horsepower, the loudness of the exhaust, or even the type of leather used in the interior.

Similarly, the goal of every software developer is to write reliable software. Nothing else matters if the code is buggy and unreliable. And what is the best way to write reliable code? Simplicity. Simplicity is the opposite of complexity. Therefore, our first and foremost responsibility as software developers should be to reduce code complexity.

What sets an experienced developer apart from an inexperienced one is whether or not they can write reliable software. And yes, reliable also implies maintainable — only a maintainable codebase can be reliable. This entire piece focuses on writing code that is reliable.

Although I’m a strong believer in Functional Programming, I’m not going to preach anything. Instead, I’m going to take a few useful concepts from Functional Programming and demonstrate how they can be applied in JavaScript.

Do we really need software reliability? That’s up to you to decide. Some people would argue that software only needs to be good enough for customers to keep using it. I tend to disagree. In fact, I’m a strong believer that nothing else matters if the software is unreliable and unmaintainable. Who would buy an unreliable car that breaks down and accelerates randomly? How many people would use a phone that loses connectivity a few times a day and restarts randomly? Probably not too many. Software isn’t that much different.

There’s Not Enough RAM


How do we develop reliable software? We have to consider the amount of available RAM. As any developer knows, we should design our programs to be memory-efficient and never consume all of the available RAM. If that happens, memory swapping starts taking place — anything that does not fit into RAM is stored on the hard drive, and the performance of every running program starts degrading.

How does this relate to writing reliable software? The human brain has its own version of RAM, called working memory. Yes, our brain is the most powerful machine in the known universe, but it comes with its own set of limitations — we can only hold about five pieces of information in our working memory at any given time.

This directly translates to programming — simple code consumes fewer mental resources, makes us more efficient, and results in more reliable software. This article, along with some of the available JavaScript tooling, will help you achieve that!

A Note to Beginners

In this article, I will be making heavy use of ES6 features. Make sure that you’re familiar with ES6 before reading. As a brief refresher:

// ---------------------------------------------
// lambda (fat arrow) anonymous functions
// ---------------------------------------------

const doStuff = (a, b, c) => { /* ... */ };

// same as:
function doStuff(a, b, c) { /* ... */ }

// ---------------------------------------------
// object destructuring
// ---------------------------------------------

const doStuff = ({ a, b, c }) => { /* ... */ };

// same as:
const doStuff = (params) => {
  const { a, b, c } = params;
  /* ... */
};

// ---------------------------------------------                            
// array destructuring
// ---------------------------------------------

const [a, b] = [1, 2];
// same as:
const array = [1, 2];
const a = array[0];
const b = array[1];


Tooling

One of the greatest strengths of JavaScript is the available tooling. No other programming language can boast access to such a large ecosystem of tools and libraries.

I also probably won’t surprise anyone by saying that we should be making use of such tooling, ESLint in particular. ESLint is a tool for static code analysis. It is the most important tool for finding potential issues within a codebase and keeping its quality high. The best part is that linting is a fully automated process, and it can prevent low-quality code from making its way into the codebase.

Many people barely make use of ESLint — they simply enable a pre-built config, like eslint-config-airbnb, and consider the job done. That’s a reasonable start, but unfortunately, it barely scratches the surface of what ESLint has to offer. JavaScript is a language with almost no limitations, and an inadequate linting setup can have far-reaching consequences.

Yes, being familiar with all of the features of a language can be useful. However, a skilled developer is also someone who knows which features not to use. JavaScript is an old language; it comes with plenty of baggage, and it attempts to do it all. It is important to be able to tell the good parts from the bad ones.

ESLint Configuration

If you want to follow along and make use of some of the suggestions from this article, then you can set up ESLint as follows. I would recommend becoming familiar with the suggestions one by one, and adding the corresponding ESLint rules to your project one by one as well. Configure them initially as warn, and once you’re comfortable, you can convert some of the rules into error.

In the root directory of your project run:

npm i -D eslint
npm i -D eslint-plugin-fp

Then create a .eslintrc.yml file within the root directory of your project:

env:
  es6: true
plugins:
  - fp
rules:
  # rules will go in here

If you’re using an IDE like VSCode, make sure to set up an ESLint plugin.

You can also run ESLint manually from the command line:

npx eslint .

The Importance of Refactoring


Refactoring is the process of reducing the complexity of the existing code. When properly applied, it becomes the best weapon we have against the dreaded monster of technical debt. Without continuous refactoring, the technical debt will keep accumulating, which in turn will make the developers slower and more frustrated. Technical debt is one of the main reasons for developer burn-out.

Refactoring boils down to the process of cleaning up the existing code while making sure that the code still functions correctly. Refactoring is considered to be good practice in software development and it should be a normal part of the development process in any healthy organization.

As a word of caution, before undertaking any refactoring, it is best to have the code covered with automated tests. It is easy to inadvertently break the existing functionality and having a comprehensive test suite in place is a great way to limit any potential issues.

The Biggest Source of Complexity


I know this might sound weird, but the code itself is the biggest source of complexity. In fact, no code is the best way to write secure and reliable software. Frankly, writing no code is not always possible, so the second-best thing is to reduce the amount of code. Less code means less complexity — it’s as simple as that! Less code also means a smaller surface area for bugs to attach to. There’s even a saying that junior developers write code, while senior developers delete code. I couldn’t agree more.

Long files

Let’s admit it, humans are lazy. Laziness is a short-term survival strategy wired into our brains. It helps us preserve energy by avoiding things that are not critical to our survival.

Some of us are a little lazy and not very disciplined, and keep putting more and more code into the same file. If there’s no limit on the length of a file, such files tend to grow indefinitely. In my experience, files with over 200 lines of code become too much for the human brain to comprehend and become hard to maintain. Long files are also a symptom of a bigger problem — the code is probably doing too much, which violates the Single Responsibility Principle.

How can this be addressed? Easy! Simply break down large files into smaller more granular modules.

Suggested ESLint configuration:

  max-lines:
    - warn
    - 200

Long functions

Another major source of complexity is long and complex functions. Such functions are hard to reason about. They typically have too many responsibilities and are hard to test.

Let’s consider the following express.js code snippet for updating a blog entry:

router.put('/api/blog/posts/:id', (req, res) => {
  if (!req.body.title) {
    return res.status(400).json({
      error: 'title is required',
    });
  }

  if (!req.body.text) {
    return res.status(400).json({
      error: 'text is required',
    });
  }

  const postId = parseInt(req.params.id, 10);

  let blogPost;
  let postIndex;
  blogPosts.forEach((post, i) => {
    if (post.id === postId) {
      blogPost = post;
      postIndex = i;
    }
  });

  if (!blogPost) {
    return res.status(404).json({
      error: 'post not found',
    });
  }

  const updatedBlogPost = {
    id: postId,
    title: req.body.title,
    text: req.body.text,
  };

  blogPosts.splice(postIndex, 1, updatedBlogPost);

  return res.json({ updatedBlogPost });
});

The function body is 38 lines long and it does several things: parses post id, finds an existing blog post, validates the user input, returns validation errors in case of invalid input, updates the collection of posts, and returns the updated blog posts.

Clearly, it can be refactored into a few smaller functions. The resulting route handler could then look something like this:

router.put("/api/blog/posts/:id", (req, res) => {
  const { error: validationError } = validateInput(req.body);
  if (validationError) return errorResponse(res, validationError, 400);

  const { blogPost } = findBlogPost(blogPosts, req.params.id);

  const { error: postError } = validateBlogPost(blogPost);
  if (postError) return errorResponse(res, postError, 404);

  const updatedBlogPost = buildUpdatedBlogPost(req.body);

  updateBlogPosts(blogPosts, updatedBlogPost);
  return res.json({ updatedBlogPost });
});

Suggested ESLint configuration:

  max-lines-per-function:
    - warn
    - 20

Complex functions

Complex functions go hand-in-hand with long functions — longer functions are always more complex than shorter ones. What makes functions complex? Multiple things, but the ones that are easiest to fix are nested callbacks and high cyclomatic complexity.

Nested callbacks often result in callback hell. This can easily be mitigated by promisifying the callbacks and then making use of async/await.

Here’s an example of a function with deeply nested callbacks:

fs.readdir(source, function (err, files) {
  if (err) {
    console.error('Error finding files: ' + err)
  } else {
    files.forEach(function (filename, fileIndex) {
      gm(source + filename).size(function (err, values) {
        if (err) {
          console.error('Error identifying file size: ' + err)
        } else {
          aspect = (values.width / values.height)
          widths.forEach(function (width, widthIndex) {
            height = Math.round(width / aspect)
            this.resize(width, height).write(dest + 'w' + width + '_' + filename, function (err) {
              if (err) console.error('Error writing file: ' + err)
            })
          }.bind(this))
        }
      })
    })
  }
})
Cyclomatic complexity

Another major source of function complexity is cyclomatic complexity. In a nutshell, it refers to the number of independent paths through a function’s logic — think if statements, loops, and switch statements. Functions with high cyclomatic complexity are hard to reason about, and their use should be limited. Here’s an example:

if (conditionA) {
  if (conditionB) {
    while (conditionC) {
      if (conditionD && conditionE || conditionF) {
        // do stuff
      }
    }
  }
}

Suggested ESLint configuration:

  complexity:
    - warn
    - 5
  max-nested-callbacks:
    - warn
    - 2
  max-depth:
    - warn
    - 3

What is the other important way to reduce the amount of code, and along with it, the complexity? Declarative code, but more on that later.

Mutable State


What is state? Simply put, state is any temporary data stored in memory. Think variables or fields within objects. State by itself is quite harmless. Mutable state though is one of the biggest sources of complexity in software. Especially when coupled with object-orientation (more on this later).

Limitations of the human brain

Why is mutable state such a big problem? As I have said earlier, the human brain is the most powerful machine in the known universe. However, our brains are really bad at working with state since we can only hold about five items at a time in our working memory. It is much easier to reason about a piece of code if you only think about what the code does, not what variables it changes around the codebase.

Programming with mutable state is an act of mental juggling. I don’t know about you, but I can probably juggle two balls. Give me three or more, and I will certainly drop all of them. Coding is the same — I became much more productive, and my code became much more reliable, once I dropped mutable state.

The problems with mutable state

Let’s see in practice how mutability can make our code problematic:

const increasePrice = (item, increaseBy) => {
  // never ever do this
  item.price += increaseBy;

  return item;
};

const oldItem = { price: 10 };

const newItem = increasePrice(oldItem, 3);

// prints newItem.price 13
console.log('newItem.price', newItem.price);

// prints oldItem.price 13
// unexpected?
console.log('oldItem.price', oldItem.price);

The bug is very subtle, but by mutating the function arguments we’ve accidentally modified the price of the original item. It was supposed to remain 10, but in reality, the value has changed to 13!

How do we avoid such issues? By constructing and returning a new object instead (immutability):

const increasePrice = (item, increaseBy) => ({
  ...item,
  price: item.price + increaseBy,
});
const oldItem = { price: 10 };

const newItem = increasePrice(oldItem, 3);

// prints newItem.price 13
console.log('newItem.price', newItem.price);

// prints oldItem.price 10
// as expected!
console.log('oldItem.price', oldItem.price);

Keep in mind that copying with the ES6 spread operator makes a shallow copy, not a deep copy — it won’t copy any of the nested properties. E.g., if the item above had a nested property like seller, the seller of the new item would still reference the same object as the old item, which is undesirable. Other, more robust alternatives for working with immutable state in JavaScript include immutable.js and Ramda lenses. I’ll cover those options in another article.
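A minimal sketch of the shallow-copy pitfall (the seller property is a hypothetical example, not something from the snippets above):

```javascript
const oldItem = { price: 10, seller: { name: 'Alice' } };

// the spread operator copies only the top level (a shallow copy)
const newItem = { ...oldItem, price: 13 };

// the nested `seller` object is still shared by reference
newItem.seller.name = 'Bob';

console.log(oldItem.seller.name); // Bob — the "old" item changed too!
```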

Suggested ESLint configuration:

  fp/no-mutation: warn
  no-param-reassign: warn

Don’t push your arrays

The same problems are also inherent in array mutation using methods like push:

const a = ['apple', 'orange'];
const b = a;

a.push('microsoft');

console.log('a', a);
// ['apple', 'orange', 'microsoft']

console.log('b', b);
// ['apple', 'orange', 'microsoft']
// unexpected?
You might have expected the array b to stay the same. This error could easily have been avoided had we created a new array instead of calling push.

Such issues can easily be prevented by constructing new arrays instead:

const newArray = [...a, 'microsoft'];
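More generally, most mutating array methods have non-mutating counterparts. A quick sketch of a few common replacements:

```javascript
const a = ['apple', 'orange'];

const added = [...a, 'pear'];                  // instead of a.push('pear')
const removed = a.filter(x => x !== 'apple');  // instead of a.splice(0, 1)
const sorted = [...a].sort();                  // copy first, instead of a.sort()

console.log(a); // ['apple', 'orange'] — the original is untouched
```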


Non-determinism

Non-determinism is a fancy term that simply describes the inability of a program to produce the same output given the same input. If that doesn’t sound good to you, it’s because it isn’t. You might be thinking that 2 + 2 == 4, but this is not always the case in non-deterministic programs. Two plus two is mostly equal to four, but sometimes it might come out as three, five, or maybe even 1004.

While mutable state itself is not inherently non-deterministic, it makes the code prone to non-determinism (as demonstrated above). It is ironic that non-determinism is universally considered to be undesirable in programming, yet the most popular programming paradigms (OOP and imperative) are especially prone to non-determinism.


Immutability

If mutability is not always the best option, then what is the alternative? Immutability, of course! Making use of immutability is a very good practice, and nowadays it is encouraged by many. Yes, some might disagree and say that mutable state is great (Rust fans?). I can say one thing for certain — mutability doesn’t make a codebase more reliable.

I won’t go too deep into immutability in this article. It is a big topic, and I will probably later dedicate an entire article to immutability.

Suggested ESLint configuration:

  fp/no-mutating-assign: warn
  fp/no-mutating-methods: warn
  fp/no-mutation: warn

Avoiding the Let Keyword


Yes, var should never be used in JavaScript to declare variables — I won’t surprise anyone by saying that. However, you might be surprised to learn that the let keyword should be avoided as well. Variables declared with let can be reassigned, which makes reasoning about the code harder. Many of the bad practices we’ve covered so far run into the limitations of human RAM (working memory), and using the let keyword is no exception. When programming with the let keyword, we have to keep in mind all of the side effects and potential edge cases. We might inadvertently assign an incorrect value to a variable and waste time debugging.

This is especially applicable to unit testing. Having shared mutable state in-between multiple tests is a recipe for disaster, given that most test runners run the tests in parallel.

What are the alternatives to the let keyword? The const keyword, of course! Although it doesn’t guarantee immutability, it makes the code easier to reason about by disallowing reassignments. And honestly, you don’t really need let — in most cases, the code that reassigns values to variables can be extracted into a separate function. Let’s look at an example:

let discount;

if (isLoggedIn) {
  if (cartTotal > 100 && !isFriday()) {
    discount = 30;
  } else if (!isValuedCustomer) {
    discount = 20;
  } else {
    discount = 10;
  }
} else {
  discount = 0;
}

And the same example extracted into a function:

const getDiscount = ({ isLoggedIn, cartTotal, isValuedCustomer }) => {
  if (!isLoggedIn) {
    return 0;
  }

  if (cartTotal > 100 && !isFriday()) {
    return 30;
  }

  if (!isValuedCustomer) {
    return 20;
  }

  return 10;
};

Giving up reassignment with the let keyword might sound difficult at first, but it will make your code less complex and more readable. I haven’t used the let keyword in a very long time and have never missed it.

Getting into the habit of programming without the let keyword has a nice side effect of making you more disciplined. It will force you to break down your code into smaller more manageable functions. This, in turn, will make your functions more focused, will enforce better separation of concerns, and will also make the codebase much more readable and maintainable.

Suggested ESLint configuration:

  fp/no-let: warn

Object-Oriented Programming

“Java is the most distressing thing to happen to computing since MS-DOS.”

– Alan Kay, the inventor of Object-Oriented Programming


Object-Oriented Programming is a popular programming paradigm used for code organization. This section discusses the limitations of mainstream OOP as used in Java, C#, JavaScript, TypeScript, and other OOP languages. I’m not criticizing proper OOP (e.g. Smalltalk).

This section is completely optional, and if you think that using OOP is a must when developing software, then feel free to skip this section. Thanks.

Good programmers vs bad programmers

Good programmers write good code and bad programmers write bad code, no matter the programming paradigm. However, the programming paradigm should constrain bad programmers from doing too much damage. Of course, this is not you, since you are already reading this article and putting in the effort to learn. Bad programmers never have the time to learn; they only keep pressing buttons on the keyboard like crazy. Whether you like it or not, you will be working with bad programmers — some of them really, really bad. And, unfortunately, OOP does not have enough constraints in place to prevent bad programmers from doing too much damage.

Why was OOP invented in the first place? It was intended to help with the organization of procedural codebases. The irony is that OOP was supposed to reduce complexity; however, the tools it offers only seem to increase it.

OOP non-determinism

OOP code is prone to non-determinism — it heavily relies on mutable state. Functional programming guarantees that we will always get the same output, given the same input. OOP cannot guarantee much, which makes reasoning about the code even harder.

As I said earlier, in non-deterministic programs the output of 2+2 or calculator.Add(2, 2) mostly is equal to four, but sometimes it might become equal to three, five, and maybe even 1004. The dependencies of the Calculator object might change the result of the computation in subtle, but profound ways. Such issues become even more apparent when concurrency is involved.

Shared mutable state

“I think that large object-oriented programs struggle with increasing complexity as you build this large object graph of mutable objects. You know, trying to understand and keep in your mind what will happen when you call a method and what will the side effects be.”

— Rich Hickey, creator of Clojure

Mutable state is hard. Unfortunately, OOP further exacerbates the problem by sharing that mutable state by reference (rather than by value). This means that pretty much anything can change the state of a given object. The developer has to keep in mind the state of every object that the current object interacts with. This quickly hits the limitations of the human brain since we can hold only about five items of information in our working memory at any given time. Reasoning about such a complex graph of mutable objects is an impossible task for the human brain. It uses up precious and limited cognitive resources, and will inevitably result in a multitude of defects.

Yes, sharing references to mutable objects is a tradeoff that was made in order to increase efficiency, and it might have mattered a few decades ago. But hardware has advanced tremendously, and we should now worry more about developer efficiency than code efficiency. Even then, with modern tooling, immutability barely has any impact on performance.

OOP preaches that global state is the root of all evil. However, the irony is that OOP programs are mostly one large blob of global state (since everything is mutable and is shared by reference).

The Law of Demeter is not very useful — shared mutable state is still shared mutable state, no matter how you access or mutate that state. It simply sweeps the problem under the rug. Domain-Driven Design? That’s a useful design methodology, it helps a bit with the complexity. However, it still does nothing to address the fundamental issue of non-determinism.

Signal-to-noise ratio

Many people in the past have been concerned with the complexity introduced by the non-determinism of OOP programs. They’ve come up with a multitude of design patterns in an attempt to address such issues. Unfortunately, this only further sweeps the fundamental problem under the rug and introduces even more unwarranted complexity.

As I said earlier, the code itself is the biggest source of complexity, less code is always better than more code. OOP programs typically carry around a large amount of boilerplate code, and “band-aids” in the form of design patterns, which adversely affect the signal-to-noise ratio. This means that the code becomes more verbose, and seeing the original intent of the program becomes even more difficult. This has the unfortunate consequence of making the codebase significantly more complex, which, in turn, makes the codebase less reliable.

I’m not going to dive too deep into the drawbacks of using Object-Oriented Programming in this article. Even though there probably are millions of people who swear by it, I’m a strong believer that modern OOP is one of the biggest sources of complexity in software. Yes, there are successful projects built with OOP, however, this doesn’t mean that such projects do not suffer from unwarranted complexity.

OOP in JavaScript is an especially bad idea, since the language lacks things like static type checking, generics, and interfaces. The this keyword in JavaScript is rather unreliable.

The complexity of OOP surely is a nice exercise for the brain. However, if our goal is to write reliable software, then we should strive to reduce complexity, which ideally means avoiding OOP. If you’re interested in learning more, make sure to check out my other article Object-Oriented Programming — The Trillion Dollar Disaster.

This Keyword


The behavior of the this keyword is consistently inconsistent. It is finicky and can mean completely different things in different contexts. Its behavior even depends on who called a given function. Using the this keyword often results in subtle and weird bugs that are hard to debug.

Yes, it can be a fun interview question to ask job candidates, but knowledge of the this keyword doesn’t really tell you anything — only that the candidate has spent a few hours studying the most common JavaScript interview questions.

What would I answer if I were given a tricky piece of code using the this keyword? As a Canadian, I’d say — “I’m sorry… I don’t know.” Real-world code should not be error-prone. It should be readable, not tricky. this is an obvious language design flaw, and it should not be used.
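A small sketch of how easily this loses its meaning (the counter object is a made-up example):

```javascript
const counter = {
  count: 0,
  increment() {
    this.count += 1;
    return this.count;
  },
};

console.log(counter.increment()); // 1 — called on counter, so `this` is counter

// detach the method, and `this` no longer points at counter:
// depending on the mode, it is undefined (strict) or the global object
const increment = counter.increment;
try {
  increment();
} catch (e) {
  // in strict mode this throws a TypeError
}

console.log(counter.count); // still 1 — the detached call never updated it
```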

Suggested ESLint configuration:

  fp/no-this: warn

Declarative Code


Declarative code is a popular term nowadays, but what does it really mean? Is it any good? Let’s find out.

If you’ve been programming for some time, then most likely you’ve been making use of the imperative style of programming, which describes a set of steps to achieve the desired result. Declarative style, on the other hand, describes the desired outcome, not the specific instructions.

Some examples of commonly used declarative languages are SQL and HTML. And even JSX in React!

We don’t tell a database how to fetch our data by specifying the exact steps. We use SQL to describe what to fetch instead:

SELECT * FROM Users WHERE Country='USA';

This roughly can be represented in imperative JavaScript:

let user = null;

for (const u of users) {
  if (u.country === 'USA') {
    user = u;
    break;
  }
}

Or in declarative JavaScript, using the experimental pipeline operator:

import { filter, first } from 'lodash/fp';

const filterByCountry =
  country => filter(user => user.country === country);

const user =
  users
  |> filterByCountry('USA')
  |> first;

Which approach would you prefer? To me, the second one seems cleaner and more readable.
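The pipeline operator is still experimental, so it’s worth noting that the built-in array methods already allow a similar declarative style:

```javascript
const users = [
  { name: 'Ann', country: 'USA' },
  { name: 'Bob', country: 'Canada' },
];

// describe *what* we want, not the looping steps
const user = users.find(u => u.country === 'USA');

console.log(user.name); // Ann
```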

Prefer expressions over statements

Expressions should be preferred over statements if our goal is to write declarative code. Expressions always return a value, whereas statements are used to perform actions and do not return any result. Such actions are called “side effects” in functional programming. By the way, the state mutation discussed earlier is also a side effect.

What are some of the commonly used statements? Think if, return, switch, for, and while.

Let’s look at a simple example:

const calculateStuff = input => {
  if (input.x) {
    return superCalculator(input.x);
  }

  return dumbCalculator(input.y);
};

This can easily be rewritten as a ternary expression (which is declarative):

const calculateStuff = input => {
  return input.x
    ? superCalculator(input.x)
    : dumbCalculator(input.y);
};

And if the return statement is the only thing within an arrow function’s body, then JavaScript allows us to drop the braces and the return as well:

const calculateStuff = input =>
  input.x ? superCalculator(input.x) : dumbCalculator(input.y);

The function body was reduced from six lines of code to a single line. Declarative code superpowers!

What are some of the other drawbacks of using statements? They cause side effects and mutations, which make the code prone to non-determinism, less readable, and less reliable. Statements are unsafe to reorder, since they depend on the order in which they are executed. Statements (including loops) are hard to parallelize, since they mutate state outside of their own scope. Working with statements also implies additional mental overhead due to the increased complexity.

Expressions, on the other hand, can be safely reordered, cause no side effects, and are easy to parallelize.

Declarative programming takes effort

Declarative programming is not something that can be learned overnight, especially given that the majority of people have mainly been taught imperative programming. Declarative programming takes discipline and learning to think in a whole new way. How do you learn it? A good first step is to learn to program without mutable state — no let keyword and no state mutation. I can say one thing for certain: if you give declarative programming a try, you will be amazed at how elegant your code becomes.
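As a small first exercise, try replacing an accumulator loop with a single expression. A sketch:

```javascript
const prices = [10, 20, 30];

// imperative: `let` plus mutation
// let total = 0;
// for (const price of prices) total += price;

// declarative: one expression, no let, no mutation
const total = prices.reduce((sum, price) => sum + price, 0);

console.log(total); // 60
```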

Suggested ESLint configuration:

  fp/no-let: warn
  fp/no-loops: warn
  fp/no-mutating-assign: warn
  fp/no-mutating-methods: warn
  fp/no-mutation: warn
  fp/no-delete: warn

Avoid Passing Multiple Parameters to Functions


JavaScript is not a statically typed language, so there’s no way to guarantee that a function is being invoked with the correct and expected parameters. ES6 brings many great features to the table, including object destructuring, which can also be used for function arguments.

Do you find the following code snippet intuitive? Are you able to tell right off the bat what the parameters are? I can’t!

const total = computeShoppingCartTotal(itemList, 10.0, 'USD');

How about the following example?

const computeShoppingCartTotal = ({ itemList, discount, currency }) => {...};

const total = computeShoppingCartTotal({ itemList, discount: 10.0, currency: 'USD' });

I’d say that the latter is much more readable than the former. This is especially applicable to function calls made from a different module. Making use of an argument object also has the benefit of making the order of arguments irrelevant.
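Destructured parameters also combine nicely with ES6 default values. A sketch (the implementation below is hypothetical; only the shape of the call matters):

```javascript
const computeShoppingCartTotal = ({ itemList, discount = 0, currency = 'USD' }) => {
  const subtotal = itemList.reduce((sum, item) => sum + item.price, 0);
  return { total: subtotal - discount, currency };
};

const { total, currency } = computeShoppingCartTotal({
  itemList: [{ price: 25 }, { price: 5 }],
  discount: 10,
});

console.log(total, currency); // 20 USD
```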

Suggested ESLint configuration:

  max-params:
    - warn
    - 2

Prefer Returning Objects From Functions

How much does the following code snippet tell you about the signature of the function? What does it return — the user object, the user id, the status of the operation? It is hard to tell without understanding the surrounding context.

const result = saveUser(...);

Returning an object from a function makes the developer’s intention very clear and the code becomes significantly more readable:

const { user, status } = saveUser(...);

const saveUser = user => {
  // ...persist the user, producing savedUser...
  return {
    user: savedUser,
    status: "ok"
  };
};

Controlling Execution Flow with Exceptions

Photo by Christoffer Engström on Unsplash

How do you like seeing internal 500 server errors when you’ve entered an invalid input in a form? What about working with APIs that don’t give any details, and throw 500 errors left and right instead? I’m sure that all of us have encountered such issues, and the experience likely was far from pleasant.

Although we’re taught to throw exceptions when something unexpected happens, this is not the best way to handle errors. Let’s see why.

Exceptions break type safety

Exceptions break type safety, even in statically typed languages. According to its signature, the function fetchUser(id: number): User is supposed to return a user. Nothing in the function signature says that an exception will be thrown if the user cannot be found. If an exception is expected, then a more appropriate function signature would be: fetchUser(...): User|throws UserNotFoundError. Of course, such syntax is invalid in most languages (Java’s checked exceptions are a rare counterexample).

Reasoning about programs that throw exceptions is hard — one never knows whether or not a function will throw. Yes, we could wrap every single function call in a try/catch block, but that is impractical and would significantly hinder code readability.

Exceptions break function composition

Exceptions make it virtually impossible to utilize function composition. In the following example, the server is going to return a 500 internal server error if one of the blog posts cannot be found.

const fetchBlogPost = id => {
  const post = api.fetch(`/api/post/${id}`);

  if (!post) throw new Error(`Post with id ${id} not found`);

  return post;
};

const html = postIds |> map(fetchBlogPost) |> renderHTMLTemplate;

What if one of the posts was deleted, but the user is still trying to access the post due to some obscure bug? This will significantly degrade the user experience.

Tuples as an alternative way of error handling

Without going too deep into functional programming, a simple way to handle errors is to return a tuple containing the result and an error instead of throwing an exception. Yes, JavaScript has no support for tuples, but they can easily be emulated using a two-value array of the form [error, result]. Incidentally, this is also the default method of error handling in Go:

const fetchBlogPost = id => {
  const post = api.fetch(`/api/post/${id}`);

  return post
      // null for error if the post was found
    ? [null, post]
      // null for result if the post was not found
    : [`Post with id ${id} not found`, null];
};

const blogPosts = postIds |> map(fetchBlogPost);

const errors = blogPosts
  |> filter(([err]) => !!err)  // keep only items with errors
  |> map(([err]) => err);      // destructure the tuple and return the error

const html = blogPosts
  |> filter(([err]) => !err)       // keep only items with no errors
  |> map(([_, result]) => result)  // destructure the tuple and return the result
  |> renderHTML;
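Note that the pipeline operator (|>) is still a TC39 proposal. Until it lands, the same tuple-filtering approach works with plain array methods. A self-contained sketch with a stubbed fetchBlogPost (the ids and post data are hypothetical, just to make it runnable):

```javascript
// Stub returning a [error, result] tuple instead of throwing
const fetchBlogPost = id =>
  id === 42
    ? [null, { id, title: 'Hello' }]
    : [`Post with id ${id} not found`, null];

const results = [41, 42].map(fetchBlogPost);

// Collect the errors from the tuples
const errors = results
  .filter(([err]) => !!err)
  .map(([err]) => err);

// Keep only the successful results
const posts = results
  .filter(([err]) => !err)
  .map(([, result]) => result);
```

The failing id produces an entry in errors, while the successful one flows through to posts — no try/catch anywhere.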

Exceptions are fine, sometimes

Exceptions still have their place within the codebase. As a rule of thumb, ask yourself one question: do I want the program to crash? Any thrown exception has the potential to bring down the entire process. Even when we think we’ve carefully considered every edge case, an uncaught exception remains unsafe and will eventually crash the program. Throw exceptions only if you really intend the program to crash — for example, on a developer error or a failed database connection.

Exceptions are called exceptions for a reason. They should only be used when something exceptional has happened and the program has no choice but to crash. Throwing and catching exceptions is not a good way to control execution flow. We should only resort to throwing exceptions when an unrecoverable error has occurred. Invalid user input, for example, is not one of them.

Let it crash — Avoid catching exceptions

This brings us to the ultimate rule of error handling — avoid catching exceptions. That’s right: we’re allowed to throw errors if we intend the program to crash, but we should never catch such errors. In fact, this is the approach recommended by functional languages like Elixir, whose Erlang heritage popularized the “let it crash” philosophy.

The only exception to the rule is the consumption of third-party APIs. Even then, it is best to wrap such calls in a helper function that returns an [error, result] tuple instead. For this purpose, you can make use of tools like saferr. I’ll cover such wrappers in more detail in the next section on partial function application.

Ask yourself — who is responsible for the error? If this is the user, then the error should be handled gracefully. We want to show the user a nice message, instead of presenting them with a 500 internal server error.

Unfortunately, there’s no no-try-catch ESLint rule. Its closest neighbor is the fp/no-throw rule. Make sure that you throw errors responsibly: in exceptional circumstances, when you expect the program to crash.

Suggested ESLint configuration:

  fp/no-throw: warn

Partial Function Application

Photo by Ben on Unsplash

Partial function application is probably one of the best code sharing mechanisms ever invented. It blows OOP Dependency Injection out of the water. You can inject dependencies into your code without resorting to all of the typical OOP boilerplate.

The following example wraps the Axios library that is notorious for throwing exceptions (instead of returning the failed response). Working with such libraries is unnecessarily hard, especially when using async/await.

In the following example, we’re making use of currying and partial function application to make an unsafe function safe.

// Wrapping axios to safely call the api without throwing exceptions
const safeApiCall = ({ url, method }) => data =>
  axios({ url, method, data })
    .then( result => ([null, result]) )
    .catch( error => ([error, null]) );

// Partially applying the generic function above to work with the users api
const createUser = safeApiCall({
  url: '/api/users',
  method: 'post'
});

// Safely calling the api without worrying about exceptions.
const [error, user] = await createUser({
  email: '',
  password: 'Password'
});

Note that the safeApiCall function is written as func = (params) => (data) => {...}. This is a common technique in functional programming called currying, and it always goes hand-in-hand with partial function application. It means that func, when called with params, returns another function that actually performs the job. In other words, the function is partially applied with params.

It can also be written as:

const func = (params) => (
   (data) => {...}
);

Note that the dependencies (params) are passed as the first parameter and the actual data is passed as the second parameter.

To make things easier, you can make use of the saferr npm package, which also works with promises, and async/await:

import saferr from "saferr";
import axios from "axios";

const safeGet = saferr(axios.get);

const testAsync = async url => {
  const [err, result] = await safeGet(url);

  if (err) {
    // prints: Network Error
    console.log(err.message);
    return;
  }

  // prints: the response data
  console.log(result.data);
};

A Few Tiny Tricks

Photo by Pierrick VAN-TROOST on Unsplash

Here are a few tiny, but handy tricks. They don’t necessarily make the code more reliable but can make our life a little easier. Some are widely known, others are not.

A little type safety

Yes, JavaScript is not a statically typed language. However, we can make use of a little trick to make our code more robust by marking function arguments as required. The following code will throw an error whenever a required value is not passed in. Please note that it won’t work for nulls, but it is still a great guard against undefined values.

const req = name => {
  throw new Error(`The value ${name} is required.`);
};

const doStuff = ( stuff = req('stuff') ) => {...};
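A self-contained sketch of the guard in action (greet is a hypothetical function, used here only to demonstrate the fail-fast behavior):

```javascript
const req = name => {
  throw new Error(`The value ${name} is required.`);
};

// The default expression runs only when the argument is undefined
const greet = (name = req('name')) => `Hello, ${name}!`;

greet('Ada'); // returns 'Hello, Ada!'

try {
  greet(); // the missing argument triggers req('name')
} catch (e) {
  console.log(e.message); // prints: The value name is required.
}
```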

Short-circuit conditionals and evaluation

Short-circuit conditionals are widely known and are useful for accessing values within nested objects.

const getUserCity = user =>
  user && user.address && user.address.city;

const user = {
  address: {
    city: "San Francisco"
  }
};

// Returns "San Francisco"
getUserCity(user);

// Both return undefined
getUserCity(null);
getUserCity({});

Short-circuit evaluation is useful for providing an alternative value if a value is falsey:

const userCity = getUserCity(user) || "Detroit";
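Worth noting: since ES2020, optional chaining (?.) and nullish coalescing (??) express the same ideas more concisely, and ?? avoids the falsey-value pitfall of ||. A small sketch (the discount variable is a hypothetical example):

```javascript
// Optional chaining replaces the `a && a.b && a.b.c` ladder
const getUserCity = user => user?.address?.city;

// `||` falls back on ANY falsey value; `??` only on null/undefined
const discount = 0;
const withOr = discount || 10;      // 10 — the legitimate 0 is discarded
const withNullish = discount ?? 10; // 0 — only null/undefined trigger the fallback
```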

Bang bang!!

Negating a value twice is a great way to convert any value to a boolean. Be aware, though, that any falsey value will be converted to false, which might not always be what you want. In particular, never use it for numbers, since 0 will also be converted to false.

const shouldShowTooltip = text => !!text;

// returns true
shouldShowTooltip('JavaScript rocks');

// all return false
shouldShowTooltip('');
shouldShowTooltip(null);
shouldShowTooltip(undefined);

Debugging with in-place logging

We can make use of short-circuiting and the fact that the result of console.log is falsey to debug functional code, even including React components:

const add = (a, b) =>
  console.log('add', a, b)
  || (a + b);

const User = ({email, name}) => (
  <>
    <Email value={console.log('email', email) || email} />
    <Name value={console.log('name', name) || name} />
  </>
);

What’s Next?

Photo by Maximilian Weisbecker on Unsplash

Do you really need code reliability? That’s up to you to decide. Does your organization equate developer productivity with the number of completed JIRA stories? Are you working in a feature factory that values nothing but the number of features delivered? Hopefully not — but if so, you might consider looking for a better place to work.

It might be overwhelming to apply everything from this article all at once. Bookmark it and keep coming back once in a while. Each time, take away one single thing to deliberately focus on, and enable the associated ESLint rules to help you on your journey.

Source: medium