
How to Create JavaScript Libraries

Mateusz Burzyński in Developer Console on Feb 20, 2018



This article is inspired by a great piece by Anton Kosykh, How to write and build JS libraries in 2018. It has some great visual examples which are definitely worth checking out.

As a frontend engineer, I have a slightly different perspective on the topic of building libraries. I’ll try to explain how to squeeze the most out of the modern build toolchain, that is:

  1. how to configure the tools,

  2. how to create an isomorphic library & more (in part 2),

  3. how to prevent the unused parts of the library from landing in clients’ applications (soon).

CJS & ESM explained

CJS (a.k.a. CommonJS) is a module format popularized by the Node.js ecosystem, whereas ESM (ECMAScript Modules) is the module format standardized as part of the JavaScript language (since the ES2015 specification). While you might not notice much of a difference in most scenarios besides the different syntax used to write them, their designs differ.

ESM modules have a static structure (while CJS is dynamic), which means that, for example, you cannot:

  • import/export names based on runtime values
  • import/export conditionally
  • import/export things after module initialization

This gives us important guarantees: a module’s shape cannot change, it is known statically (without running the code), and the initialization order can be determined statically too. Bundlers leverage these facts to produce more optimized code.
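To make the difference concrete, here is a short comparison (illustrative only; ./dev-helpers is a made-up module). The CJS version is perfectly legal, while the equivalent ESM version is a syntax error, because import/export declarations must sit at the top level:

// CJS - the module shape can depend on runtime values
let helpers = null
if (process.env.NODE_ENV !== 'production') {
  helpers = require('./dev-helpers') // conditional require is fine
}
module.exports = helpers

// ESM - the same idea is a SyntaxError:
//
// if (process.env.NODE_ENV !== 'production') {
//   import helpers from './dev-helpers' // imports must be top-level
// }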

An unknown land

Setting up the tools for a library seems to be such a basic task that you may be surprised that it has numerous nuances. In fact, even the authors of well-known libraries often don't do this right, and there is no single exhaustive go-to source.

In this piece, we’ll get practical and create a dummy project step-by-step to discuss the best library building practices. Let’s go!

Getting started

Let’s start our project by creating src/index.js, src/cube.js & src/square.js files in a sample repo:

// index.js
export { default as cube } from './cube.js';
export { default as square } from './square.js';

// cube.js
export default x => x * x * x

// square.js
export default x => x * x

So far so good. What now?

The most popular opinion is that it’s best to publish the code with ES5 syntax, so we’ll need to transpile our library. We’ll use Babel for this — there are other solutions available (e.g. Bublé), but Babel has the most features and the whole plugin ecosystem available.

We could use babel-cli (babel src --out-dir dist), but generally speaking, Babel works with single files, not with whole projects. Of course, we could use it to transpile a directory of files, but this would simply produce a transpiled copy of that directory tree.

If a library consists of more than one file, a good practice is to use a bundler to create a so-called flat bundle (all project files merged into one). The obvious choice here is Rollup. Why? Because whenever possible, this tool doesn’t add any extra code, runtime wrappers, etc. Thanks to this approach:

  • the final files are lightweight (so that less code is shipped with an application which uses this library),

  • each file creates just one scope, so that static analysis is much easier (which matters for tools such as UglifyJS).

Rollup and Babel have different objectives, but they can be freely used together in one project.
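For our three sample files, the flat CJS bundle produced by Rollup would look roughly like this (simplified; exact names and formatting depend on the Rollup and Babel versions used):

'use strict';

Object.defineProperty(exports, '__esModule', { value: true });

var cube = function (x) {
  return x * x * x;
};

var square = function (x) {
  return x * x;
};

exports.cube = cube;
exports.square = square;

Note that there are no module wrappers or runtime helpers - just one file, one scope.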

Installation

To install Rollup and Babel, run the following command:

npm install --save-dev rollup babel-core babel-preset-env rollup-plugin-babel

Note that this also installs babel-preset-env. Why do we need it? It enables the necessary Babel plugins based on the environment definition that we pass to it (by default it transpiles down to ES5). Moreover, if we configure it properly, we can apply only selected transforms, which is useful when, for example, our library doesn’t have to work in older browsers and we don’t have to transpile everything.
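For instance, a hypothetical .babelrc that only targets recent browsers could look like this (the browser list is just an example, not part of the setup we build below):

{
  "presets": [
    ["env", {
      "targets": {
        "browsers": ["last 2 versions", "not ie <= 10"]
      },
      "modules": false
    }]
  ]
}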

Other useful plugins are babel-plugin-transform-object-rest-spread and babel-plugin-transform-class-properties, so let’s install them right away:

npm install --save-dev babel-plugin-transform-object-rest-spread babel-plugin-transform-class-properties

Configuration

Now when we have everything installed, we can start the configuration process. For full control, we will use a pure JavaScript configuration file. Babel 6 doesn’t support .babelrc.js files out of the box (upcoming Babel 7 does, though), but here’s a workaround - just add these two files:

.babelrc

{ "presets": ["./.babelrc.js"] }

.babelrc.js

const { BABEL_ENV, NODE_ENV } = process.env

const cjs = BABEL_ENV === 'cjs' || NODE_ENV === 'test'

module.exports = {
    presets: [['env', { loose: true, modules: false }]],
    plugins: [
        'transform-object-rest-spread',
        'transform-class-properties',
        cjs && 'transform-es2015-modules-commonjs',
    ].filter(Boolean),
}

Comments:

  • For safety reasons, it’s best to disable module transpiling (with modules: false passed to preset-env) until a script requires it (opt-in via the NODE_ENV or BABEL_ENV environment variables).

  • Some tools support both ESM and CJS module formats, but work better with ESM. It’s easy to go overboard with transpiling here, but, on the other hand, if we don’t transpile all necessary modules, we will notice it right away. If we, for example, forget to activate CJS transform, the tool that requires CJS modules will crash with a bang.

  • The CommonJS transform has a loose option, but it can cause more problems than it solves, especially when we use a namespace import (import * as mod from './mod').

The best approach is publishing different versions of a module for various use cases. A good example is package.json, which can contain two “entry point” fields: main and module. main is the standard field used by the Node.js ecosystem and therefore it should point to a CJS file. The module field, on the other hand, points to an ES module file (transpiled exactly like the main "entry point", minus the modules). This field is mainly used by web app bundlers such as webpack or Rollup.

Thanks to the static nature of ES module files, it’s easier for bundlers to analyze their structure and optimize the files using techniques such as tree-shaking (removing unused imports) and scope-hoisting (putting most of the code within a single JavaScript scope).
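For example, if an application imports only cube from our library, a bundler consuming the ESM build can statically see that square is never used and drop it from the final bundle - something much harder to do reliably with the CJS build:

// app.js - an application consuming our library
import { cube } from 'foo'

console.log(cube(3))
// square() is never referenced, so the bundler can tree-shake it away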

To configure these two fields, let’s add the following lines to our package.json:

"main": "dist/foo.js",
"module": "dist/foo.es.js",

Note: In some cases, for better compatibility, it’s best to omit file extensions and let each tool add them by itself. This will be discussed in the next part of the article.

Now we will need a script to build everything for us:

"build": "rollup -c"

And a Rollup configuration file (rollup.config.js):

import babel from 'rollup-plugin-babel'
import pkg from './package.json'

const externals = [
  ...Object.keys(pkg.dependencies || {}),
  ...Object.keys(pkg.peerDependencies || {}),
]

const makeExternalPredicate = (externalsArr) => {
  if (externalsArr.length === 0) {
    return () => false
  }
  const externalPattern = new RegExp(`^(${externalsArr.join('|')})($|/)`)
  return (id) => externalPattern.test(id)
}

export default {
  input: 'src/index.js',
  external: makeExternalPredicate(externals),
  plugins: [babel({ plugins: ['external-helpers'] })],
  output: [
    { file: pkg.main, format: 'cjs' },
    { file: pkg.module, format: 'es' },
  ],
}

Comments:

  • Rollup is a bundler which merges all files and dependencies into one file, but we want to use it for a different purpose. We want it to build our library and leave our dependencies where they belong - in the dependencies. We do not want Rollup to inline the dependencies’ source code into our output bundles; it should leave them as they are, referenced with import statements.

  • makeExternalPredicate keeps subdirectories and files belonging to our dependencies external too (e.g. downshift/preact or lodash/pick)

  • As mentioned above, Babel works with single files. In order to deduplicate some common runtime logic, it may insert helper functions into your files.

Here's a sample helper function:

// input
const a = { ...b }
const c = { ...a }

// output
function _extends() {
  _extends =
    Object.assign ||
    function (target) {
      for (var i = 1; i < arguments.length; i++) {
        var source = arguments[i]
        for (var key in source) {
          if (Object.prototype.hasOwnProperty.call(source, key)) {
            target[key] = source[key]
          }
        }
      }
      return target
    }
  return _extends.apply(this, arguments)
}

var a = _extends({}, b)

var c = _extends({}, a)

_extends is precisely this: a helper function added by Babel. The problem is that Babel adds it to each file, while we bundle our input source files into a single output file. Ideally, we’d like to have those helpers inserted only once into our bundle. That’s why we use babel-plugin-external-helpers in our Rollup configuration (a nice thing is that Rollup will warn us if we forget to do that).

The best thing? In the future it won’t be necessary to add this plugin manually, because the version of rollup-plugin-babel compatible with Babel 7 will do it for you.

And that’s all! We have the basic use cases covered. The rest of this article covers more advanced usage and examples.

Proxy directories

What happens if we want our library to support several "entry points"? For example, we may have a library with a React component which should allow for importing an alternative version of this component (say, for Preact). We want the users to do either this:

import FooComponent from 'foo'

or this:

import FooComponent from 'foo/preact'

Sounds easy — it would be enough to create a file or a directory called “preact” in the root of our library and let the resolving algorithms find it. The problem with this solution, however, is that it gives users only one module format, while we want to provide two (ESM and CJS).

What to do, then? Use proxy directories!

This simple yet surprisingly little-known technique requires creating a directory named after our alternative entry point (in this case it will be “preact”) with the following package.json file:

{
  "name": "foo/preact",
  "private": true,
  "main": "../dist/foo-preact.js",
  "module": "../dist/foo-preact.es.js"
}
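The published package would then contain, roughly, the following layout (illustrative; the foo-preact bundles are produced by the Rollup config we add below):

foo
├── package.json        "main": "dist/foo.js", "module": "dist/foo.es.js"
├── dist
│   ├── foo.js
│   ├── foo.es.js
│   ├── foo-preact.js
│   └── foo-preact.es.js
└── preact
    └── package.json    the proxy file shown above

If you whitelist published files with the "files" field in package.json (or use .npmignore), remember to include the preact directory, otherwise the proxy won’t end up in the published package.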

And that’s it! You can see this method in action in redux-saga/effects:

{
  "name": "redux-saga/effects",
  "private": true,
  "main": "../lib/effects.js",
  "module": "../es/effects.js",
  "jsnext:main": "../es/effects.js"
}

or in styled-components/primitives:

{
  "name": "styled-components/primitives",
  "private": true,
  "main": "../dist/styled-components-primitives.cjs.js",
  "module": "../dist/styled-components-primitives.es.js",
  "jsnext:main": "../dist/styled-components-primitives.es.js"
}

You can even check out how to create multiple proxies programmatically with a script in funkia/list:

const fs = require('fs')
const path = require('path')
const { promisify } = require('util')
const pkg = require('../package.json')

const readDir = promisify(fs.readdir)
const mkDir = promisify(fs.mkdir)
const writeFile = promisify(fs.writeFile)

const removeExt = (ext, str) => path.basename(str, `.${ext}`)

const fileProxy = (file) => `{
  "name": "${pkg.name}/${removeExt('js', file)}",
  "private": true,
  "main": "../dist/${file}",
  "module": "../dist/es/${file}"
}
`

async function processDir(dir) {
  const files = (await readDir(dir)).filter(
    (file) => /\.js$/.test(file) && file !== 'index.js'
  )
  return await Promise.all(
    files.map(async (file) => {
      const proxyDir = removeExt('js', file)
      await mkDir(proxyDir).catch(() => {})
      await writeFile(`${proxyDir}/package.json`, fileProxy(file))
      return proxyDir
    })
  )
}

processDir('dist').then((proxies) =>
  console.log(`Proxy directories (${proxies.join(', ')}) generated!`)
)
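Assuming such a script is saved as, say, scripts/create-proxies.js (a hypothetical path), npm will run it automatically after every build thanks to the post-hook:

"scripts": {
  "build": "rollup -c",
  "postbuild": "node scripts/create-proxies.js"
}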

So far so good. The last thing to do is building our files. Let’s add some extra config to rollup.config.js:

import babel from 'rollup-plugin-babel'
import pkg from './package.json'

const externals = [
  ...Object.keys(pkg.dependencies || {}),
  ...Object.keys(pkg.peerDependencies || {}),
]

const makeExternalPredicate = (externalsArr) => {
  if (externalsArr.length === 0) {
    return () => false
  }
  const externalPattern = new RegExp(`^(${externalsArr.join('|')})($|/)`)
  return (id) => externalPattern.test(id)
}

export default [
  {
    input: 'src/index.js',
    external: makeExternalPredicate(externals),
    plugins: [babel({ plugins: ['external-helpers'] })],
    output: [
      { file: pkg.main, format: 'cjs' },
      { file: pkg.module, format: 'es' },
    ],
  },
  {
    input: 'src/preact.js',
    external: makeExternalPredicate(externals),
    plugins: [babel({ plugins: ['external-helpers'] })],
    output: [
      { file: 'dist/foo-preact.js', format: 'cjs' },
      { file: 'dist/foo-preact.es.js', format: 'es' },
    ],
  },
]

Done! This method works fine when our "entry points" are completely independent. What if, however, it should be possible to use several entry points in one application at the same time? How to make sure that the code is not duplicated? This is where experimentalCodeSplitting comes in handy.

experimentalCodeSplitting

experimentalCodeSplitting is a fresh Rollup feature which extracts common dependencies into a separate file and adds import and require statements to the files that need them:

// ...

export default {
  experimentalCodeSplitting: true,
  input: ['src/index.js', 'src/preact.js', 'src/storage.js'],
  external: makeExternalPredicate(externals),
  plugins: [babel({ plugins: ['external-helpers'] })],
  output: [
    { dir: 'lib', format: 'cjs' },
    { dir: 'es', format: 'es' },
  ],
}

For full reference and complete manual, refer to Experimental Code Splitting docs.

Note that if we need to generate more than one output version for the same set of input files (as we do in every example), we should specify a different output directory for each format.

We should also make sure that each input file has a unique file name, so output files can have stable & predictable names (if we bundled src/index.js and src/preact/index.js, we'd end up with index.js and index2.js files).
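With the configuration above, the output would look something like this (the shared chunk’s name is illustrative; Rollup decides it):

lib/index.js
lib/preact.js
lib/storage.js
lib/chunk-1a2b3c.js   <- code shared between the entry points

es/index.js
es/preact.js
es/storage.js
es/chunk-1a2b3c.js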

Config fatigue

The more configuration needs our file has to cover, the heavier it gets. To avoid bloated files and code repetition, we can use a simple function that generates configurations based on the parameters we pass to it. A good example here is the react-textarea-autocomplete configuration file:

import path from 'path'
import babel from 'rollup-plugin-babel'
import resolve from 'rollup-plugin-node-resolve'
import commonjs from 'rollup-plugin-commonjs'
import uglify from 'rollup-plugin-uglify'
import license from 'rollup-plugin-license'
import pkg from './package.json'

const createConfig = ({ umd = false, output } = {}) => ({
  input: 'src/index.js',
  output,
  // for the UMD build we bundle dependencies in, otherwise they stay external
  external: [
    ...Object.keys(umd ? {} : pkg.dependencies || {}),
    ...Object.keys(pkg.peerDependencies || {}),
  ],
  plugins: [
    babel(),
    resolve(),
    commonjs({ extensions: ['.js', '.jsx'] }),
    umd && uglify(),
    license({
      banner: {
        file: path.join(__dirname, 'LICENSE'),
      },
    }),
  ].filter(Boolean),
})
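The default export of the config file then becomes a small array of generated configurations, for example (the output paths and the UMD name below are illustrative, not taken from the actual project):

export default [
  // the npm package build - dependencies stay external
  createConfig({
    output: [
      { file: pkg.main, format: 'cjs' },
      { file: pkg.module, format: 'es' },
    ],
  }),
  // a minified UMD build for direct usage in the browser
  createConfig({
    umd: true,
    output: {
      file: 'dist/react-textarea-autocomplete.umd.min.js',
      format: 'umd',
      name: 'ReactTextareaAutocomplete',
    },
  }),
]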

Browser

When our library targets both server and browser environments, it may need to know where exactly it is running. One way to do this is to scan the environment with a couple of “if” statements looking for global variables, global objects or certain behavior. This solution, however, is rather clunky, heavyweight (code for every environment ends up being shipped and run) and unreliable. But fear not, there are two better options:

1. Build-time variables

Adding a magic variable like __SERVER__ or __BROWSER__ lets us create separate code versions for various use cases. styled-components does this nicely:

(Image: build-time variables in the styled-components source)

To use such variables, we have to install rollup-plugin-replace and add it to the plugins section of our configuration file:

replace({ __SERVER__: JSON.stringify(false) })

And, of course, we’ll need an extra configuration object so that each output file gets the correct value substituted for the variable.
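A minimal sketch of what that might look like, assuming a __BROWSER__ flag and reusing the config-generating trick from the previous section (the external predicate and other options are omitted here for brevity; the browser file names match the listing further below):

import babel from 'rollup-plugin-babel'
import replace from 'rollup-plugin-replace'
import pkg from './package.json'

const createConfig = ({ browser }) => ({
  input: 'src/index.js',
  plugins: [
    // every occurrence of __BROWSER__ in the source becomes a literal true/false
    replace({ __BROWSER__: JSON.stringify(browser) }),
    babel({ plugins: ['external-helpers'] }),
  ],
  output: browser
    ? [
        { file: 'dist/foo.browser.js', format: 'cjs' },
        { file: 'dist/foo.browser.es.js', format: 'es' },
      ]
    : [
        { file: pkg.main, format: 'cjs' },
        { file: pkg.module, format: 'es' },
      ],
})

export default [createConfig({ browser: false }), createConfig({ browser: true })]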

2. Alternative files

Another approach is using alternative files for the server and browser environments, like a pair of rng.js and rng-browser.js files.

As long as we provide the same API for both versions, our users won’t even have to know that the code differs between environments. For an easier bundling process, I suggest a slightly different naming convention: instead of using a hyphen (file-browser.js), use a dot (file.browser.js).

Let’s add rollup-plugin-node-resolve to our project and use it in an alternative configuration to output files for web application bundlers. Copy the existing config file and add this to the plugins:

nodeResolve({ extensions: ['.browser.js', '.js'] })

Regardless of the method that we selected, we should have four output files:

dist/foo.js
dist/foo.es.js
dist/foo.browser.js
dist/foo.browser.es.js

The last step is adding a new field to our main package.json (even if we combine this technique with "proxy directories" we need to add this only in a root package.json). Each file that can be imported from our library and has different implementations per environment should have a corresponding file mapping so the alternative file can be loaded instead of the original one:

"main": "dist/foo.js",
"module": "dist/foo.es.js",
"browser": {
  "./dist/foo.js": "./dist/foo.browser.js",
  "./dist/foo.es.js": "./dist/foo.browser.es.js",
  // optionally - more “rewrite rules” if you use "proxy directories"
  "./dist/foo-preact.js": "dist/foo-preact.browser.js",
  "./dist/foo-preact.es.js": "dist/foo-preact.browser.es.js"
}

React Native

If we want our library to support React Native but the code for this platform differs from the Node or browser code, the best tactic is the second technique mentioned above (“Alternative files”). Metro Bundler, used by React Native, recognizes the .native.js extension (and also .ios.js and .android.js, which take precedence depending on the platform), so it’s enough to create a new Rollup config with a plugin like this:

nodeResolve({ extensions: ['.native.js', '.js'] })

If we make sure that the output file has the same name as the file from “main”, only with a .native.js extension (in our example it will be dist/foo.native.js), we won’t have to add anything to package.json (as long as we leave "main" without an extension). It still might be a good idea to add an extra entry in package.json, though:

"main": "dist/foo.js",
"module": "dist/foo.es.js",
"react-native": "dist/foo.native.js",

Note: React Native doesn’t support ES modules, so it’s enough to build a single output file in CJS format.

Finally, if we want to have separate sets of files for iOS and Android, we need two different configurations. Each should transpile the files to CJS format with different extensions:

nodeResolve({ extensions: ['.ios.js', '.native.js', '.js'] })

and:

nodeResolve({ extensions: ['.android.js', '.native.js', '.js'] })

Again, the output files should share names with the file from “main”: dist/foo.ios.js and dist/foo.android.js respectively. In that case, consider leaving "main" without an extension, or using the "react-native" key without an extension, so that the appropriate file can be loaded.

.mjs

It’s possible that in the future Node.js will support an alternative extension (.mjs) for ESM modules (note: this is one of the proposals and it is not set in stone yet that it will be the recommended way of authoring ESM packages). However, we should still support older Node versions (at least the LTS ones) for some time. If we want to use native ESM modules in Node, it’s best to output an additional file with the .mjs extension; Node should prefer it over the .js file. In this scenario you should definitely leave your "main" without an extension, so each Node.js version can choose which file to load.
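A minimal sketch of what this could look like, assuming we add one more ES-format output and leave "main" extensionless (file names are illustrative):

// rollup.config.js - output section
output: [
  { file: 'dist/foo.js', format: 'cjs' },
  { file: 'dist/foo.es.js', format: 'es' },
  { file: 'dist/foo.mjs', format: 'es' },
],

and in package.json:

"main": "dist/foo",
"module": "dist/foo.es.js",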

process.env.NODE_ENV

In the pursuit of keeping the file size small, we shouldn’t forget about usability. Don’t be afraid to add helpful warnings, as they won’t hurt your library’s performance. A good practice is wrapping development-only code as follows:

const warn = (msg) => console.warn(msg)

if (process.env.NODE_ENV !== 'production') {
  if (props.render && props.children) {
    warn(
      "It doesn't make sense to use both props.render and props.children at once, props.children will be used."
    )
  }
}

Most users have configured their bundlers to replace process.env.NODE_ENV with the value of that environment variable. In a production build the condition therefore becomes "production" !== "production", so a minifier (e.g. UglifyJS) can detect the constant condition, evaluate it to false, and remove the code that can never execute at runtime.

Note: The aim here is to remove as much dead code as possible, so we should wrap the redundant pieces at the highest possible level. If, for example, we put the environment check inside a helper function (e.g. warn) instead of around its call sites, the calls will not be removed from the production build. The minifier would only remove the body of warn, turning it into a noop, while leaving the function itself and all its call sites intact.
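A minimal illustration of this anti-pattern (the warning message is just an example):

// less effective: the environment check is hidden inside the helper
const warn = (msg) => {
  if (process.env.NODE_ENV !== 'production') {
    console.warn(msg)
  }
}

// in a production build warn() becomes a noop, but this call
// and the message string still end up in the final bundle
warn("It doesn't make sense to use both props.render and props.children at once.")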

Conclusion

Even though setting up the tools for a library sounds trivial, we have just seen how complicated it really is. Hopefully, Microbundle (a wrapper around Rollup) will reduce the need for such extensive configuration by hiding it behind a couple of options and naming conventions. However, it’s still good to know how the whole process works behind the scenes, because no automation tool can handle every use case we can think of.

