The state of UI testing at Mixpanel

End-to-end tests

At Mixpanel, we’ve been writing UI tests for a long time. However, they haven’t always been easy to set up, write, and debug. When we first began testing the UI, we wrote tests in Python using the Selenium framework. In this setup, the Python tests interact with the browser through the API provided by Selenium. These Selenium commands are then sent to browser-specific drivers for controlling different browsers. These “end-to-end” tests required setting up a web server, database, and various supporting backend services, as well as populating these services with the data needed for the tests. These tests have the benefit of not just testing the UI, but also testing the integration between the backend services needed to render the UI. The intent was for these tests to mimic the experience of an end user visiting Mixpanel’s production website.


Figure 1. The setup of Mixpanel’s end-to-end tests.

However, these end-to-end tests also have a number of downsides:

  • Front-end developers have to learn the Selenium API, which is quite different from other tools used by front-end developers.
  • This setup introduced latency in several places: 1) latency from when a Selenium command is sent by the test to when it’s executed in the browser, 2) latency of network requests made by the browser to the web server. Having to take these variable delays into account made it harder to write tests that weren’t flaky.
  • There was overhead setting up all the backend services and populating them with the fixture data needed for each test.
  • Tests became harder to maintain/debug, since issues were not limited to just the front-end, and could be from any of the backend services in the stack.

End-to-end tests are currently used to test Mixpanel’s older reports. The components within these older reports were built with Backbone. Many of these components have dependencies on global state and intertwined dependencies with other components. This was partially due to lack of discipline but also because these components were written before JavaScript had the good module and bundler tooling (e.g. Webpack) that it currently has. These entangled dependencies made it hard to test individual components in isolation. This is one of the reasons end-to-end tests were used to test these reports – they required a web server to serve a production-like version of the website with all the necessary dependencies for the components being tested.

WCT tests

In the previous section, we saw how Mixpanel used to write UI tests, and some of the drawbacks of that approach. Recent front-end developments at Mixpanel, however, have enabled a different approach to UI testing that solves the problems mentioned above.

In the last 1-2 years, we have started using Web Components as the building blocks for Mixpanel’s newer reports. These reports have a top-level “application” custom element which is composed of other custom elements, all the way down to custom elements representing basic components like buttons and tooltips.

When we started using custom elements, we focused on creating components with well-defined attribute-based interfaces to pass information into the component, and event propagation for the component to communicate with the outside world. This contrasts with the entangled dependencies that exist in Mixpanel’s older Backbone components. Creating modular custom elements has now made it possible to write more isolated/modular tests for individual custom elements like buttons and tooltips, while also being able to write higher-level tests for an entire report composed of many custom elements.

These new-style tests are written using the web-component-tester (WCT) browser testing framework, which came out of the Polymer project. Hence, we refer to them as WCT tests. While the end-to-end tests exercise the entire stack, WCT tests are strictly front-end-only UI tests. WCT tests are written in JavaScript, which runs on the same web page as the components they’re testing.

Figure 2. The WCT test running environment. The test code is being run in an iframe on the left. Information about success/failure of individual tests is output on the right side.

WCT tests address all the downsides of end-to-end tests mentioned earlier:

  • When writing WCT tests, developers can use the JavaScript DOM APIs and other JavaScript testing libraries like Sinon, instead of needing to learn a new set of APIs like Selenium’s.
  • WCT tests run faster and are less prone to race conditions since the test code runs directly on the web page, whereas the end-to-end tests have a layer of separation between test code and the web page.
  • WCT tests mock out all requests to the server, eliminating any dependency on a web server; this makes them much easier to set up, write, and maintain than end-to-end tests. (See the Mocking server responses section below for more details.)

Contrast the simplicity of the setup for these WCT tests shown in Figure 3 below with the setup for end-to-end tests from Figure 1.


Figure 3. The setup of WCT tests.

Guidelines when writing WCT tests

We use the WCT framework as an environment for running UI tests, as seen in Figure 2 earlier. However, the framework doesn’t enforce how tests should be written or structured, so we’ve come up with a set of guidelines for writing WCT tests, discussed below.

Mocking server responses

As described earlier, Mixpanel’s end-to-end UI tests required setting up backend services. In contrast, WCT browser tests are front-end-only. Any server requests are stubbed to return mock responses. Our front-end code uses fetch to make network requests. At the beginning of each test, we use the Sinon mocking library to create a mock server that responds to the fetch requests the test will make.
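The pattern looks roughly like the sketch below. (Mixpanel uses Sinon for this; the stub here is hand-rolled to keep the example dependency-free, and the route names are illustrative.)

```javascript
// Hypothetical sketch of per-test fetch mocking: swap out the global fetch for
// a stub that answers from a route table, and restore it during teardown.
function mockFetch(routes) {
  const realFetch = globalThis.fetch;

  globalThis.fetch = (url, opts = {}) => {
    const handler = routes[url];
    if (!handler) {
      // Fail loudly on any request the test did not anticipate.
      return Promise.reject(new Error(`Unmocked request: ${url}`));
    }
    // Resolve with a minimal Response-like object; our code only uses .json().
    return Promise.resolve({
      ok: true,
      status: 200,
      json: () => Promise.resolve(handler(opts)),
    });
  };

  // Return a restore function for the test's teardown hook.
  return () => { globalThis.fetch = realFetch; };
}
```

A test would call something like `mockFetch({'/api/segmentation': () => ({series: []})})` in its setup, exercise the component, and invoke the returned restore function afterwards.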

DOM helpers

Interacting with the DOM is a necessity for browser tests. To keep our code DRY, we created a small library of utilities that all tests should use when they need to interact with the DOM. Here are some of the utilities we have:

  • Helpers to wait during a test. For example, nextAnimationFrame is an async function that resolves on the next requestAnimationFrame. retryable and condition wait until some condition is met. (They’re described in more detail in a later section.)
  • Helpers for interacting with DOM elements. For example, clickElement will click an element while sendInput will send text to an input element.
  • Helpers for querying elements in the Shadow DOM, since our custom elements make use of the Shadow DOM. For example, queryShadowSelectors queries for the first matching element in the Shadow DOM, while queryShadowSelectorsAll queries for all matching elements (similar to querySelectorAll).
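A helper like queryShadowSelectors might be implemented along these lines (a sketch of the idea, not our actual utility): each selector in the list is resolved inside the previous match’s shadow root, so a list of selectors descends through nested shadow trees.

```javascript
// Hypothetical implementation of the Shadow DOM query helper described above.
// Each selector is resolved within the previous element's shadow root (falling
// back to the element itself if it has no shadow root).
function queryShadowSelectors(root, selectors) {
  let el = root;
  for (const selector of selectors) {
    const scope = el.shadowRoot || el;
    el = scope.querySelector(selector);
    if (!el) return null; // bail out as soon as any step fails to match
  }
  return el;
}
```

For example, `queryShadowSelectors(report, ['mp-date-picker', 'mp-calendar'])` would descend through two shadow roots to find the calendar element.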

Element wrappers

When writing browser tests, a large portion of the test code will be for performing actions on components and querying the state of the component after these actions. Often, multiple tests perform similar interactions with the same component. To keep the test code DRY, we created the concept of “element wrappers”.

An element wrapper is a helper class that wraps an element in the DOM. It provides the methods mentioned above that test code needs for performing actions on the element and querying its DOM state.

Besides keeping the test code DRY, another benefit of element wrappers is that they are modular. They allow you to group all the possible interactions with a component in a single place. Similar to how custom elements can be composed of other custom elements, element wrappers can mirror this composability by providing helper methods that return element wrappers for child elements. These child element wrappers can then be used to interact with these child elements.

An example of an element wrapper is the Calendar element wrapper which wraps the <mp-calendar> custom element, which is used for picking dates from a calendar.


Figure 4. Screenshot of the <mp-calendar> custom element.

Below is the implementation of the Calendar element wrapper.
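A sketch of what such a wrapper might look like is shown below. The selectors are illustrative rather than <mp-calendar>’s real internals, and the two DOM helpers are inlined in minimal form so the example is self-contained.

```javascript
// Minimal stand-ins for the DOM helpers discussed earlier (illustrative only).
const clickElement = el => el.click();
const queryShadowSelectors = (root, selectors) =>
  selectors.reduce((el, sel) => el && (el.shadowRoot || el).querySelector(sel), root);

// Element wrapper for the <mp-calendar> custom element.
class Calendar {
  constructor(el) {
    this.el = el; // the <mp-calendar> element under test
  }

  // --- actions ---
  clickDate(date) {
    clickElement(queryShadowSelectors(this.el, [`.calendar-date[data-date="${date}"]`]));
  }

  clickNextMonthButton() {
    clickElement(this._nextMonthButton());
  }

  // --- queries ---
  isNextMonthButtonDisabled() {
    return this._nextMonthButton().hasAttribute(`disabled`);
  }

  _nextMonthButton() {
    return queryShadowSelectors(this.el, [`.next-month-button`]);
  }
}
```

A test then wraps the element once, e.g. `new Calendar(document.querySelector('mp-calendar'))`, and interacts with it only through the wrapper’s methods.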

The Calendar element wrapper provides methods like clickDate and clickNextMonthButton for performing actions on the <mp-calendar> custom element. It also provides query methods like isNextMonthButtonDisabled for querying the DOM state of the <mp-calendar> custom element.

Waiting without sleeping

Within our WCT tests, it’s often necessary to write asynchronous code that waits for some condition before continuing the test:

  • The Panel library we use for creating custom elements batches DOM updates to the next requestAnimationFrame by default for performance reasons. This means any time we perform an action on an element (e.g. clicking a button), the update to the DOM associated with the change happens asynchronously. Since a large portion of browser testing is triggering actions on the web page, needing to wait for the DOM to update is a common occurrence in our tests.
  • fetch requests (even though they’re mocked) are asynchronous.
  • Animations will delay a component from reaching its final state.

To deal with the abundance of asynchronous code in our WCT tests, we have opted to use async/await syntax introduced in ES2017. This allows the test code to be more readable by removing the excessive nesting associated with callbacks and (to a lesser extent) Promises.

An anti-pattern when you need to wait within a test is to sleep. Sleeping makes the test brittle and slows it down, because in most cases you end up sleeping longer than needed. Instead, the test should wait for some explicit condition to be met before deciding it can continue execution. In this vein, we created some helper functions for this use case: retryable and condition. Both functions take a function as input and repeatedly execute it until some condition is met or a predefined timeout elapses. retryable will continue to execute the function until it doesn’t throw an exception. condition will continue to execute the function until it returns a truthy value.
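Hypothetical implementations of these two helpers might look like the following (the timeout and polling interval defaults are illustrative):

```javascript
// Poll-based waiting helpers, sketched to match the behavior described above.
const sleep = ms => new Promise(resolve => setTimeout(resolve, ms));

// Re-runs `fn` until it stops throwing; re-throws the last error on timeout.
async function retryable(fn, {timeout = 2000, interval = 50} = {}) {
  const deadline = Date.now() + timeout;
  for (;;) {
    try {
      return await fn();
    } catch (err) {
      if (Date.now() >= deadline) throw err;
      await sleep(interval);
    }
  }
}

// Re-runs `fn` until it returns a truthy value, built on top of retryable.
async function condition(fn, options) {
  return retryable(async () => {
    const result = await fn();
    if (!result) throw new Error(`Condition not met within timeout`);
    return result;
  }, options);
}
```

A test can then write things like `await condition(() => calendar.isNextMonthButtonDisabled())` instead of sleeping for a fixed duration.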

Below is a simplified version of a WCT test in our codebase that follows these guidelines. Comments have been added for explanation purposes.

UI testing within CI

The end-to-end and WCT tests are run on every pull request. They are also run regularly on master to catch any bad code that might have slipped through the cracks. WCT tests run selectively depending on the code change: if only backend changes are made, no WCT tests run; if front-end changes are made to a single report, only that report’s WCT tests run. The end-to-end tests, in contrast, run on every pull request, since virtually any code change (front-end or backend) could impact them.

The end-to-end tests are run in VMs that are set up with all the backend services needed to run them. The tests are run in Chrome on this VM using Xvfb. In contrast, the WCT tests run on Sauce Labs, a platform for running automated browser tests that the WCT framework supports out of the box. Sauce Labs allows configuring a list of browser environments to test on. Below is the wct.conf.js (WCT framework configuration file) we use to run our tests on Sauce Labs.
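A wct.conf.js along these lines (suite path and Sauce platform names illustrative, not our exact configuration):

```javascript
// wct.conf.js: tell WCT which test suites to load and which Sauce Labs
// browser environments to run them on.
module.exports = {
  suites: ['test/browser/index.html'],
  plugins: {
    sauce: {
      browsers: [
        {browserName: 'chrome',        platform: 'Windows 10', version: 'latest'},
        {browserName: 'firefox',       platform: 'Windows 10', version: 'latest'},
        {browserName: 'safari',        platform: 'macOS 10.13', version: 'latest'},
        {browserName: 'MicrosoftEdge', platform: 'Windows 10', version: 'latest'},
      ],
    },
  },
};
```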

As you can see, we run our WCT tests on the latest version of Chrome, Firefox, Safari, and Edge.

Closing remarks

In this post, we looked at the different types of tests we write to test the UI at Mixpanel. In the beginning, we wrote only end-to-end tests, which exercise the entire stack. Despite them being ill-suited for the purpose, we used end-to-end tests for testing the UI for a long time because that’s all we had. However, because of better modularization of our front-end code, we are now able to write front-end-only WCT tests for this purpose. Nonetheless, the introduction of WCT tests doesn’t obviate the need for end-to-end tests, which still serve the important function of verifying high-level behavior across the stack.

Since WCT tests are easier and less time-consuming to write compared to end-to-end tests, developers have been much more receptive to writing them. The difference in adoption between the two test types can be seen by taking a look at our codebase. We currently have almost 7x as many WCT tests as end-to-end tests, despite the fact that we’ve only been using WCT for a couple of years. Reducing the friction in writing and maintaining UI tests has therefore increased our regression coverage significantly, making for both happier users and happier front-end engineers.

Making Web Components Work

or: How We Learned to Stop Worrying and Love the DOM

 

Clean, attractive user interfaces and effective user experience have always been pillars of Mixpanel’s products. Over the years, as our data visualization UIs have introduced richer interactions and more advanced capabilities, a central concern of ours has been managing ever-increasing front-end complexity, driving us to build and experiment with approaches that simplify development and enable more powerful results. While the front-end world at large has gone through waves of framework churn and the accompanying fatigue of “Rewriting Your Front End Every Six Weeks”, this burst of ecosystem activity has also produced some great ideas and productivity gains. A recurring theme which has emerged and guided Mixpanel’s UI work is the strength of the “component” concept. Many of the successful JavaScript frameworks and libraries of recent years – React, Angular, Polymer, Vue, etc. – organize code and conceptual models to reflect the tree hierarchy of the rendered DOM, in such a way that complex UIs emerge from the composition of smaller elements which can render themselves and act semi-independently.

Developing quietly for years in the background of the JS wars, the set of Web Components standards has always promised something that no 3rd-party framework can offer: a suite of native in-browser technologies for creating and managing encapsulated UI components, leveraging well-known existing DOM/HTML APIs and open standards. Back in 2015, our front-end team started exploring the possibilities of Web Components – specifically Custom Elements and Shadow DOM – for building new features and gradually unifying our suite of legacy UIs. Since then, this has grown into our standard toolset for building UIs, both for greenfield projects and for introducing incremental updates to older features: the basis of new products like Insights, JQL Console, and Signal, as well as our expanding standardized component library. Using Web Components as a cornerstone of complex productionized UIs, however, has required development of tooling and responses to issues and gaps in the basic technologies: standardizing the rendering cycle, composing and communicating between components effectively, understanding which features can be polyfilled reliably on older browsers, running component code in server-side environments, etc. The following discussion aims to describe our choices and approaches, particularly the features of our little open-source library Panel which marries Web Components technologies to a state-based Virtual DOM renderer, effectively extending the basic standard to facilitate composing full, powerful UIs easily.

What if your app were just a DOM element?

The fundamental unit of Web Components is the good old HTMLElement, which your code extends by implementing methods to run when lifecycle events occur: an instance of your custom element is created, it is added to the DOM, its HTML attributes change, etc. We will explore the power of this approach in the following discussion with the help of a small interactive demo, the “Panel Farm” running below:

The demo is also available at https://mixpanel.github.io/panel-farm/ (with code at https://github.com/mixpanel/panel-farm). This toy project includes building blocks of more advanced usage: component nesting and intercommunication, build system, client-side routing, shadow DOM, animations, etc. Check out the demo and try inspecting the DOM with your browser’s developer tools. You’ll notice some HTML elements with custom tag names:

The  <panel-farm> element at the top level is not just a rendered result of running the app code; it is the app, accessible in the JavaScript DOM API as an HTMLElement with all the methods and accessors available to normal DOM elements, as well as some new methods. Try calling  document.querySelector(`panel-farm`).update({welcomeText: `meow!`}) in the JS console and watch the DOM update automatically on the Welcome page. Via the standard built-in browser dev tools, you can inspect the current app state, find HTML elements it’s rendered, enumerate its DOM children or its subcomponents, and perform live manipulations. Modern browser tools offer powerful debugging environments for Web Components, by virtue of their nature as HTML elements:

(NB: For an even more seamless in-browser development and debugging experience, the Panel State Chrome extension by Noj Vek adds a dev tools tab to the Elements explorer for inspection and manipulation of state entries.)

Custom Elements of various other kinds can already be found “in the wild,” whether for example in GitHub’s subtle  <time-ago> component that displays relative times (in use on github.com since at least 2014, as seen in this interview), or in the more recent 2017 rewrite of Youtube’s UI (based on Google’s Polymer framework, as noted in their blog post on the launch):

Still, despite some good company in using Web Components, our choice in 2015 of embracing the standard was admittedly unusual, betting on an under-development built-in browser technology as opposed to simply picking up one of the more ready-made popular JS libraries like React or Angular (although back when we were exploring these options, the front-end dev world was much less crystallized into these few options, and the now-popular Vue had nowhere near its current traction). It was clear at the time that the component-based approaches of all these libraries offered a great central concept for hierarchical UI code, and the popularization of “Virtual DOM” and DOM-diffing provided well-supported practical implementations of powerfully simple rendering APIs. Less widely-used and experimental frameworks, such as Mercury, Cycle, and Ractive, demonstrated that there was space for further exploration into “reactive” DOM templating (where the UI updates automatically to reflect the current state of a data store). Adopting a similar Virtual-DOM/state-based approach allowed us, with quite minimal code, to standardize our workflows for view templating, DOM update management, animation, component composition, and data flow management (in particular, making it easy to nest and communicate between components without a rat’s nest of event listeners); in other words, to give Web Components just the boost they need to work well for advanced UI development.

How it works

The Panel library is available under the open-source MIT license, with source code available at https://github.com/mixpanel/panel and package installation via NPM at https://www.npmjs.com/package/panel. API documentation lives at http://mixpanel.github.io/panel/. The description from the repo’s Readme offers a good distillation of the project’s goals and approach:

Panel makes Web Components suitable for constructing full web UIs, not just low-level building blocks. It does so by providing an easy-to-use state management and rendering layer built on Virtual DOM (the basis of the core rendering technology of React). Through use of the Snabbdom Virtual DOM library and first-class support for multiple templating formats, Panel offers simple yet powerful APIs for rendering, animation, styling, and DOM lifecycle.

The basic usage is straightforward and familiar from numerous Virtual DOM UI tools. A component is a JS object which renders part of the UI, maintaining an internal state object which is fed to the view template; calls to the component’s update() method apply changes to the state and trigger a re-render of any parts of the DOM which change as a result. Component lifecycle, on the other hand (element creation, DOM entry/exit, etc), is managed directly through the Custom Elements API (hooks such as connectedCallback() and attributeChangedCallback()). Probably the most important aspect of the API design is the decision to maintain the “vanilla” Web Components APIs as far as possible, rather than wrapping them in higher-level abstractions. Developers using Panel can rely on quality external references such as MDN’s web docs and Eric Bidelman’s excellent overviews (e.g., “Shadow DOM v1”) to understand standard patterns and usage; and this knowledge is transferable to other environments that use Web Components.

To call Panel a “framework” would be a stretch – it’s really more of a minimal glue layer between the Web Components API and the Virtual DOM rendering engine provided by Snabbdom, with just enough built-ins to address the pain points that we’ve confronted in our production apps. The core library code runs to a few hundred lines, much of which is comments and documentation for public methods. Apart from the Component/View layer which translates state into rendered DOM, a simple built-in Router handles syncing the URL/History API and the app’s state. The intention was to keep the library code lightweight and easily understood, without sacrificing the power of the core reactive rendering flow.

There is no baked-in model layer or data-/state-management framework. External libraries such as Redux and RxJS can plug in seamlessly to the view layer offered by Panel, and an optional Panel “State Controller” offers a lightweight mechanism for managing state separately from Component internals without bringing in further dependencies. Anything which can send state updates by calling update() with a JS state object will work with Panel (see the example at https://github.com/mixpanel/panel/tree/master/examples/redux). Similarly, a more traditional MVC Model layer such as Backbone.Model can work, by sending Component updates in response to model events, e.g.,  myModel.on(`change`, () => myApp.update({field: `new content`})). In Mixpanel’s newer apps, depending on complexity, we tend to avoid event-flow and model libraries, finding a sufficient solution in Plain Old JavaScript Objects representing state, supplemented occasionally with ES2015 Classes for more involved model-layer code.

The following brief case studies introduce some of the other significant features of Panel and Web Components as tools for flexible, full-featured front-end development.

Your widget is an app, your app is a widget

There is no formal distinction between a simple component and an “application.” In the Panel Farm app, the <animal-badge> component, which displays a picture of a cute animal in a circular frame, is completely standalone. It has an HTML attribute animal that determines which picture it shows, and it can be embedded anywhere simply by inserting it into the DOM.

[Live demo: a running <animal-badge animal="husky"> element (“Woof!”) renders here. Try inspecting it with browser dev tools and changing its animal attribute to “doge” or “llama” or…]

The  <panel-farm> “application” is composed of various such components and standard DOM elements, but conceptually it too is still just a Component, with nested child Components. Its main DOM template looks something like this (in Pug/Jade notation; see below on templating):

In the example above, since the  <animal-badge> element is a standalone Custom Element, its implementation doesn’t matter to the main app. It could be a Panel component, it could be a vanilla Web Component, or any other type of custom HTML element; it is simply inserted into the DOM and acts independently of the  <panel-farm> instance. The insertion of  <view-welcome> and  <view-farm> via the  child() method, however, explicitly links these elements to the <panel-farm> instance:

<panel-farm> and  <view-welcome> and  <view-farm> literally share a single state object. A call to  update() on any of these elements will result in all of them being updated if necessary. The various  <animal-badge>s, on the other hand, are Panel components which could maintain their own internal state and do not have access to the state of  <panel-farm>. This flexibility allows powerful combinations of self-similar Panel components, which can act in concert via the straightforward shared state mechanism, while still facilitating integration with 3rd-party components through their public APIs such as DOM events and HTML attribute listeners. In practice, state-sharing is useful for subdividing applications into linked components where updates to the central store cascade automatically (no need for swarms of event listeners and data flow logic), whereas standalone components work well for reusable UI building blocks with clear, limited APIs (and there are other options available to limit the state shared between linked components). This is how independent components from Mixpanel’s UI toolkit such as  <mp-dropdown> and  <mp-toggle> are gradually becoming integrated into parts of our front end written 5 years ago as well as last week.

Imperative and/or declarative

As Web Components, Panel components and apps can easily offer both declarative and imperative APIs. For instance, to mirror the type of imperative API favored by jQuery plugins, the <animal-badge> component could offer a public method that changes the picture it displays:
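Such a method might look like the sketch below. This is a hypothetical rendition of <animal-badge>, not its real source; the minimal update() here just merges state so the sketch runs anywhere (in a Panel component, update() also re-renders the template).

```javascript
// Imperative API sketch: a public method on the component class.
// The HTMLElement fallback lets the class load outside a browser.
class AnimalBadge extends (globalThis.HTMLElement || class {}) {
  setAnimal(animal) {
    this.update({animal}); // merge into state and re-render the template
  }

  update(stateUpdate) {
    this.state = Object.assign({}, this.state, stateUpdate);
    // ...Panel would re-render the virtual DOM template against this.state...
  }
}
```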

In this case, calling  setAnimal(`raccoon`) on an instance would render the template with updated state. The declarative alternative used in the Panel Farm code has the component read from its HTML attribute animal and update itself whenever its value changes, using the Custom Elements observedAttributes and attributeChangedCallback:
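The declarative version uses the standard Custom Elements hooks. The sketch below illustrates the pattern; <animal-badge>’s real internals may differ, and the HTMLElement fallback only lets the class load outside a browser.

```javascript
// Declarative API sketch: the component watches its own `animal` attribute
// via observedAttributes/attributeChangedCallback and updates itself.
class AnimalBadge extends (globalThis.HTMLElement || class {}) {
  static get observedAttributes() {
    return [`animal`]; // changes to this attribute trigger the callback below
  }

  attributeChangedCallback(name, oldValue, newValue) {
    if (name === `animal` && oldValue !== newValue) {
      this.update({animal: newValue}); // re-render with the new picture
    }
  }

  update(stateUpdate) {
    this.state = Object.assign({}, this.state, stateUpdate);
    // ...re-render the template against this.state...
  }
}

// In the browser: customElements.define(`animal-badge`, AnimalBadge);
// afterwards, <animal-badge animal="raccoon"> keeps itself up to date.
```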

The declarative option is particularly suited to using components within Virtual DOM environments, where declaring the expected state of the DOM is the natural mechanism, rather than calling methods to manipulate the DOM imperatively.

Templates and functions

The <panel-farm> top-level template example in a previous section uses the dedicated templating language Pug (formerly Jade):

This is the notation we use in Mixpanel’s apps for convenience, but it is largely syntactic sugar for the construction of template functions. The same template can be expressed as a pure inline JS function:
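Expressed as a function, the shape is roughly the following. A minimal stand-in for Snabbdom’s h() helper is defined here so the sketch is self-contained, and the template contents are illustrative rather than panel-farm’s actual markup.

```javascript
// Minimal stand-in for Snabbdom's hyperscript helper h(selector, data, children);
// the real one builds proper vnodes, but the shape is the same.
const h = (sel, data, children) =>
  Array.isArray(data) || typeof data === `string`
    ? {sel, data: {}, children: data}
    : {sel, data, children};

// A template is just a pure function from the state object to a virtual DOM tree.
const template = state =>
  h(`div.panel-farm`, [
    h(`div.welcome`, state.welcomeText),
    h(`div.farm`, state.animals.map(
      animal => h(`animal-badge`, {attrs: {animal}}))),
  ]);
```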

This takes in the component’s state object as input and returns as output a Virtual DOM tree (constructed using the dialect of Hyperscript notation used by Snabbdom). For the conversion from Jade to JS, we use the virtual-jade library and simply import runnable template functions:

But at the end of the day, any format which can convert to (Snabbdom-compatible) Hyperscript can work seamlessly here, including Facebook’s famously divisive JSX format (see the example in the Panel repo):

Light and shadow

The question of component styling and CSS scoping has received two recent innovative responses, in the divergent approaches favored by Web Components (the Shadow DOM spec) and by Virtual DOM-based systems (inline styling via “CSS in JS”). Panel apps can benefit from both approaches – even mixing if necessary – facilitating the appropriate method for different contexts and workflows.

A Shadow DOM approach allows you to retain the power of traditional CSS with respect to cascading styles, inheritance, and notation, while keeping styles isolated to your component tree:
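The raw standard API looks roughly like this (Panel wires this up through component config; the tag name and styles below are illustrative, and the HTMLElement fallback only lets the class load outside a browser):

```javascript
// Sketch: attaching a scoped stylesheet inside a component's shadow root.
class CoolWidget extends (globalThis.HTMLElement || class {}) {
  connectedCallback() {
    const shadow = this.attachShadow({mode: `open`});
    shadow.innerHTML = `
      <style>
        /* Ordinary cascading CSS, but scoped: these selectors cannot leak out
           of this shadow tree, and outside styles cannot reach in. */
        .label { color: steelblue; }
        .label.cool { font-weight: bold; }
      </style>
      <span class="label cool">Scoped!</span>
    `;
  }
}
```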

In this usage, the styling of elements within a component is managed largely in the “traditional” CSS manner, through the presence or absence of CSS classes and other selectors (and classes can be manipulated deftly through the object notation common to Jade and Snabbdom, e.g.,  {cool: true} to add or maintain the class cool on an element).

It is possible, however, to let the Virtual DOM renderer manage style properties itself, bypassing traditional stylesheets altogether, as the Panel Farm app does at one spot in the main template by setting a style object:
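The idea looks roughly like the sketch below: with Snabbdom’s style module, the object under the `style` key is written straight onto el.style on each render. (The h stand-in and image path are illustrative, not the actual Panel Farm template.)

```javascript
// Minimal stand-in for Snabbdom's h(); the real renderer applies `style`
// entries directly to the element's inline styles on every render.
const h = (sel, data) => ({sel, data});

const backgroundAnimal = state =>
  h(`img.background-animal`, {
    attrs: {src: `images/doge.png`},
    style: state.backgroundAnimalStyle || {}, // e.g. {top: `3px`, left: `10px`}
  });
```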

To see the effect of managing style this way, try running  document.querySelector(`panel-farm`).update({backgroundAnimalStyle: {top: `3px`, left: `10px`}}) in the JS console and watch the doge move to the other side of the viewport.

Both systems provide methods of scoping style rules to individual components without the problems of global selectors, and in Panel apps they can live side-by-side as necessary – the fine-grained declarative control of CSS-in-JS complementing the traditional cascading rulesets of Shadow DOM stylesheets. In practice, at Mixpanel we use CSS-in-JS techniques sparingly (for the exceptional cases which require true dynamic calculation in JS), sticking mostly to traditional global stylesheets for full application context (compiled from Stylus to CSS), and Shadow DOM scoped CSS (again compiled from Stylus) for generic UI components used across the product (with some caveats discussed below).

Bump and slide

Highly declarative UI models have always had some difficulty with animation: it’s easy to declare “this is what the DOM should look like right now,” but more difficult to notate transitions between different states cleanly. CSS transitions provide a relatively straightforward model for some situations and can be coupled to selector changes easily, e.g., “elements with class animal-badge have opacity: 1 by default, but when they have the class inout (entering or exiting) they have opacity: 0 and opacities transition between each other for 250ms.” These transitions work well with Virtual DOM systems, which can manage class and style changes seamlessly, but we run into trouble when trying to animate the main lifecycle events, elements being newly created or deleted. For these cases, some of the solutions suggested for Virtual DOM libraries can be pretty heavyweight and domain-specific (see for instance the discussion in https://github.com/Matt-Esch/virtual-dom/issues/112). It is largely due to Snabbdom’s simple, pragmatic support for element lifecycle hooks that we use it as the rendering engine for Panel, together with a simple class module extension that adds support for manipulating classes when adding and removing elements. These basic tools, for instance, allow the <view-farm> template to animate the removal and addition of <animal-badge>s by applying the inout class only when an element is transitioning in or out of the DOM:

Although complex animations that require JS calculations and multiple stages still need state management tailored to their specific context, the basic cases of managing transitions/animations on entry/exit and class changes represent the vast majority of situations we need for our UIs. Being able to produce these in a simple declarative fashion is a win.

It’s not all roses

Of course, there are still plenty of bumps and warts in the Panel/Web Components environment, and open questions which we continue to explore and debate.

The browser compatibility story is delicate

Although it seems like every year someone predicts that this will be the year Web Components go big (“#shadowdom2016”, alas…), and the promise of a natively-supported, cross-browser componentization standard is an attractive prospect, the real world isn’t quite there yet. At the time of writing, Chrome, Opera, and Safari have released native implementations of Custom Elements and Shadow DOM, with Firefox working on v1 API implementations (as of May 2018 Shadow DOM has been enabled in the Firefox Nightly build, and according to docs on MDN, both Custom Elements and Shadow DOM are “expected to ship in Firefox in 2018”); of the major browsers only Edge has not yet begun implementation work, and Shadow DOM and Custom Elements remain its most requested features (with “High” and “Medium” roadmap priority, respectively). So in order to work with the current versions of Firefox and Edge, we need to ship polyfills along with our production code. The suite of webcomponents.js polyfills from Google’s Polymer team is a marvelous piece of work and a wonderful gift to the open-source world – without the polyfills, using Web Components in customer-facing production environments would be a total non-starter – but there are many edge cases around DOM manipulation and it is impossible to replicate the behavior of native implementations exactly, particularly the style encapsulation of Shadow DOM. There were enough limitations/performance issues of the old Shadow DOM v0 polyfill and the newer ShadyCSS that we have needed to stick to scoping Shadow DOM CSS with specific classes until all our supported environments have Shadow DOM implementations; the Stylus prefix-classes built-in eases the pain considerably, but it is still a far cry from the real encapsulation of native Shadow DOM.

Custom Elements are global

Once you register an element definition with customElements.define('my-widget', myWidgetClass), every <my-widget> that appears in your HTML uses the code that you initially passed. For most environments and workflows this is fine, but it does prevent multiple versions of a component from coexisting on the same page under the same tag name. This limitation has affected us in cases where multiple scripts on the same page wanted to register the same components, but at the end of the day these are edge cases, and shipping competing registrations is an ill-advised approach anyway. Questions about how to package and export components remain unresolved: for instance, should a module just export a component definition class, or should it also add the component to the global customElements registry as a side effect of being imported?
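The collision behavior is easy to demonstrate. The sketch below uses a tiny Map-based stand-in registry (our own invention, purely so the example runs outside a browser); the native customElements registry enforces the same rule, throwing a DOMException when a tag name is defined twice.

```javascript
// Minimal stand-in for the browser's global CustomElementRegistry,
// used here only so the example runs outside a browser. The real
// registry enforces the same rule: one definition per tag name, ever.
class FakeRegistry {
  constructor() { this.defs = new Map(); }
  define(name, cls) {
    if (this.defs.has(name)) {
      // Native customElements.define throws a DOMException here.
      throw new Error(`'${name}' has already been defined`);
    }
    this.defs.set(name, cls);
  }
  get(name) { return this.defs.get(name); }
}

const customElements = new FakeRegistry();

class WidgetV1 {}
class WidgetV2 {}

customElements.define('my-widget', WidgetV1);

// A second script trying to register its own version of <my-widget>
// cannot replace or coexist with the first definition:
let collided = false;
try {
  customElements.define('my-widget', WidgetV2);
} catch (e) {
  collided = true;
}
console.log(collided);                                     // true
console.log(customElements.get('my-widget') === WidgetV1); // true

// A common defensive pattern is to check before defining:
if (!customElements.get('my-widget')) {
  customElements.define('my-widget', WidgetV2); // never reached here
}
```

The check-before-define guard avoids the exception, but it silently leaves the first registration in control – which is exactly why two scripts shipping different versions of the same tag name is best avoided altogether.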

Testing

Testing can require some involved infrastructure, because of the tight integration of components with browser APIs. The wct (Web Component Tester) tool, again from the Polymer team, provides a great solution for browser tests, integrating seamlessly with Sauce Labs to facilitate cross-browser testing in CI environments. Individual functions can be extracted from components for quicker/simpler unit tests; we do a fair amount of this with Mocha in a Node.js environment. But creating fast, simple, entirely deterministic tests for the behavioral logic of components – how components and apps transition between different states – has no one simple solution. State logic can be extracted to a StateController or Redux at the expense of extra layers of abstraction; Panel also provides a server-side environment which can load components and run their code without the overhead of loading a browser. The balance of different styles of tests and an agreed overall philosophy of UI testing are issues which we’re still pinning down.
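To illustrate the extract-and-unit-test pattern mentioned above: pure logic pulled out of a component needs no browser, no polyfills, and no wct run. The helper below is hypothetical (not from the Mixpanel codebase), and in practice these tests run under Mocha; plain output is shown here to keep the sketch self-contained.

```javascript
// Hypothetical helper extracted from a component's template logic so it
// can be unit-tested in plain Node.js: no DOM access, no component
// lifecycle, just input in and output out.
function formatCount(n) {
  if (n >= 1e6) return `${(n / 1e6).toFixed(1)}M`;
  if (n >= 1e3) return `${(n / 1e3).toFixed(1)}K`;
  return String(n);
}

// The component's template calls the helper; the helper itself stays
// trivially and deterministically testable:
console.log(formatCount(950));     // "950"
console.log(formatCount(1200));    // "1.2K"
console.log(formatCount(3400000)); // "3.4M"
```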

At the end of the day, despite the problematic aspects, it’s become abundantly clear over several years of building on Web Components at Mixpanel that they are absolutely viable for real-world, productionized front-end work. Once Firefox and Edge finish their implementations of v1 Custom Elements and Shadow DOM, we’ll have a truly cross-browser, native, powerful API supercharging the DOM for the needs of modern web applications.

Being able to work with the DOM API directly and browsers’ built-in development tools comes with distinct advantages, and helps replace the cognitive load of framework specifics with standardized techniques and tooling (HTML element attributes/properties, encapsulated styling via CSS, etc.). The occasionally-advanced idea that Web Components can spell the end of JS frameworks may be rather exaggerated – complex applications need much more management than just component encapsulation and lifecycle, and we built Panel to fill in some of the missing pieces of the Web Components environment around rendering, communication, and state management – but they do represent an important step forward for dynamic web UIs. Easy interoperability between disparate frameworks, a standardized API for componentization, simpler and more lightweight client-side code: these developments are not to be taken lightly, as elements of the frenetic JS library world begin to migrate to the more stable, long-term view from the browser-dev side. It’s early days yet, but Web Components open an exciting avenue forward for browser UI development, and it feels great to take steps toward that brighter future.

Straightening our Backbone: A lesson in event-driven UI development

Mixpanel’s web UI is built out of small pieces. Our Unix-inspired development philosophy favors the integration of lightweight, independent apps and components instead of the monolithic mega-app approach still common in web development. Explicit rather than implicit, direct rather than abstract, simple rather than magical: with these in-house programming ideals, it’s little surprise that we continue to build Single-Page Applications (SPAs) with Backbone.js, the no-nonsense progenitor of many heavier, more opinionated frameworks of recent years.

On an architectural level, the choice to use Backbone encourages classic Model-View designs in which control flow and communication between UI components is channeled through events, without the more opaque declarative abstraction layers of frameworks such as Angular. Backbone’s greatest strengths, however – its simplicity and flexibility – are a double-edged sword: without dictating One True Way to architect an application, the library leaves developers to find their own path. Common patterns and best practices, such as wiring up Views to listen for change events on their Models and re-render themselves, remain closer to suggestions than standard practices, and Backbone apps can descend into anarchy when they grow in scope without careful design decisions.
