Friday, December 29, 2017

Kubernetes Distilled, Part 1: Deployments and Services

Kubernetes documentation is thorough, and you should read it when you have the time. Until then, this is a condensed overview aimed at developers using a Kubernetes platform. It should get you comfortable enough, quickly enough, to have a foothold for experimenting and for understanding the official documentation where more detail is needed.

I'm not an expert, so YMMV.


Kubernetes (or "k8s", which is to Kubernetes as a11y is to accessibility) runs on top of a cluster of nodes, where a node is a machine, physical or virtual, and a cluster is a datacenter or part of one. Nodes are the resources k8s uses to run its Control Plane and the workloads it schedules. You'll interact with k8s through the control plane API, usually from a command line client like kubectl, or from client libraries in a language of your choice.

On top of k8s, there are often services like OpenShift which provide yet another layer of abstraction, and can for example handle provisioning nodes and clusters running k8s for you.


K8s APIs are declarative. You do not say exactly how your application will run. Instead, you describe your needs in terms of objects (sometimes referred to as "resources", such as in kubectl help), each with a kind, a specification (or simply "spec"), and metadata. At its core, k8s is a basic, generic framework around these objects that listens for changes to their specs and statuses. Upon this framework, k8s builds its abstractions as decoupled extensions.

There are low level kinds of objects like Pods, usually managed by high level objects like Deployments. Objects can manage other objects by means of controllers. Controller-backed objects like Deployments and Services are usually where developers spend their time interfacing with k8s as they provide a high level of abstraction about common needs.

Specs are usually provided via the kubectl command line client and yaml files that look something like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80

Controllers constantly watch the status and the spec of the objects they manage and try to keep the two in sync. This is how your updates are recognized and how failures are recovered. For this reason, if you go "below" an abstraction and try to change a lower level object's spec directly, your changes may quickly be undone as k8s "recovers" the objects that strayed from their specs. It is also technically possible to create situations where the same objects have multiple conflicting states specified by other objects, causing controllers to constantly flip their states back and forth between the differing specs.

All objects' metadata includes a name (a lowercase string of alphanumerics, dashes, and dots, unique among objects of the same kind) and a uid (unique among all objects over the lifetime of the cluster). Names are required; uids are provisioned automatically. Other metadata requirements vary by object kind.

Most of your kubectl usage will be via the create, get, and replace subcommands which work with objects, their specs and statuses (for example kubectl get -o yaml deployments my-deployment).


A pod defines a single deployable unit as one or more containers that share networking and storage. This is where your code runs. A pod is to a container like a VM is to your application's process(es). Most pods will run one container, and most containers will run a single main process. Each pod gets its own IP address. Like VMs, pods are your unit of horizontal scaling: pods are replicated by a kind of controller, like a ReplicaSet. Unlike VMs, pods are always ephemeral: they are short lived, and they don't maintain state or their IP addresses after they are destroyed. Non-volatile, persistent storage is provided by a different object, a PersistentVolume. A load balanced virtual IP is provided by a Service.

Pods created directly are not maintained by a specific controller, so you likely will spec and create pods indirectly through templates inside other objects' specs. Templates tell controllers, like the DeploymentController (which uses a PodTemplateSpec inside a DeploymentSpec), how to define PodSpecs for pods they manage.


Deployments handle deploying and updating your application as a set of containers, with various resource requirements, across a number of scheduled pods. Generally, your first steps into k8s will be defining a DeploymentSpec. Technically, a Deployment manages ReplicaSets, and each ReplicaSet manages its own set of Pods.

In addition to the usual object requirements (apiVersion, kind, metadata), a basic Deployment spec includes...

A PodTemplateSpec, which defines the containers and volumes of a pod. A container spec includes the image to use, and the ports to be exposed, like so:
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80

Changing the template will result in a rollout. This will create a new ReplicaSet with pods using the updated template, scale it up to the number of desired replicas, and scale down the old ReplicaSet to 0. Deployments have a DeploymentStrategy which defaults to RollingUpdate that maintains at least 75% and at most 125% of desired replicas up at all times (rounded).
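Those 75%/125% bounds come from the strategy's defaults. Spelled out explicitly, the equivalent fragment of the Deployment spec looks like this (shown only for illustration; you get this behavior without writing it):

```yaml
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%   # never below 75% of desired replicas
      maxSurge: 25%         # never above 125% of desired replicas
```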

An immutable label selector, intended for developers to group the pods to be managed by a single Deployment. Multiple Deployments should never select the same pod(s). Generally this will match the pods' labels:
  selector:
    matchLabels:
      app: nginx
The number of pods to run ("replicas") among pods matching the selector.
  replicas: 3

For more detailed configuration, see Writing a Deployment Spec and the Deployment API reference.

Services, Endpoints, and discovery

Deploying your application may be all you need if it does purely background work. However if your application provides a remote API, you can use a Service object to define a virtual IP (with resolvable domain name, if you're using KubeDNS) that load balances among the service's selected pods. A service spec selects pods the same way deployments do, via label selectors.

Under the hood, the ServiceController maintains an Endpoints object for each Service, listing the IPs and ports of that Service's healthy pods. Nodes in the cluster are configured to load balance connections to the single virtual IP (called the "cluster IP") among those pods, by default via simple round robin.

kind: Service
apiVersion: v1
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80

Services can be discovered using docker-style environment variables or via DNS.

To get domain names, you must use KubeDNS. KubeDNS is an addon service and deployment that runs on k8s like any other, additionally configuring pods to use it for name resolution, and "watches the Kubernetes API for new Services and creates a set of DNS records for each". KubeDNS assigns a domain name with the format "${service.name}.${service.namespace}.svc.${cluster.DNSDomain}", with an A record pointing at the cluster IP. The service name and namespace come from metadata. If no explicit namespace is provided, "default" is used. The cluster DNSDomain comes from the KubeDNS config map (more on config maps later); the default is "cluster.local". With defaults, the example above would be resolvable from pods within the cluster at "nginx.default.svc.cluster.local". Pods' DNS resolution has some additional defaults configured, so pods in the same namespace of the same cluster can simply use the domain name "nginx".

Services have different types. By default, the ClusterIP type is used, which does nothing more than assign a cluster IP and expose it to the cluster, but only the cluster. To expose a service outside of the cluster, use the LoadBalancer type. (Don't read too much into the name: services of every type do load balancing; the LoadBalancer type specifically provisions an external load balancer.)
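For example, here is a sketch of an externally exposed variant of the nginx service (the name nginx-external is made up for illustration, and whether an external IP is actually provisioned depends on your platform's load balancer integration):

```yaml
kind: Service
apiVersion: v1
metadata:
  name: nginx-external
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
```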


To recap the basics:

  • Kubernetes uses a framework of "objects" with "metadata" and "specifications."
  • Many objects are managed by "controllers", processes running within the Kubernetes control plane that watch objects' statuses and specifications, automating the work necessary to keep the resources described by the objects in sync with their specifications.
  • Your application runs as a set of containers inside replicated, ephemeral Pods. The PodSpec has which image to use and the ports to expose.
  • You can deploy and replicate your application using a Deployment and a PodTemplateSpec.
  • You can expose your application to other pods using a Service which creates a persistent, virtual IP routable within the cluster and, if KubeDNS is used, a domain name resolvable within the cluster's Pods.

In part two I will likely talk about how to templatize configuration for your application and how to provide persistent storage to your pods. Please comment if there is something else you'd like a terse, useful summary of.

Thanks for reading!

Monday, July 14, 2014

A Case Study of Java's Dynamic Proxies (and other Reflection Bits): Selenium's PageFactory


I spend a lot of time studying the source code of libraries that I really enjoy using. Since my day job is developing automated web UI tests at Red Hat, Selenium is one of those libraries. At the time I started writing this, I was not very familiar with Java’s reflection capabilities, nor with what the heck a “Dynamic Proxy” was. The Proxy pattern is particularly powerful, and java.lang.reflect’s dynamic proxies provide a fairly nice way to do it. Dissecting PageFactory demonstrates both.

What is PageFactory?

Selenium WebDriver comes with a fancy class called PageFactory. It allows you to write your page objects like this:

class LoginPage {
  private WebDriver driver; // assume this is assigned in a constructor
  private WebElement username;
  private WebElement password;
  @FindBy(id = "submitLogin")
  private WebElement submit;

  public HomePage login(String usernameToType, String passwordToType) {
    username.sendKeys(usernameToType);
    password.sendKeys(passwordToType);
    submit.click();
    return new HomePage(driver);
  }
}
Those familiar with Selenium and the Page Object Pattern will notice right away our members are not By types, but WebElements. And by the looks of it, we can act on them straight away without having to findElement all over the place, despite appearing uninstantiated. And what’s that annotation? This is where the PageFactory automagic happens!

If you’d like to learn more about PageFactory and how to use it, check out the Selenium wiki. If you’re like me and have to know what crazy Java wizardry is behind those factory gates, here’s your golden ticket…

Charlie and the Page Factory

The magic starts when you initialize the page class via PageFactory’s static initElements method. It sprinkles reflection and dynamic proxy fairy dust on your WebElement fields so that rather than throwing NullPointerException, they do stuff. This post covers the details of that process, so that you too may wield such black magic!

Pass initElements either a WebDriver and a page object’s .class

// If you want to instance a new page from test code.
public void funWithPageFactory() {
  WebDriver driver = new FirefoxDriver();
  LoginPage loginPage = PageFactory.initElements(driver, LoginPage.class);
  // Do stuff
}

Or a WebDriver and the page instance itself.

public class LoginPage {
  // Elements go here...

  public LoginPage(WebDriver driver) {
    // Passing an already instantiated instance is just as cool
    PageFactory.initElements(driver, this);
  }

  // Methods to do stuff on elements go here...
}

Either way, you start with the most reduced set of data you need: a WebDriver and some class that has WebElement fields (classic constructor-style dependency injection). These elements have optional annotations that describe how to construct a By instance. And as you very well know, those are used by a driver to find elements. So, really, those are the three essential ingredients: a WebDriver, WebElement fields, and By objects. We make Bys from annotations, or infer them from the names of the WebElement fields.

So if you can’t use an element before finding it, when does driver.findElement(by) actually get called? The end result of a chain of Oompa Loompa shenanigans is that elements “find themselves” when they are called upon. Behind the scenes, findElement is not being called until you actually try to “do stuff” on that element. That’s the real drama of PageFactory and where the bulk of interesting work happens. Let’s take a look at that flow.

A Factory in Chicago that Makes Miniature Models… of Factories

The following steps are a little hard to follow (perhaps because of a little pattern overload… but I won’t argue with Google). Ultimately, we need to sheathe those WebElement fields in Proxy instances that implement WebElement and, before invoking your desired method on the desired element, make that driver.findElement call we know needs to happen in order to get the element we want to work with. PageFactory wraps that need in ElementLocators. We’ll need a locator for each element, and so Selenium divvies out that duty to, you guessed it, an ElementLocatorFactory.

1. Instance an ElementLocatorFactory by giving it a SearchContext

An ElementLocatorFactory can take a WebElement field from the page object, combo it with that SearchContext (in a typical case this is the WebDriver we passed initElements), and spit out, you guessed it, ElementLocators. We reference the individual fields via the reflection api, and we get actual Field objects that the locator factory can accept in order to create locators.

Here’s some code to illustrate that:

// ElementLocatorFactory instantiation and usage.
public void demonstrateLocatorFactory() {
    WebDriver driver = new FirefoxDriver();
    // Some page modeled with the PageFactory pattern.
    LoginPage loginPage = new LoginPage(driver);

    // ElementLocatorFactory is an interface, and 
    // DefaultElementLocatorFactory is Selenium's stock implementation
    ElementLocatorFactory locatorFactory = 
            new DefaultElementLocatorFactory(driver);

    // Here's the reflection bit. It just does exactly what it looks like.
    // Field types have a method to access their annotations (if any), so
    // the ElementLocator will use the Field to get the annotations, which
    // have all the info to create a By object, as we'll discuss.
    Field[] fields = loginPage.getClass().getDeclaredFields();

    for (Field field : fields) {
        // Assume for brevity these are all WebElement fields
        ElementLocator locator = locatorFactory.createLocator(field);
        // Do stuff with the locator...
    }
}

So the Locator Factory creates Locators from Fields. If you look at all these interfaces (SearchContext, ElementLocator, By), you’ll start to see they look really similar. What’s special about an ElementLocator instance specifically is that it can find an element on demand without any parameters. A SearchContext needs a By to do that. A By needs a SearchContext. Like Doc Brown needs both a flux capacitor and some plutonium to travel through time, we need both a DOM context and a means-of-finding-something-in-that-context to reference an element.

Now, we’ve got a SearchContext (from the driver we passed initElements), but what about the By? Well, actually, we have that already too. More specifically, we have enough information to make a By. We have our page object, that page object has fields, and each field has an annotation that says, “Hey, this is how you make a By for me.” If it doesn’t, we assume that the name of the field is the exact id (in HTML terms) of the element we’re looking for. And so, an ElementLocator constructs a By itself, given the information attached to a field. And now we’ve got a SearchContext and a By wrapped up in this ElementLocator guy that we can pass around like it’s a WebElement… Effectively it is! That is, without the shortcomings of a direct WebElement reference. In order to get a WebElement we have to find it first, and if it can’t be found, we can’t get a reference to it (findElement would throw a NoSuchElementException). An ElementLocator, on the other hand, is as good as pointing to a specific element, but we can hold off on actually finding that element until we’re ready to assert that the element should actually be there in the driver’s current context. An ElementLocator can even cache an element once it’s found it, and just reuse it on subsequent lookups.

In summary, an ElementLocatorFactory takes a SearchContext and a Field, smashes them together and makes a portable ElementLocator. An ElementLocator constructs the By from the Field, looking at its annotations if it has any, and from there it has all the ingredients to reference a specific element without additional parameters. This useful feature is going to be essential in a minute.

2. Instance a FieldDecorator by giving it the ElementLocatorFactory

The next type we encounter is a FieldDecorator. A field decorator is the thing that actually uses the ElementLocatorFactory, so we instantiate the decorator by passing along the locator factory in the decorator’s constructor. It’s going to need the ability to generate locators for a given field (which is what the factory does), because it has the core task of actually assigning WebElements to the fields of our page object – that is, “decorating” those fields.

3. Use the FieldDecorator to assign references to the page object’s WebElement fields

The decorate method of our FieldDecorator takes a Field and a ClassLoader. The ClassLoader is just what it sounds like: every Java class is loaded by “something.” To load a class is to take a class definition in some form (usually compiled .class bytes) and turn it into a live, runnable class inside the JVM: the real working bits. There are different ClassLoader implementations depending on the platform or the source of the Java class. PageFactory will always just reuse the ClassLoader that loaded our page object. Any Class<?> object has a getClassLoader method for this purpose.

More important is the Field, which the FieldDecorator will use to generate an ElementLocator for the particular field we are attempting to decorate. Cool, but how do we get a Field object? The Class<?> type also provides this facility, via getDeclaredFields. This is reflection. With a Field, you can examine its modifiers, and also set or get its value on a particular instance of the class that declares the field. This is what PageFactory does, as seen here:

private static void proxyFields(FieldDecorator decorator, Object page, 
    Class<?> proxyIn) {
  Field[] fields = proxyIn.getDeclaredFields(); // proxyIn is just the page 
                                                // object being initialized.
  for (Field field : fields) {
    Object value = decorator.decorate(page.getClass().getClassLoader(), field);
    if (value != null) {
      try {
        field.setAccessible(true); // Fields accessed via reflection still 
                                   // obey Java's visibility rules, however 
                                   // this can be overridden by setting the 
                                   // "accessible" flag.
        field.set(page, value);
      } catch (IllegalAccessException e) {
        throw new RuntimeException(e);
      }
    }
  }
}
As you can see, FieldDecorator.decorate(...) returns an object, and we set that object as the value of the field we passed to it. What is that value? You might be able to guess at this point. Recall we instantiated the DefaultFieldDecorator by passing along the ElementLocatorFactory. So this thing knows how to make element locators, perhaps it just returned an element then based on the locator for the field? What if the element couldn’t be found at initialization?

Enter the proxy pattern. Instead of assigning those fields a WebElement directly, we assign them a “proxy” instance of a WebElement. That is, an object that implements the WebElement interface, but not by way of a conventional class. Instead, when methods are called on the proxy, that method and its arguments are passed to an intercepting method (as in Method method, Object[] args). That intercepting method is ours to implement by way of an InvocationHandler. When we implement the InvocationHandler interface, we implement that intercepting method. There, we can do whatever we want, provided it returns a type that complies with the method’s signature. Due to that constraint, it usually involves calling invoke on the original Method object (say, click()), passing some other WebElement as the receiver. See where this is going? That “other” WebElement is the one our ElementLocator can track down independently. By implementing WebElement via a proxy, we defer calling SearchContext.findElement (and potentially throwing an exception) until we actually try to do something with that element. Magic!

Instantiating and implementing a Proxy is quite simple. Here’s a contrived example:

// The interface(s) that the proxy will implement governs this type.
WebElement proxyElement;

// java.lang.reflect.Proxy has a static method, newProxyInstance. At compile 
// time we can only say that this returns an Object type, but it's really 
// returning a new class that implements whatever interfaces we say it does.
// So we can safely cast to WebElement.
proxyElement = (WebElement) Proxy.newProxyInstance(
        // We have to pass a ClassLoader here so the proxy class can be 
        // defined. Recall this is why decorate accepts a ClassLoader (with 
        // which we pass the page object's ClassLoader).
        WebElement.class.getClassLoader(),
        // This is an array of Class types -- these are the interfaces that 
        // this object supports, governing the cast rules and the methods we 
        // have to be able to handle in our InvocationHandler. This is why we
        // can cast to WebElement.
        new Class[] {WebElement.class}, 
        // This is our invocation handler, discussed next. Assume "locator"
        // is an ElementLocator already in scope.
        new LocatingElementHandler(locator));
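None of this machinery is Selenium-specific. Here’s a minimal, self-contained sketch of the same interception trick using only java.lang.reflect; the LoggingHandler name and the List example are mine, not Selenium’s:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

public class ProxyDemo {
  // Records each intercepted method name, then delegates to a real List,
  // just as LocatingElementHandler delegates to a freshly found element.
  public static class LoggingHandler implements InvocationHandler {
    public final List<String> target = new ArrayList<>();
    public final List<String> calls = new ArrayList<>();

    public Object invoke(Object proxy, Method method, Object[] args)
        throws Throwable {
      calls.add(method.getName());        // every call is intercepted here first
      return method.invoke(target, args); // then forwarded to the real object
    }
  }

  public static void main(String[] args) {
    LoggingHandler handler = new LoggingHandler();
    @SuppressWarnings("unchecked")
    List<String> proxied = (List<String>) Proxy.newProxyInstance(
        List.class.getClassLoader(),
        new Class[] {List.class},
        handler);

    proxied.add("hello");
    System.out.println(proxied.size());  // 1
    System.out.println(handler.calls);   // [add, size]
  }
}
```

Every call on the proxy flows through invoke first, which is exactly the hook PageFactory uses to look the element up lazily.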

And this is what the InvocationHandler looks like that PageFactory uses, with my own comments added:

public class LocatingElementHandler implements InvocationHandler {
  private final ElementLocator locator;

  // Inject the thing that can look up a specific element at a later time
  public LocatingElementHandler(ElementLocator locator) {
    this.locator = locator;
  }

  public Object invoke(Object object, Method method, Object[] objects) 
      throws Throwable {
    // The lazy look up!
    WebElement element = locator.findElement();

    // This proxy also implements "WrapsElement" and must implement its single
    // method manually, like so:
    if ("getWrappedElement".equals(method.getName())) {
      return element;
    }

    try {
      return method.invoke(element, objects);
    } catch (InvocationTargetException e) {
      // If the method that is reflectively invoked throws an exception, it's 
      // rethrown as an "InvocationTargetException". We can throw the original,
      // more interesting exception by "unwrapping" it.

      // Unwrap the underlying exception
      throw e.getCause();
    }
  }
}

All of this work is encapsulated inside of the field decorator. You pass it a field and an ElementLocatorFactory, and it returns a proxy, which we then assign to the respective field. Tada!


With Java’s reflection and dynamic proxies you can,

  • Retrieve a class’s fields and modify them at runtime
  • Pose as an implementation of an interface by intercepting method calls and implementing your own logic (i.e. a proxy)

And this is how PageFactory does its magic.

Happy reflecting!

Saturday, May 18, 2013

[Outdated] The Fastest Way To Implement JavaScript Classes with Inheritance

"JavaScript is a beautiful and expressive language." Everybody says it, and it's true! The downside, of course, of such a flexible language is that there are 1001 ways to do everything, and not all ways are created equal. The performance (or lack thereof) of different patterns can be surprising, as well as their implementation quirks.

JavaScript's various patterns for object-oriented programming comprise an especially vibrant topic. Many JavaScript developers have their own Best Way to implement JavaScript classes and inheritance. Many others simply use existing libraries to facilitate the task. I've looked at a lot of different patterns, analyzed their performance, took their convenience into account, and came up with what I think is the simplest, most practical and most performant approach.

In other words, my Best Way. ;-)

Where Were They Going Without Ever Knowing The Way(s)

I'll try and broadly categorize the most common approaches out there that I've encountered. Keep in mind the dynamics of the language are such that even within these categories, there are still many popular variations, and so these categories are by no means thorough, but for performance comparisons it won't make much of a difference.


Perhaps the most "old school" pattern, closures allow truly private variables and simple inheritance. In the closure method, you define a new class by defining a constructor function which defines your class properties and methods on a local object variable and returns that object, effectively creating a new instance of a class.

function ClassConstructor(privateValue, publicValue) {
    var classObject = {};
    var privateProperty = privateValue;
    classObject.publicProperty = publicValue;
    classObject.getPrivateProperty = function () { 
        return privateProperty;
    };
    classObject.method = function (arg) {
        // Private properties are not actually properties of the object we are
        // creating to represent our class, but nonetheless they are accessible
        // to the instance of this class we are returning with this function
        // (and ONLY to THAT instance) because of the closure. Refer to them in
        // methods just with their names.
        privateProperty += arg;
        // Refer to public properties with the 'this' keyword.
        this.publicProperty += arg;
    };
    return classObject;
}

And so, it's simple to create a new instance of a class, and the rest works as expected.

var instance = ClassConstructor('private', 'public');

instance.method('argument');

// Returns undefined!
instance.privateProperty;

// Returns 'publicargument'
instance.publicProperty;

// Returns 'privateargument'
instance.getPrivateProperty();

Inheritance in the closure format is also quite simple. If you're creating a child class, call the parent class's constructor in the child class's constructor, and extend that instance with your child class's additional properties and methods.

function ChildClassConstructor(privateString, publicString, childClassProp) {
    var childClassObject = ClassConstructor(privateString, publicString);
    childClassObject.childClassPublicProperty = childClassProp;
    childClassObject.childClassPublicMethod = function () {
        // Note the parent's truly private variables are not visible here;
        // only functions defined in the parent constructor close over them.
        this.publicProperty += childClassProp;
    };
    return childClassObject;
}

The closure method has elegantly clean code in my opinion. If you know basic JavaScript, it makes perfect sense, and of course implements private members beautifully. Additionally, the code that defines a new class runs extremely fast, because all you are doing is defining a function.

Unfortunately, that's only relevant if you're defining a lot of new classes at runtime, which will cause JIT optimizers to hate you anyway. The flip side of having very little work to do to define a class is that there's more work to do to actually instantiate a new object of that class. The closure method is an awful performer compared to the other, more browser-optimized approaches.

"New" School

The next category I want to talk about employs the new operator. Browsers really like new because the objects it's creating are already defined via a constructor and its prototype property. They know exactly how to make new objects of that type. When we create objects with the closure method, we're doing so from the ground up, defining each property and method one at a time, each time an instance is created. Intuitively, we know those objects are going to be the same because we've defined the procedure to make them, but browsers much prefer a sort of, "blueprint object" to copy. It's the difference between instance creation taking 1 line of code (with the new operator), or any number of lines of code (with the closure method), depending on the complexity of the class.

Let's take a look at how to define classes via a constructor and its prototype property, for use with new.

// Note the name of this function is also used as the name of our class

function SomeClass(value1, value2) {
    // We use 'this' because unlike our closure constructor, this function is
    // going to be called with the new instance of our object as the invocation
    // context. So 'this' is referring to that new instance, and we'll use it to
    // define and set the values of the properties of this class to the values 
    // of the arguments passed to the constructor. Note that this means when you
    // define properties here they are not part of the prototype, but local to 
    // each instance. For local properties, this is what we want.
    this.property1 = value1;
    this.property2 = value2;
}

// Methods, on the other hand, should be defined on the prototype property, as 
// they won't be changing from instance to instance.

SomeClass.prototype.method = function (arg) {
    this.property1 += arg;
    this.property2 += arg;
};

And (drumroll), time for new to do its magic! Here's how to create a new instance of our generic, "SomeClass" type.

var instanceOfSomeClass = new SomeClass(123, 456);

Tada! When we use new this way, a few things happen. First, a new object is created with the value of the prototype property of SomeClass as its prototype. Then, the constructor function, SomeClass, is called with the arguments we pass to it and that new object as the invocation context. Finally, the new instance is assigned to our instanceOfSomeClass variable. And this all happens really fast.
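Those steps are easy to observe directly. Here's a quick sketch, repeating the SomeClass definition so the snippet runs standalone:

```javascript
function SomeClass(value1, value2) {
    this.property1 = value1;
    this.property2 = value2;
}

SomeClass.prototype.method = function (arg) {
    this.property1 += arg;
    this.property2 += arg;
};

var instance = new SomeClass(1, 2);

// Step 1: the new object's prototype is SomeClass's prototype property.
console.log(Object.getPrototypeOf(instance) === SomeClass.prototype); // true

// Step 2: the constructor ran with the new object as 'this'.
console.log(instance.property1); // 1

// Methods are found via the prototype.
instance.method(10);
console.log(instance.property1, instance.property2); // 11 12
```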

You Are The Prototype

Closures implement inheritance very simply: just by making copies of objects and extending them. When we use the new operator, we'll make use of JavaScript's prototypal inheritance model. You're probably already familiar with it, but just in case, it's basically this: every object has an associated prototype object. When looking up a property or method on an object, the interpreter will first check the object itself, of course, but if it's not found it'll then look to the object's prototype. If it's still not found, it'll check the prototype's prototype, and so on, until it's found or the end of the prototype chain is reached (the generic Object.prototype, whose own prototype is null).
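Here's a quick sketch of that lookup chain in action, using Object.create to set an object's prototype directly:

```javascript
// A tiny prototype chain, built directly with Object.create.
var proto = { greet: function () { return "hi"; } };
var obj = Object.create(proto); // obj's prototype is proto

console.log(obj.greet()); // "hi" -- not found on obj, found on its prototype

obj.greet = function () { return "hello"; };
console.log(obj.greet()); // "hello" -- own property shadows the prototype's

// The chain ends at Object.prototype, whose own prototype is null.
console.log(Object.getPrototypeOf(Object.prototype)); // null
```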

To create an instance of a subclass, we need the prototype chain of that instance to look up the subclass, then the parent class. An instance gets its prototype from the prototype property of its class's constructor, and so we need that prototype property to have its own prototype, which should be the prototype property of the parent class's constructor. This is analogous to how we implemented inheritance with closures. Essentially, we'll start with a previous object and extend it. Except in this case we're effectively dealing with prototypes: we start with a parent class's prototype, and then extend it.

Referencing a method or property of an instance of the child class will check that object first, then its prototype (the child class's), and then its prototype's prototype (the parent class's), stopping wherever the property or method is found first. That's a mouthful. Let's check out some code.

// Start just as before with a normal class definition, defining properties in
// the constructor, and methods on the prototype of that constructor.

function ParentClass(x,y) {
    this.x = x;
    this.y = y;
}

// Now extend the prototype *property* of the constructor with some methods.
ParentClass.prototype.add = function (x,y) {
    this.x += x;
    this.y += y;
};

// Now let's define the constructor of our child class.

function ChildClass(x, y, z) {
    // Okay, so when a new instance of ChildClass is created it's going to have
    // the prototype chain taken care of, but what about the constructors?
    // There's still relevant properties and initialization to take care of
    // there! Welp, it's less than glamorous, but we just have to call it
    // ourselves, using 'this' as the invocation context. Remember,
    // constructors are called with the new 'instance' of the class as the
    // invocation context, so with 'this' we're just passing that along with the
    // relevant arguments.
    ParentClass.call(this, x, y);
    // Initialize the properties new to the child class.
    this.z = z;
}

// Now here's where we inherit the prototype of the parent class. Notice we use
// 'new' here because we want to modify the actual prototype of the prototype
// property. Right now the only mechanism we have to do that is with 'new',
// which returns a new object with the prototype equal to the operand's 
// (ParentClass's) prototype property.

ChildClass.prototype = new ParentClass();

// Now, this overwrites something important to us. The prototype property a
// function starts with has a 'constructor' property pointing back at that
// function, and code that inspects an instance's type often relies on it. When
// we overwrite the prototype property entirely this way, we're also losing the
// constructor property that would have been === ChildClass. No workaround
// but to fix it manually.

ChildClass.prototype.constructor = ChildClass;

// Now we can extend the prototype as normal, overriding parentClass methods
// with new ones of the same name (to be found first in the prototype chain),
// or additional, unique methods for the ChildClass.

ChildClass.prototype.add = function (x, y, z) {
    this.z += z;
    // In methods, calling the parent class's version works the same way as in
    // the constructor.
    ParentClass.prototype.add.call(this, x, y);
};
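Condensed into one runnable listing, the whole pattern so far looks like this:

```javascript
function ParentClass(x, y) { this.x = x; this.y = y; }
ParentClass.prototype.add = function (x, y) { this.x += x; this.y += y; };

function ChildClass(x, y, z) {
    ParentClass.call(this, x, y); // run the parent constructor on 'this'
    this.z = z;
}
ChildClass.prototype = new ParentClass();      // wire up the chain
ChildClass.prototype.constructor = ChildClass; // repair the back-reference
ChildClass.prototype.add = function (x, y, z) {
    ParentClass.prototype.add.call(this, x, y); // delegate to the parent's add
    this.z += z;
};

var c = new ChildClass(1, 2, 3);
c.add(10, 20, 30);
console.log(c.x, c.y, c.z);            // 11 22 33
console.log(c instanceof ParentClass); // true
```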

Alright! We have a fully functioning object-oriented approach for JavaScript, complete with inheritance, that is very fast. Now for the finale.

My Way Can Beat Up Your Way

Okay, we're just about done, but let's take a look at a few issues with our latest approach, and get to the specifics of my Way.

First off, when we create an object with new, we know that it calls the constructor function, in addition to spawning a new object with a prototype equal to that constructor's prototype property. When we're setting up the prototype property of a child class's constructor, we want the object with the right prototype, but we don't want to call that constructor. We didn't even pass any arguments, and what arguments would we pass at that point? Worse, if you have some heavy initialization code in your parent class's constructor that does more than just assign those arguments to properties, it probably won't work at all.

Luckily, there's a really easy way around this for modern browsers (>IE8). Instead of using new parentConstructor() use Object.create(parentConstructor.prototype). This does the same exact thing as new, except it doesn't call the constructor function, and it accepts the object-to-be-used-as-a-prototype as an argument directly, instead of a constructor function. It can also do some fancy stuff with a second argument, though it's not really relevant to this post. The only downside is that Object.create is about half as fast as new, but since it's only called once per subclass definition, I don't imagine that ever outweighing its benefits.
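The difference is easy to see side by side (Parent here is a toy class, just for illustration):

```javascript
function Parent() {
    // Stand-in for heavy initialization we don't want to run
    // just to wire up a prototype.
    this.initialized = true;
}
Parent.prototype.greet = function () { return 'hi'; };

var viaNew = new Parent();                       // constructor body runs
var viaCreate = Object.create(Parent.prototype); // only the prototype link is made

console.log(viaNew.initialized);    // true
console.log(viaCreate.initialized); // undefined
console.log(viaCreate.greet());     // 'hi' -- the chain still works
```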

Secondly, we can abstract away some of the details for ourselves by making a sugar function to create a child class.

function createChildClass(Child, Parent) {
    // What we just talked about.
    Child.prototype = Object.create(Parent.prototype);
    // Now let's add a reference to the parent class's prototype property to the
    // Child class for easy referencing.
    Child._parent = Parent.prototype;
    // Overwrite the constructor property as before.
    Child.prototype.constructor = Child;
}


function ChildClass(x, y, z) {
    // Use our _parent property defined as a property of ChildClass. Save some
    // characters and time. Notice because _parent refers to the prototype
    // property of the constructor and not the constructor of the class itself,
    // we have to explicitly look up the constructor like so.
    ChildClass._parent.constructor.call(this, x, y);
    this.z = z;
}

// This makes the magic happen.
createChildClass(ChildClass, ParentClass);

// Extend the prototype property...
ChildClass.prototype.method = function (/* args */) {
    // Do stuff, then call the parent method the same way:
    ChildClass._parent.method.call(this /*, args */);
};
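Putting the helper to work end to end (Animal/Dog are hypothetical names, just for illustration):

```javascript
function createChildClass(Child, Parent) {
    Child.prototype = Object.create(Parent.prototype);
    Child._parent = Parent.prototype;
    Child.prototype.constructor = Child;
}

function Animal(name) { this.name = name; }
Animal.prototype.describe = function () { return this.name; };

function Dog(name, breed) {
    // Call the parent constructor through the _parent shortcut.
    Dog._parent.constructor.call(this, name);
    this.breed = breed;
}
createChildClass(Dog, Animal);

// Override a parent method, delegating to the parent's version.
Dog.prototype.describe = function () {
    return Dog._parent.describe.call(this) + ' the ' + this.breed;
};

var d = new Dog('Rex', 'beagle');
console.log(d.describe());        // 'Rex the beagle'
console.log(d instanceof Animal); // true
```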

"No, Your Way Can't Beat Up My Way"

Well, maybe. My Way will outrun yours, though. There are a gazillion features you could implement from here, yes, but the thing is, any extension or tweak from here is almost definitely going to perform significantly worse (aside from adding static private members, which is easy and very performant via a closure around the methods defined on a prototype property). Now, performance might not be a huge deal depending on your project, say if you're not instantiating classes very often. If that's the case, there are definitely some excellent features you can add that will help prevent bugs and abstract away even more of the implementation details. But if you're looking for performance, and ECMAScript 6 classes aren't yet implemented, my Way is Best ;-).
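For reference, the static-private-member trick mentioned above is just a closure wrapped around the class definition (a minimal sketch, with a hypothetical Counter class):

```javascript
var Counter = (function () {
    var instances = 0; // 'static private': shared by the class, invisible outside

    function Counter() { instances += 1; }
    Counter.prototype.howMany = function () { return instances; };

    return Counter;
}());

var a = new Counter();
var b = new Counter();
console.log(b.howMany());      // 2
console.log(typeof instances); // 'undefined' -- nothing leaked out of the closure
```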