A resource is a piece of data. It makes the semantic interoperability and standardization of data and web services possible.

It's like a model in a traditional web MVC framework, but more general. Resources can be:

  • database records
  • remote API JSON responses
  • files on the file system
  • config JSON
  • UI data such as x/y position
  • etc.

You may wonder why a resource is this abstract. Why not just make it like an MVC model?

The reason is that the way we've used models in the past (ActiveRecord in Rails, for example) is only part of the solution. If you look into the code for great libraries such as Fog and Chef, you realize that they use models as well, but in a totally different way. They ended up reimplementing a version of ActiveRecord in a new context (remote API resources for Fog; installed packages and other operating system config/data for Chef).

This has powerful implications. If you unify all of those different use cases, you gain the ability to query anything, anywhere, using a standard API across all the different areas.

What this means is astounding. You have one simple abstraction over all data.

You can transfer your knowledge of data models to completely new areas. And you can query any of this data using a robust and extensible query language in JavaScript.



Adapters are an abstraction over remote services and databases. This includes pretty much everything you can think of (and if there's a case this doesn't handle, definitely point it out):

  • Databases like MySQL, MongoDB, Cassandra, Redis, etc.
  • Remote services like Facebook, Twitter, and other things with API's (REST or not)
  • Things without APIs that you can give an API, such as web crawling
  • Operating system resources like files, processes, installed packages, etc.

They make it possible to have a standard interface to any data, anywhere.

You achieve one standard query API by having adapters.

To implement an adapter, simply implement the exec method. If you do this you can perform all the standard query actions (create/read/update/delete), and the resources your adapter abstracts become usable just like any other resource (like a model in a traditional MVC framework such as Rails).

Here's how the exec method looks for some custom adapter:

var adapter = require('tower-adapter');

adapter('custom').exec = function(query, fn){
  // interpret the query, perform the action, then call fn(err, result)
};


All it takes is a query object, and a callback function (for handling async).

It is up to your specific adapter to determine how to handle the query object. To be brief (more in the query section), the query object has the same basic stuff you'd find in any other database query system like SQL, MongoDB, etc.:

  • query.selects: an array of all the tables/collections/keystores (i.e. resources) used in the query
  • query.constraints: an array of <left> <operator> <right> statements like likeCount >= 10 (as objects, so they're easy to manipulate)
  • query.paging: the limit and offset values specified, if any
  • query.sorting: an array of the different sorting properties/directions

All of that is described in depth in the query section. For now just know that to implement a custom adapter, you simply implement the exec function, which means taking those four query properties and converting them into the format specific to some database or service.
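To make the exec contract concrete, here is a toy in-memory adapter that interprets a query object shaped like the four properties above. This is an illustrative sketch, not Tower's implementation; names like `ops` and the in-memory `db` are invented for the example.

```javascript
// A toy in-memory adapter: take a query object and a callback,
// filter records by the query's constraints, and call back with the matches.
var db = { posts: [ { id: 1, likeCount: 25 }, { id: 2, likeCount: 3 } ] };

// Map constraint operators to predicate functions.
var ops = {
  '>=': function(a, b){ return a >= b; },
  '==': function(a, b){ return a == b; }
};

function exec(query, fn) {
  var records = db[query.selects[0].resource] || [];

  var matches = records.filter(function(record){
    return query.constraints.every(function(c){
      return ops[c.operator](record[c.left.attr], c.right.value);
    });
  });

  fn(null, matches);
}

exec({
  selects: [{ resource: 'posts' }],
  constraints: [{ left: { attr: 'likeCount' }, operator: '>=', right: { value: 10 } }]
}, function(err, records){
  console.log(records.length); // 1
});
```

A real adapter would do the same translation, only into SQL, an HTTP request, or whatever its backend speaks.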

Adapters and Streams

All adapter actions are implemented as Node.js-compliant streams. Actions are executed from a query. So when you create/read/update/delete records, you can pipe the result to the input of another query (or query constraint), or build a complex join-like query out of multiple sub-queries. Essentially, data becomes part of the Node.js stream ecosystem. It also works like this in the browser.

Adapters are tiny

Once you start integrating data from dozens, hundreds, or even thousands of data sources, size matters. Adapters are boiled down to the bare essentials, so you could load dozens of adapters onto the client if desired without really worrying about how that will impact performance.

Implementing a REST adapter

Here's how the exec method looks for a custom REST adapter (which saves client-side records through AJAX to your server, and allows basic searching):

// Map of query actions to HTTP methods.
var methods = {
  find: 'GET',
  create: 'POST',
  update: 'PUT',
  remove: 'DELETE'
};

adapter('rest').exec = function(query, fn){
  var name = query.selects[0].resource;
  var method = methods[query.type];
  var params = serializeParams(query);

  // using jQuery's $.ajax to make the request
  $.ajax({
    url: '/api/' + name,
    dataType: 'json',
    type: method,
    data: params,
    success: function(data){
      fn(null, data);
    },
    error: function(data){
      fn(data);
    }
  });
};

// Convert query constraints into query parameters.
function serializeParams(query) {
  var constraints = query.constraints;
  var params = {};

  constraints.forEach(function(constraint){
    // params['likeCount'] = 10;
    params[constraint.left.attr] = constraint.right.value;
  });

  return params;
}
This is all it takes to hook up your resources to a backend with full search/sorting/pagination.

You can implement remote service and database adapters just as easily.

Implementing a database adapter

This is just like implementing the REST adapter. All you do is implement the exec function and convert the query object into the database-specific format.

Here is a high-level example of how to create a MySQL adapter:

var adapter = require('tower-adapter');
var mysql = require('mysql');

adapter('mysql').exec = function(query, fn){
  var table = query.selects[0].resource;
  var constraint = query.constraints[0];
  var condition = [constraint.left.attr, constraint.operator, constraint.right.value].join(' ');

  var statement = [
    'SELECT * FROM ' + table, // SELECT * FROM posts
    'WHERE ' + condition      // WHERE likeCount >= 10
  ].join(' ');

  mysql.execute(statement, fn);
};

Tower has a simple MySQL adapter that shows how to convert a query object into a SQL statement (thanks to the squel repo). Have fun saving and querying any data in MySQL.
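As a sketch of where a fuller adapter would go next, here is a hypothetical SQL builder that also folds in query.sorting and query.paging. The function name toSQL and the object shapes are illustrative, based on the query properties described in this guide, not the real tower-mysql-adapter code.

```javascript
// Illustrative only: build a SQL string from a query object with the
// selects/constraints/sorting/paging shape described in the query section.
function toSQL(query) {
  var parts = ['SELECT * FROM ' + query.selects[0].resource];

  if (query.constraints.length) {
    parts.push('WHERE ' + query.constraints.map(function(c){
      return c.left.attr + ' ' + c.operator + ' ' + c.right.value;
    }).join(' AND '));
  }

  if (query.sorting.length) {
    parts.push('ORDER BY ' + query.sorting.map(function(s){
      return s.attr + ' ' + s.direction.toUpperCase();
    }).join(', '));
  }

  if (query.paging.limit != null) parts.push('LIMIT ' + query.paging.limit);
  if (query.paging.offset != null) parts.push('OFFSET ' + query.paging.offset);

  return parts.join(' ');
}

var sql = toSQL({
  selects: [{ resource: 'posts' }],
  constraints: [{ left: { attr: 'likeCount' }, operator: '>=', right: { value: 10 } }],
  sorting: [{ attr: 'createdAt', direction: 'desc' }],
  paging: { limit: 10, offset: 20 }
});
console.log(sql);
// SELECT * FROM posts WHERE likeCount >= 10 ORDER BY createdAt DESC LIMIT 10 OFFSET 20
```

A production adapter would also need to escape identifiers and parameterize values, which a library like squel handles for you.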

To REALLY make a database adapter robust, dig into the database's documentation and implement all of its features. Please finish and open-source an adapter! You will solve the problem for everyone, and you will never have to deal with it again.

Implementing a remote service adapter

No guides on this yet. You can make a remote service queryable just like a database. For now, see the EC2 adapter, which is the most complete example.


Queries are one of the most powerful things in Tower. They are the interface between resources and adapters, making it possible to find/create/update/remove records across adapters using a standard syntax.

Tower's query API is very similar to most SQL-like query languages. As it turns out, even NoSQL databases like Cassandra and Neo4j have SQL-like query languages. The reason is that when you start talking about resources with attributes, and relationships between resources, it all boils down to set theory, or more specifically graph theory and relational algebra. I'm not sure of the formal relationship between these different areas of mathematics, but they are all required areas of study when building a query language.

In the most general sense, a query is a set of constraints applied to a graph, where the graph is all of the resources and attributes, and the constraints are linear inequalities such as posts.likeCount >= 10. The query analyzer then builds a query plan: the most optimal way to traverse the graph of resources and attributes (across adapters) given those constraints.

This is what Tower's query engine does. You build a query, which gets compiled to a "topology" (the most optimal map of how to traverse your graph of resources across adapters), and then the queries are performed on the adapters. You get back the final result. You don't need to worry at all about the database-specific implementations.

Query API

This is the basic API for the query object:

var query = require('tower-query');


It is very similar to a SQL query.

There are 4 main properties on the query object:

  • query.selects: an array of all the tables/collections/keystores (i.e. resources) used in the query
  • query.constraints: an array of <left> <operator> <right> statements like likeCount >= 10 (as objects, so they're easy to manipulate)
  • query.paging: the limit and offset values specified, if any
  • query.sorting: an array of the different sorting properties/directions

These properties store the parameters passed in through the query DSL.

Queries are tied to both resources and adapters.

Queries and Adapters

The query module exports a use method, to which adapters are passed. By telling the query to "use adapters", you tell the query system what data it has access to.

When you define an adapter, use gets called automatically for you. You can also tell a specific query instance which adapters to use in the same way (so you can say "this query only has access to this specific adapter", for example).
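The idea can be sketched in a few lines. This is a toy model of "a query keeps a list of the adapters it may use", not tower-query's actual code; the Query constructor and adapter object here are invented for illustration.

```javascript
// Toy model: each query instance tracks which adapters it is allowed to use.
function Query() { this.adapters = []; }

Query.prototype.use = function(adapter){
  this.adapters.push(adapter);
  return this; // chainable
};

var restAdapter = { name: 'rest' };   // stand-in for a real adapter object
var q = new Query().use(restAdapter); // this query only sees the rest adapter

console.log(q.adapters.length); // 1
```

A global use would do the same thing on a shared default list, which is why defining an adapter can register it for all queries automatically.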

You can access the query object from the adapter as well. We're moving to a point where you should be able to do this on all adapters:

var facebook = require('facebook-adapter');


Basically, that just delegates to the following (which is possible now):

var facebook = require('facebook-adapter');


By having that simpler version, if you build a REST API for your startup or whatever, you will be able to have a simple/clean API for free, with a completely robust query API. You'd simply say "include our script tag on your site", and now customers are using your super lean, lightweight, robust JavaScript API.

Queries and Resources

You can also access the query object from resources:

var resource = require('tower-resource');


This just adds the query object, basically just this:

var query = require('tower-query');


Queries and Relations

All of the theory has been fleshed out on how to robustly implement a generic query executor that works across all different types of databases (mysql, mongodb, cassandra, neo4j, etc.) and remote services (facebook, twitter, etc.). And between adapters, such as "fetch all facebook posts for users in our database who have signed up in the past 2 weeks" or whatever. Cross-adapter queries mean you will be able to query anything, anywhere. This means that data objects everywhere can be queried and combined in new ways.

However, we haven't quite finished this yet; we decided to release it early.

The math

Basically, we have a graph G with vertices V and edges E:

G = (V, E)

The vertices are the adapters A, the resources R, and the resource attributes ("fields", F). So, they are subsets of V:

A, R, F ⊂ V

and the adapters are elements of A:

adapter('mongodb'), adapter('cassandra'), ... ∈ A

Also, R is a collection of subsets of A (I don't know quite how to represent this yet).

Then F is the same: a collection of subsets grouped by R.

So then it comes down to this: a query is just a set of edge constraints on V.

Simple constraints are when a property is set to a primitive value such as a string/number/etc. I'm not sure those need to be treated as nodes, so I'm ignoring them for now (that is, the database will natively handle all that stuff).

Complex constraints are basically joins, where the value points to a resource field in F. That is, a join is an edge between 2 or more resource fields, Fi to Fn.

This really simplifies everything. A bunch of joins just means there's a bunch of edges between members of F; call them J.

J ⊂ E

That means (at least for reasonably complex queries), you just have to solve the minimum-cut/maximum-flow problem for the network J.

Somehow you can group J in an order based on the fact that its edges are connected to F, which are a subset of R, which is a subset of A.

If you think of it in terms of 3 adapters, facebook, twitter, and mongodb, and they all have their own user resource, all with a firstName attribute and maybe a couple more attributes each, and several of their attributes are joined together, how do you find the best way to do the query?

One issue is that there can't be any directed cycles. Realizing this simplified everything, because in my head I kept thinking of that case and not seeing a solution, but from what I've read it must be a DAG (directed acyclic graph): no cycles! We can come back to handling the cyclic case later (maybe the algorithm can just arbitrarily decide that one goes before the other).

So to figure out the best way to do the query, you have to solve the network flow problem on J ⊂ E. That's pretty much it.


Those are some notes on how to approach solving the problem. It's totally possible, and should be completed soon. Would love any input.

So, the query system currently supports being infinitely extensible on individual adapters (which should be useful for you now), and in the works is making queries work cross-adapter, with relations.


Tower's template component was built in response to the hundreds of other template engines that fall short in one or more key areas:

  • performance
  • extensibility
  • overall file size of the implementation
  • simplicity
  • client/server compatibility

All an HTML template engine needs to do is map data to the DOM.

At the lowest level, this is how it's used:

var template = require('tower-template');
var el = document.querySelector('#todos');
var fn = template(el);
fn({ some: 'data' });

A template function fn is built by passing a DOM node to template.

var fn = template(el);

With that function you can do two things:

  1. You can apply new content to it, which will update the el you passed in to build the template.
  2. Or you can call fn.clone(content), which will clone the original el and apply the new content to it. This is useful for creating new list items, for example.

Template Compiler Deep Dive

The template compilation process is super simple. This is the process at a high level:

  nodeFn(scope, node); // node is the cached item from building the template, so `document.body`
    scope = directivesFn(scope, node); // process directives for this node, returns new/old scope
      scope = scopeFn(scope); // find the correct scope for this node, from its directives
    eachFn(scope, node.childNodes);
      nodeFn(scope, childNode); // recurse, where `scope` might be a new one from above

So, for each node, it first processes the directives (directive.exec), and each directive.exec returns either the current scope or a new scope. The end result of processing directives is a scope (new or original) that will then be used to process child nodes. It then iterates through the child nodes and repeats the whole process.
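The recursion above can be sketched with plain objects standing in for DOM nodes and directives. This is a toy model of the compile loop, not tower-template's code; the node and directive shapes are invented for the example.

```javascript
// Toy model: each "node" has directives and childNodes. A directive's exec
// receives (scope, node) and returns the scope to use for the subtree below.
function nodeFn(scope, node) {
  node.directives.forEach(function(directive){
    scope = directive.exec(scope, node); // may return a new scope
  });
  node.childNodes.forEach(function(child){
    nodeFn(scope, child); // recurse with whatever scope the directives chose
  });
}

var log = [];
var tree = {
  // this directive swaps in a new scope, like data-list would
  directives: [{ exec: function(scope, node){ return { name: 'inner' }; } }],
  childNodes: [{
    // this child sees the scope its parent's directive returned
    directives: [{ exec: function(scope, node){ log.push(scope.name); return scope; } }],
    childNodes: []
  }]
};

nodeFn({ name: 'root' }, tree);
console.log(log); // [ 'inner' ]
```

The real compiler caches these functions per node, so executing a template is just re-running the tree of functions with a new scope.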

This way, when you execute the template function with a scope:

var template = require('tower-template');
var fn = template(el);

it basically just iterates through a bunch of functions, passing scopes to directives which then apply the scope data to the DOM.



<ul class="nav nav-tabs">
  <li data-list="item in nav" data-class="active">
    <a href="#"></a>
  </li>
</ul>


Data for the DOM.

When you describe the UI, you talk about the design and the content. Content is the data shown to the user.

Most frameworks allow you to bind arbitrary data to the DOM. Angular allows you to bind any plain-old JavaScript object. Ember requires you to use their own observable objects. Knockout makes you wrap your object to be "observable". In the end, they all do this sort of thing because browsers do not yet support listening for property changes in a reliable way.

But if you really think about it, you do not need to bind plain-old JavaScript objects to the DOM. Nor do all the objects in the framework need to be observable. What you need is a clear way to show the user content, arbitrary data that is specifically meant for the DOM.

You need an API to take arbitrary data (whether it's your model, some random config properties, hardcoded menu items, the result from a remote API call, whatever), and say "expose this to the DOM". Here's how you do that:

var content = require('tower-content');

content('main')
  .attr('random', 'string', 'config!')
  .attr('items', 'array', [ 'Home', 'About' ]);

Then you tell the DOM how to find that content:

<body data-content="main">

That's it! You can put any arbitrary data into content attributes.

Why is this preferable to the other frameworks' approaches? Two big reasons:

  1. You don't add heavy code to all of your framework objects for handling observing. This makes your code lightweight and super fast.
  2. You are being explicit about the content the user sees. This distinguishes where the data in your app interfaces with the DOM, so you know where to look for rendering performance issues without having to learn how the entire framework manages their observer behavior.

The parts of content

The content API has 2 methods:

  • attr for defining attributes and default values.
  • action for defining functions that should execute when a user clicks or performs some action.

So you define attr for every property you want to expose to the DOM, and action for every function you want to run when the user does something. That's it!

You might wonder if that's really all you need (attr and action): "what about arbitrary functions?" or "what about this one case?" You might be right; there may be a few more cases to cover (though I doubt it). But so far it doesn't seem like you'll need anything more than this. We'll figure out the exact best way as we go, but for now this is a super lean and simple approach, and it seems to cover pretty much every use case in practice.

Most use cases are covered because, other than binding data to the DOM, you want to "do stuff" when the user clicks something (emits an event). So the bulk of your code is actually in those event handlers (call them "actions"), which are independent.
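To illustrate the two-method surface, here is a toy content registry with just attr and action. Tower's real tower-content is richer; the Content constructor and its internal storage here are invented for the example.

```javascript
// Toy content object: attrs hold data exposed to the DOM,
// actions hold functions run in response to user events.
function Content(name) {
  this.name = name;
  this.attrs = {};
  this.actions = {};
}

Content.prototype.attr = function(name, type, value){
  this.attrs[name] = { type: type, value: value };
  return this; // chainable, like the DSL above
};

Content.prototype.action = function(name, fn){
  this.actions[name] = fn;
  return this;
};

var main = new Content('main')
  .attr('items', 'array', [ 'Home', 'About' ])
  .action('save', function(){ return 'saved!'; });

console.log(main.attrs.items.value); // [ 'Home', 'About' ]
console.log(main.actions.save());    // saved!
```

Everything the template layer needs, data in and event handlers out, fits in those two maps.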


Directives are the API to the DOM. In a perfect world, they're the only place your app code touches the DOM.

One developer called them domain-specific extensions to HTML.

Creating a custom directive

You know how those old-school websites sometimes have events posted where the event expired like 3 weeks ago, but it still says "Come to our event this Saturday!"? I feel bad, because you know it's going to take them a bunch of time calling and emailing people to "update their site" to remove that event.

Never again.

Here is how you can solve that with directives and never have to worry about stuff like that in the future: automatically remove the event when the event passes.

<div data-expires="june 30, 2013">
  Come to our BBQ on Sunday, June 30th!
</div>

So we just added an attribute to the DOM that we just made up, data-expires. Now we need to define a directive for it:

var directive = require('tower-directive');

directive('data-expires', function(scope, el, attr){
  var date = new Date(attr.value);
  var now = new Date();

  // if today is on or past the day, remove the element
  if (date <= now) $(el).remove();
});

It is as simple as that. Now when that DOM node is encountered and the directives are parsed for it (this all happens when you run a DOM node through a template), it will execute that directive, passing in the scope (the current tower-content instance, which has a bunch of properties and methods on it), the actual DOM element el, and attr, which has the value the element attribute was set to, parsed using tower-expression. All you really need to know is that in this directive function, you get the element that had that directive on it, and some data you can use (or ignore) to manipulate the element.



Once you start building more complex templates which have custom JavaScript, and maybe some configuration (like pagers, modal windows, form fields, etc.), custom elements are perfect for this.

In Tower, an "element" is:

a template + some JavaScript

That's it.

So take a pager for example. Rather than calling it a "pager view" or something that has a "pagination controller", just think of creating a "pager" which is just some HTML with a JavaScript API. Here's how that might look.

The Element's HTML Template

First you define the HTML template. This HTML is straight from the Twitter Bootstrap pager component:

<ul class="pager">
  <li><a href="#" on-click="prev()">Prev</a></li>
  <li><a href="#" on-click="next()">Next</a></li>
</ul>

So in there we just added the standard event directive on-click, which executes a function (prev() or next()). In Tower we call the 2 functions prev and next "actions", since most of the time, from the DOM's perspective, when you execute a function it's in response to something the user did, so you can think of it as "a user performing an action" (also, action is used all throughout the core modules, which makes for a super consistent and clean API). In addition to these "actions", you can also define "attributes" (such as configuration, or maybe to allow customizing the Prev and Next labels), but for this simple case we'll just focus on actions.

An element, like a template, gets its actions and attributes from content. In the DOM there is (by convention) a root/global content object, and then you can create new nested content objects (more on this in the content section). When you create nested content like this, it's called creating a new "content scope", similar to how in JavaScript, each function creates a new "scope".

Why is this relevant for an element?

Elements create a new content scope automatically for you, so you just need to know that when you define actions and attributes on your custom element, all that's actually happening is that those methods are being delegated to a content object specific to that element. You can see that in the source code.
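The "content scope" idea works like lexical scoping in JavaScript: a lookup tries the local scope first and falls back to the parent. Here is a toy sketch of that chain (not Tower's implementation; the Scope constructor is invented for the example):

```javascript
// Toy content-scope lookup: a child scope falls back to its parent,
// much like variable lookup in nested JavaScript functions.
function Scope(data, parent) {
  this.data = data;
  this.parent = parent;
}

Scope.prototype.get = function(name){
  if (name in this.data) return this.data[name];
  return this.parent ? this.parent.get(name) : undefined;
};

var root = new Scope({ user: 'Joe' });           // root/global content
var pagerScope = new Scope({ prevLabel: '<' }, root); // element's own scope

console.log(pagerScope.get('prevLabel')); // <
console.log(pagerScope.get('user'));      // Joe (falls back to root)
```

So an element's actions and attributes live in its own scope, while anything it doesn't define still resolves from the surrounding content.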

The Element's JavaScript

Here's how we'd define the pager element that works with the above HTML template:

var element = require('tower-element');
var html = require('./template'); // the template HTML from above

element('pager')
  .action('prev', function(pager){
    // do something to the pager, pager.el, or pager.content
  })
  .action('next', function(pager){
    // ...
  });


Then we can instantiate the pager like this:

var pager = element('pager').init();
var el = pager.render();

The actual DOM node is stored on the element instance as well (after .render() has been called):

pager.el

Using Attributes on Elements

Now what if we want to be able to specify the labels for Prev and Next?


We just need to turn the hardcoded strings in the template into variables (like mustache/handlebars), and add those attributes to the element('pager') DSL (you can set default values too):

<ul class="pager">
  <li><a href="#" on-click="prev()">{{prevLabel}}</a></li>
  <li><a href="#" on-click="next()">{{nextLabel}}</a></li>
</ul>

element('pager')
  .attr('prevLabel', 'string', 'Prev')
  .attr('nextLabel', 'string', 'Next')
  .action('prev', function(pager){
    // ...
  })
  .action('next', function(pager){
    // ...
  });

So now if you instantiated and rendered your pager, it would be rendered like it was before. But you can also customize the labels this time:

var pager = element('pager').init();
document.body.appendChild(pager.render({ prevLabel: '<', nextLabel: '>' }));

You can also pass the attribute values in on init, in case you want to set up defaults. Either way, using .init or .render works fine the first time around:

var pager = element('pager').init({ prevLabel: '<', nextLabel: '>' });

Building hardcore UI elements

Ideally, this element API will allow for creating super robust/complex UI elements, such as forms, datagrids, and things like Pinterest's scroller (Airbnb open-sourced infinity, which is similar).

However, if you do end up creating a complex UI component like these, I recommend not building all of the logic into the element DSL. Instead, try to build it so it can be used independently of Tower, and then wrap it in a Tower element and release that repo too, so your component can easily hook into Tower's templating system. This way, if someone comes along wanting to use your component with something like Ember or Angular, they can make it work there too.

This also goes to show that it should be easy to integrate any external UI thingy into Tower so it can be used with this standard API.

One last little note: I'd say that most of the time you won't need to build custom elements; you can instead just use plain HTML templates and the built-in directives. But if the time comes when that's not enough, this is here to make life easy.

Expression Engine

Within Tower sits an incredibly powerful expression engine that powers every binding. We felt we could create an expressive language that sits in element attributes.

Let's get started with a simple example:

<div data-text=""></div>

That's cool, right? Well, it's just an empty text binding, nothing to it. So let's add a binding to a simple attribute.

<div data-text="user"></div>

This would effectively render the following, if user were equal to Joe:

<div data-text="user">Joe</div>

This short example just showed you the simplest kind of expression: a string expression. Expressions do get more complicated than that, but the premise stays the same.
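Conceptually, a string expression is just a lookup into the current content, applied to the element. Here is a simplified, DOM-free model of what a data-text binding does (the dataText function and mock element are invented for illustration; the real directive goes through the full lexer/parser pipeline described below):

```javascript
// Simplified model of a data-text binding: evaluate the expression
// against the content object and write the result to the element's text.
function dataText(content, el, expression) {
  el.textContent = expression === '' ? '' : String(content[expression]);
}

var el = { textContent: '' }; // mock element, no DOM needed
dataText({ user: 'Joe' }, el, 'user');

console.log(el.textContent); // Joe
```

More complex expressions replace the simple property lookup with a parsed expression tree, but the "evaluate, then apply to the element" shape stays the same.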

The expression engine is hand-built. You'll probably say, "Yeah, sure, just like anything else that's programmed. All by hand." No, no, no. When I say "by hand", I mean the lexer and parser, and everything else related to the expression engine. We don't use some sort of magical generator that produces extremely huge lexers and parsers. The expression engine is small, soon-to-be modular, and fast. This makes it viable to run on the client side as well as the server. And yes, it has no dependencies on the DOM, which means it can be used server-side.

Let's get into a more complicated example, shall we?

<div data-list="user in users [buffer: 2, max: 10]"></div>

Wow. What's that?

That's a "foreach" or "each" expression, with arguments. In this case, we want to create an additional 2-element buffer in our list, and only have a maximum of 10 elements. Super simple and concise.

The string "user in users" is probably familiar. Many languages and some frameworks have this syntax, or something similar.

Want more?

<div data-list="user in users [buffer: 2, max: 10] | filter: startsWith(a) | sort: reverse()"></div>

Deep Dive Into Expressions

Now that you know what expressions are, let's get a better picture of how the engine works.

The expression engine has the following stages:

Input -> Lexer -> Parser -> Search

tower-expression/index.js exposes an expression function that accepts a string: an expression. It handles the heavy work of lexing and parsing the string.

We've made an extensible Lexer that's not specific to the expression system.

var lex = new Lexer()
  .def('token1', /^\[$/)
  .string('random string to lex');
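To show the spirit of a def/string lexer, here is a toy implementation. This is not the real tower-expression Lexer, just a self-contained sketch of the same idea: def registers token patterns, string scans the input into tokens.

```javascript
// Toy lexer: try each registered rule at the start of the remaining input;
// on a match, emit a token and advance; otherwise skip one character.
function Lexer() { this.rules = []; }

Lexer.prototype.def = function(name, pattern){
  this.rules.push({ name: name, pattern: pattern });
  return this;
};

Lexer.prototype.string = function(input){
  var tokens = [];
  while (input.length) {
    var matched = false;
    for (var i = 0; i < this.rules.length; i++) {
      var m = input.match(this.rules[i].pattern);
      if (m) {
        tokens.push({ type: this.rules[i].name, value: m[0] });
        input = input.slice(m[0].length);
        matched = true;
        break;
      }
    }
    if (!matched) input = input.slice(1); // skip unrecognized characters
  }
  return tokens;
};

var tokens = new Lexer()
  .def('word', /^[a-z]+/)
  .def('number', /^[0-9]+/)
  .string('user in users 10');

console.log(tokens.map(function(t){ return t.type; }));
// [ 'word', 'word', 'word', 'number' ]
```

A parser then consumes that token stream to build the expression tree that the template directives evaluate.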


The idea of cookbooks came out of a desire to use Chef, the great automation library for Ruby, in JavaScript. So every layer in your app -- the client, the server, and also the command line -- is just plain JavaScript.

Cookbooks are an extensible system for abstracting away common tasks and commands. This includes components such as:

  • generators
  • build scripts (like things you would do with Grunt/Yeoman)
  • making an adapter usable from the command line (see tower-ec2-cookbook)
  • abstracting away install scripts (such as installing node.js, git, or mongodb on EC2, like Chef cookbooks)
  • building aliases to commands (such as simplifying the API for ssh-ing to a remote server, or normalizing how you enter a database console)

Cookbooks are super easy to write. A cookbook is just one index.js that exports "actions" (such as create, remove, install, etc.). Because of Tower's CLI abstraction, you can execute these cookbook actions ("recipes") from the command line automatically. Super powerful.

How a cookbook works

One of the simplest cookbooks is just a generator (like one you'd find in Rails). Here is the tower-component-cookbook, which generates a new module (with package.json for npm and component.json) with this command:

$ tower create component my-component

Here's what happens when we execute that command:

  1. Goes to tower-cli (since it is just using the tower executable), and figures out that you called the action create on the cookbook component.
  2. Finds the cookbook component.
  3. Calls the method create on the cookbook component, passing in a new recipe object (which just has some helpful methods, like you'd find in a generator) and the command line arguments.
  4. The action create then can parse the arguments (using any one of the many CLI option parsers for node; it's agnostic) and do whatever it needs to, in this case generating a JavaScript module.
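The dispatch in those steps can be sketched in a few lines. This is a toy dispatcher, not tower-cli's code; the cookbooks registry and the run function are invented for illustration.

```javascript
// Toy dispatcher for `tower <action> <object>`: look up the cookbook by
// object name, then call its exported action with a recipe and raw args.
var cookbooks = {
  component: {
    create: function(recipe, args) {
      // a real recipe would generate files; here we just report the name
      return 'created component ' + args[args.length - 1];
    }
  }
};

function run(args) {
  var action = args[2];   // e.g. 'create'
  var object = args[3];   // e.g. 'component'
  var cookbook = cookbooks[object];
  if (!cookbook || !cookbook[action]) throw new Error('unknown command');
  return cookbook[action]({ /* recipe helpers would go here */ }, args);
}

console.log(run(['node', 'tower', 'create', 'component', 'my-component']));
// created component my-component
```

Because the mapping is just action + object name, any cookbook you install extends the CLI without tower-cli knowing about it in advance.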

This is the structure of a cookbook action:

exports.create = function(recipe, args, fn){
  // fn (callback) is an optional third param
};

You have full control over how the arguments are parsed and what happens. Here is an example of parsing arguments with commander:

exports.create = function(recipe, args, fn){
  var projectName = args[4];

  var options = require('commander')
    .option('-o, --output-directory [value]', 'Output directory', process.cwd())
    .option('-b --bin [value]', 'include executable', false)
    .option('--component [value]', 'Add component.json', false)
    .option('--package [value]', 'Add package.json', true)
    .option('--travis [value]', 'Add travis.yml', false)
    .option('--namespace [value]', 'Namespace for component');

  // ...
};

Take a look at the component-cookbook source for the robust implementation.

Tower's Command Line Interface (CLI)

$ tower <action> <object> [options]

Tower's command line was built to be super fast. When you enter the tower console, it's instant (on the order of 10ms):

$ tower console

Another big goal of the CLI was to make it infinitely extensible (while avoiding the problem of people swooping in on names and whatnot). To do this, we standardized the structure of command line arguments:

$ tower <action> <object> [options]

You can create your own commands by creating a cookbook (see the cookbook section).