
Customizing and whitelisting SASL authentication mechanisms in Strophe.js

| categories: xmpp, strophe.js, foss, sasl

Introduction

If you've decided to read this fairly technical blogpost, then you probably have at least a rough idea what SASL is about and why one would want to create custom SASL auth mechanisms or whitelist the supported mechanisms.

I'll therefore provide just a very brief recap of the topics involved:

The Simple Authentication and Security Layer or SASL RFC 4422 is a framework for adding authentication support to connection-based protocols.

It provides an abstraction layer for authentication mechanisms, so that protocols such as XMPP don't have to deal with the intricacies and complexities of supporting multiple authentication mechanisms.

It therefore makes auth mechanisms pluggable (if they are SASL compatible).

Strophe.js has supported SASL for a long time, but it didn't provide an easy way to add custom SASL mechanisms, or to whitelist the mechanisms to be used.

Until now... or rather, since the 1.2.9 release.

Creating a custom SASL auth mechanism

Creating a custom SASL authentication mechanism is fairly simple.

You can glean what's required by simply looking at how the default mechanisms are created.

See for example how the SASLPlain mechanism is defined.

And look at the SASLMechanism prototype to see the interface that a mechanism supports.

Pretty much, it boils down to creating a constructor, setting its prototype to a Strophe.SASLMechanism instance, and providing its name, a boolean indicating whether the client should proactively respond without an initial server challenge, and an integer specifying its priority amongst the supported mechanisms.

The default mechanisms and their respective priorities are:

  • EXTERNAL - 60
  • OAUTHBEARER - 50
  • SCRAM-SHA1 - 40
  • DIGEST-MD5 - 30
  • PLAIN - 20
  • ANONYMOUS - 10

Then it's a matter of implementing onChallenge and any of the other methods provided by the SASLMechanism prototype.

onChallenge is called once the server challenges the client to authenticate itself or proactively if the mechanism requires that the client initiates authentication (configured with the isClientFirst parameter of Strophe.SASLMechanism).

So, let's create a fictional auth mechanism called SASL-FOO, which works similarly to SASL-PLAIN, except that the password is "encrypted" with double ROT13 encoding (hint: this is a joke).

We would then create the authentication mechanism like so:

Strophe.SASLFoo = function() {};
Strophe.SASLFoo.prototype = new Strophe.SASLMechanism("FOO", true, 60);

Strophe.SASLFoo.prototype.onChallenge = function(connection) {
    var auth_str = connection.authzid;
    auth_str = auth_str + "\u0000";
    auth_str = auth_str + connection.authcid;
    auth_str = auth_str + "\u0000";
    auth_str = auth_str + DoubleROT13(connection.pass);
    return utils.utf16to8(auth_str);
};
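DoubleROT13 isn't part of Strophe.js; it's just the joke "cipher" of our fictional mechanism. A throwaway sketch of it could look like this (ROT13 applied twice is the identity function, so nothing is actually encrypted):

// Hypothetical helper for SASL-FOO: ROT13 applied twice simply returns the
// input unchanged. That's the joke.
function ROT13(str) {
    return str.replace(/[a-zA-Z]/g, function (c) {
        var base = (c <= 'Z') ? 65 : 97;
        return String.fromCharCode(base + (c.charCodeAt(0) - base + 13) % 26);
    });
}

function DoubleROT13(str) {
    return ROT13(ROT13(str));
}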

Whitelisting the supported SASL auth mechanisms

Now with SASL-FOO in hand, we can whitelist the supported authentication mechanisms by specifying a list of mechanisms in the options map passed in when we instantiate a new Strophe.Connection.

var service = 'https://chat.example.org/http-bind/';
var options = {
    'mechanisms': [
        Strophe.SASLFoo,
        Strophe.SASLPlain
    ]
};
var conn = new Strophe.Connection(service, options);

Bonus: Whitelisting SASL auth mechanisms in Converse.js

Due to the above changes it'll also be possible to whitelist SASL mechanisms in Converse.js (version 2.0.1 and upwards).

This is done via the connection_options configuration setting:

converse.initialize({
    connection_options: {
        'mechanisms': [
            converse.env.Strophe.SASLMD5,
            converse.env.Strophe.SASLPlain
        ]
    }
});

Strophe.js and Converse.js now support passwordless login with client certificates

| categories: converse.js, xmpp, sasl, strophe.js, foss, openfire

Introduction

Did you know that X.509 certificates, the certificates that web servers use to prove their identity during the establishment of an HTTPS connection, can also be used by a client (like your web browser) to prove its identity, and even to authenticate?

I'm talking here about so-called client certificate authentication.

Client certificate authentication is especially popular in environments with high security requirements. Client certificates can even be used to enforce 2-factor authentication, if in addition to a certificate you also require a password. That use case is however out of scope for this blog post.

With the release of Strophe.js 1.2.8, it's now possible to have passwordless login with TLS client certificates in Converse.js and any other Strophe.js-based webchat projects.

For Converse.js, you'll need at least version 2.0.0.

Here's what it looks like:

Logging in with an SSL client certificate

The technical details and background

XMPP and SASL

The XMPP logo

XMPP supports authentication with client certificates, because it uses SASL (Simple Authentication and Security Layer).

SASL provides an abstraction that decouples authentication mechanisms from application protocols.

This means that XMPP developers don't need to know about the implementation details of any authentication mechanisms, as long as they conform to SASL.

Up until version 1.2.7, Strophe.js supported the following SASL auth mechanisms: ANONYMOUS, OAUTHBEARER, SCRAM-SHA1, DIGEST-MD5 and PLAIN.

For client certificate auth, we need another SASL mechanism, namely EXTERNAL. What EXTERNAL means is that authentication happens externally, outside of the protocol layer. And this is exactly what happens in the case of client certificates, where authentication happens not in the XMPP layer, but in the SSL/TLS layer.

Strophe.js version 1.2.8 now supports SASL-EXTERNAL, which is why client certificate authentication now also works.

How do you communicate with an XMPP server from a web-browser?

There are two ways that you can communicate with an XMPP server from a web-browser (e.g. from a webchat client such as Converse.js).

  1. You can use XMLHttpRequests and BOSH, which you can think of as an XMPP-over-HTTP specification.
  2. You can use websockets.

Both of these protocols, HTTP and websocket, have secure SSL-reliant versions (HTTPS and WSS), and therefore in both cases client certificate authentication should be possible, as long as the server requests a certificate from the client.
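As an aside, Strophe.js picks the transport based on the service URL you pass to Strophe.Connection: an https:// URL means BOSH (XMPP-over-HTTP), a wss:// URL means a secure WebSocket. The hostnames, ports and paths below are placeholders for your own server:

// BOSH (XMPP-over-HTTP) over HTTPS:
var boshConnection = new Strophe.Connection('https://chat.example.org:7443/http-bind/');
// Secure WebSocket:
var wsConnection = new Strophe.Connection('wss://chat.example.org:7443/xmpp-websocket/');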

I'm going to focus on BOSH and HTTPS, since this was my use case.

The HTTPS protocol makes provision for the case where the server might request a certificate from the client.

Note

NOTE: Currently the only XMPP server that supports client certificate authentication with BOSH is Openfire, and funnily enough, only Openfire 3. In Openfire 4, they refactored the certificate handling code and broke client certificate authentication with BOSH. I've submitted a ticket for this to their tracker: https://issues.igniterealtime.org/browse/OF-1191

The authentication flow

So this is how the authentication flow works. I'll illustrate it using actual log output from Converse.js.

Note

NOTE: My XMPP server's domain is called debian, because I was running it on a Debian server and because naming things is hard. In hindsight, this wasn't a good name since it might confuse the dear reader (that means you).

2016-09-15 12:07:05.481 converse-core.js:128 Status changed to: CONNECTING

Firstly, Converse.js sends out a BOSH stanza to the XMPP server debian, to establish a new BOSH session.

2016-09-15 12:07:05.482 converse-core.js:128
    <body rid="1421604076"
          xmlns="http://jabber.org/protocol/httpbind"
          to="debian" xml:lang="en" wait="60"
          hold="1" content="text/xml; charset=utf-8"
          ver="1.6" xmpp:version="1.0"
          xmlns:xmpp="urn:xmpp:xbosh"/>
2016-09-15 12:07:06.040 bosh.js:749 XHR finished loading: POST "https://debian:7445/http-bind/"

The above stanza was sent as an XMLHttpRequest POST, and the above XML was sent as the Request Payload.

Strophe.js takes care of all this, so nothing to worry about, but sometimes digging through the internals is fun right? Right?!

2016-09-15 12:07:06.042 converse-core.js:128
    <body xmlns="http://jabber.org/protocol/httpbind"
          xmlns:stream="http://etherx.jabber.org/streams" from="debian"
          authid="fe0ee6ab" sid="fe0ee6ab" secure="true" requests="2"
          inactivity="30" polling="5" wait="60"
          hold="1" ack="1421604076" maxpause="300" ver="1.6">
        <stream:features>
            <mechanisms xmlns="urn:ietf:params:xml:ns:xmpp-sasl">
                <mechanism>EXTERNAL</mechanism>
            </mechanisms>
            <register xmlns="http://jabber.org/features/iq-register"/>
            <bind xmlns="urn:ietf:params:xml:ns:xmpp-bind"/>
            <session xmlns="urn:ietf:params:xml:ns:xmpp-session">
                <optional/>
            </session>
        </stream:features>
    </body>

So now the XMPP server, debian, has responded, and it provides a list of SASL mechanisms that it supports. In this case it only supports EXTERNAL.

Luckily our webchat client supports SASL-EXTERNAL, so it responds in turn and asks to be authenticated.

2016-09-15 12:07:06.147 converse-core.js:128
    <body rid="1421604077" xmlns="http://jabber.org/protocol/httpbind"
          sid="fe0ee6ab">
        <auth xmlns="urn:ietf:params:xml:ns:xmpp-sasl"
              mechanism="EXTERNAL">dXNlcjAxQGRlYmlhbg==</auth>
    </body>

Now here comes the tricky part. The XMPP server's BOSH servlet asks the web browser (which is establishing the HTTPS connection on our behalf) to give it the client certificate for this user.

The web browser will now prompt the user to choose the right client certificate. Once this is done, the XMPP server authenticates the user based upon this certificate.

2016-09-15 12:07:06.177 bosh.js:749 XHR finished loading: POST

2016-09-15 12:07:06.180 converse-core.js:128
    <body xmlns="http://jabber.org/protocol/httpbind" ack="1421604077">
        <success xmlns="urn:ietf:params:xml:ns:xmpp-sasl"/>
    </body>

The XMPP server responds with success and we're logged in!

How to set up client certificate authentication with Converse.js and OpenFire 3.10.3

Note

NOTE: Thanks goes out to Dennis Shtemberg from Infusion, who initially tested client certificate authentication with BOSH on Openfire and on whose notes the following is based.

1. Install Openfire 3.10.3

The XMPP logo

On Debian(-based) Linux, you can simply do the following:

wget -O openfire_3.10.3_all.deb 'http://www.igniterealtime.org/downloadServlet?filename=openfire/openfire_3.10.3_all.deb'
sudo dpkg -i openfire_3.10.3_all.deb

2. Configure Openfire's system properties

Open the admin console: http://localhost:9090/ (where localhost is the host the server is running on)

Navigate to Server > Server Manager > System Properties and add the following properties:

Property                                     Value
xmpp.client.cert.policy                      needed
xmpp.client.certificate.accept-selfsigned    true
xmpp.client.certificate.verify               true
xmpp.client.certificate.verify.chain         true
xmpp.client.certificate.verify.root          true
sasl.mechs                                   EXTERNAL

Make sure the xmpp.domain value is set to the correct host. If you're running Openfire on localhost, then you need to set it to localhost. If you're not using localhost, then replace all mention of localhost below with the xmpp.domain value.

3. Lay the groundwork for generating an SSL client certificate

First, make sure you have OpenSSL installed: aptitude install openssl

Then create a directory for certificate files: mkdir ~/certs

Now create a config file called user01.cnf (~/certs/user01.cnf) with the following contents:

[req]
x509_extensions = v3_extensions
req_extensions = v3_extensions
distinguished_name = distinguished_name

[v3_extensions]
extendedKeyUsage = clientAuth
keyUsage = digitalSignature,keyEncipherment
basicConstraints = CA:FALSE
subjectAltName = @subject_alternative_name

[subject_alternative_name]
otherName.0 = 1.3.6.1.5.5.7.8.5;UTF8:user01@localhost

[distinguished_name]
commonName = user01@localhost

The otherName.0 value under subject_alternative_name assigns the user's JID to an ASN.1 Object Identifier of "id-on-xmppAddr". The XMPP server will check this value to figure out what the JID is of the user who is trying to authenticate.

For more info on the id-on-xmppAddr attribute, read XEP-178.

4. Generate an SSL client certificate

  • Generate a self-signed, leaf SSL certificate, which will be used for client authentication.

    • Generate a private RSA key

      openssl genrsa -out user01.key 4096

    • Generate a signing request:

      openssl req -key user01.key -new -out user01.req -config user01.cnf -extensions v3_extensions

      • when prompted for a DN enter: user01@localhost
    • Generate a certificate by signing user01.req

      openssl x509 -req -days 365 -in user01.req -signkey user01.key -out user01.crt -extfile user01.cnf -extensions v3_extensions

    • Generate a PKCS12-formatted certificate file, containing the private key and the certificate. This will be the client certificate which you will log in with.

      openssl pkcs12 -export -inkey user01.key -in user01.crt -out user01.pfx -name user01

      • when prompted for export password enter: user01

5. Install the PKCS12 certificate on your local machine

Double click the pfx file and follow the steps to import it into your machine's keystore.

6. Import the x509 certificate into Openfire

sudo keytool -importcert -keystore /etc/openfire/security/truststore -alias user01 -file ~/certs/user01.crt
sudo keytool -importcert -keystore /etc/openfire/security/client.truststore -alias user01 -file ~/certs/user01.crt
sudo systemctl restart openfire

Note

NOTE: The default keystore password is "changeit"

7. Create the user associated with the SSL client certificate

Go back to Openfire admin console, navigate to Users/Groups > Create New User and create a new user.

  • Username: user01
  • Password: user01 (This is not controlled by Openfire).
  • Click Create User

8. (When using Java 1.7) Patch Openfire

When trying to log in, I received the following error:

2016.09.08 00:28:20 org.jivesoftware.util.CertificateManager - Unkown exception while validating certificate chain: Index: 0, Size: 0

Turns out the likely cause for this is the fact that I was using the outdated Java version 1.7.

At the time, I didn't know that Java was the culprit, so I patched the following code.

If you read the comments in the link above, you'll see there are two sections, with one being outcommented. I swopped out the two sections, and then recompiled Openfire.

After that, client certificate auth worked. The best way to avoid doing this is apparently to just use Java 1.8.

9. Test login with converse.js.

The converse.js logo

Now you're done with setting up Openfire and you can test logging in with Converse.js.

Download the latest version of Converse.js from the releases page.

To hide the password field (since the password won't be checked anyway), you need to open index.html in your text editor and add authentication: 'external' to the converse.initialize call, for example:
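A minimal sketch, assuming Openfire's secure BOSH endpoint is reachable on the same port as in the log output above; adjust bosh_service_url to your own setup:

converse.initialize({
    bosh_service_url: 'https://localhost:7445/http-bind/',
    authentication: 'external',
    debug: true  // optional, but handy for inspecting the SASL exchange
});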

Then open index.html in your browser.

In the converse.js login box, type the JID of the user, e.g. user01@localhost and click login.

Note

NOTE: If things go wrong, pass debug: true to converse.initialize, then open your browser's developer console and check the output. Check especially the XHR calls to http-bind. Checking the output in the Network tab can also be very helpful. There you'll see what Openfire responds to requests to its BOSH URL.

Conclusion

Client certificate authentication is a bit of a niche requirement, and doing it with BOSH/HTTP even more so.

However, I expect webchat XMPP clients to become more and more prevalent in the coming years, even on the desktop, for example when packaged with Github's Electron (an Electron version of converse.js is planned BTW, based on the fullscreen version inverse.js).

Because this functionality works thanks to SASL-EXTERNAL support being added to Strophe.js, it is available not only in Converse.js, but in all webchat clients built on Strophe.js (provided they use version 1.2.8 or higher).

Unfortunately XMPP server support is lacking, with only Openfire supporting this use case currently, and not yet (at the time of writing) in the 4.0.x branch. To see whether this gets fixed, keep an eye on the relevant ticket.


Open Source software and the expectation of free labor

| categories: foss, open-source, economics

Over the last few years of starting and then maintaining an open source project that has received a decent amount of attention, converse.js, I've noticed some interesting things about the expectations some people have towards developers who work on FOSS (free and open source software).

Predictably irrational

Book cover: Predictably irrational

People of course love to receive something for nothing. Dan Ariely, in his book "Predictably Irrational", illustrates some of the biases people have when it comes to free stuff. When confronted with the word "free" (as in gratis), people do things that are irrational and at odds with how a rational actor (the mythical homo economicus) is expected to behave, an assumption which is the bedrock upon which most economic theories are based.

The outcome of the various studies Ariely conducted was consistent: when faced with multiple choices, the free option was commonly chosen. With the opportunity to receive something for free, the actual value of the product or service is no longer considered. [1]

“Most transactions have an upside and a downside, but when something is FREE! we forget the downside. FREE! gives us such an emotional charge that we perceive what is being offered as immensely more valuable than it really is.”

Dan Ariely

The biases regarding "FREE!" apply not only to monetary costs, but also to time. We forgo some of our time when we wait in line for free popcorn or to enter a museum on a free-entrance day. We could have been doing something else at that time, so there's a resultant opportunity cost. [1]

Freedom isn't free, it costs folks like you and me

[2]

These biases can of course also come into play when people evaluate free (as in beer) software. In the same way that people don't take into consideration the cost of the time they spend trying to get something for "free", they often also don't consider the non-monetary costs of using FOSS.

A common retort that usually surfaces on Slashdot, Reddit or Hacker News whenever a discussion around using a Linux distribution on the Desktop takes place, is “Linux is only free if you don't value your time”.

That's of course completely true. I do value my time, took that into consideration and still concluded that I want to use GNU/Linux and free and open source software.

Using FOSS requires a certain amount of commitment, and it should be clear to the user why they are willing to go that route (freedom from vendor lock-in, the ability to control your data and keep it private, the ability to modify the code to your liking, etc.).

I think people have been hyping the "FREE!" aspect of FOSS way too much.

Software for nothing and your support for free

[3]

Note

Disclaimer

I consider a certain amount of support and maintenance as a requirement for a successful open source project and not something you (as the developer) can ignore.

I try to channel bug reports and feature requests to the Github issue tracker and general support questions to a mailing list, where hopefully other people would also be willing to share the load by answering questions.

So while I complain about people wanting "something for nothing" below, I invariably mean people who write to me directly, instead of on the issue tracker and who are often trying to get me to work on something right away.

So, when considering that many people don't properly evaluate the costs involved in using FOSS, some requests and emails that I sometimes receive start to make sense.

“Please guide me”

One common occurrence is being contacted by someone who is integrating converse.js into a project for a paying client and has somehow got stuck. Perhaps they didn't read the docs or perhaps they don't have the requisite technical skills for the job. These emails sometimes have a pleading, desperate tone to them. Perhaps to instill some sense of guilt or obligation, or perhaps just because the person is really desperate and under time pressure.

What gets me every time however, is that as far as I can tell, these are people working for commercial businesses who get paid for the work they do. They then trawl the web looking for hapless FOSS developers to do their work for them for free, or as expressed in the commonly used phrase in these kinds of emails: “Please guide me”.

The novelty and warm fuzzy feeling of altruistically helping strangers solve their problems disappears like mist before the sun when you realise that they're getting paid for the work you're doing for them right now.

And make no mistake about it, maintenance and support for an open source project is work and sometimes even drudgery. The fun part is writing new code or trying out new things, not helping people who can't be bothered to study the documentation.

We need a feature and we hope you'll do it for free

Another common theme is emails where people somehow just assume that I'll implement some feature for them. At first this presumptuousness startled me.

I think it's totally fair to ask when the project is charitable and the people involved don't receive any payment themselves, but that's often not the case.

Instead, the underlying assumption appears to be that I love working on open source projects so much that I'll do it all for free and that I don't have ideas on what to work on next.

Sometimes people qualify their requests by stating that they're a small non-profit. Non-profits do however pay out salaries, don't they?

I'd be willing to reduce my hourly rate when working for a non-profit with a good cause, but I'm most likely not going to do work for free.

The software is free, but the time spent working on it costs money

A nuance that's perhaps lost on many people, is that I have often worked on converse.js for money. There was a rather long "bootstrapping" phase in the beginning where the project wasn't good enough for anyone to actually use or pay money for further development, but after the project stabilized I started getting small paying gigs of custom development on converse.js.

In all cases I made it clear that the Mozilla Public License forces me to open source any changes I made to the covered files, and therefore the work I did for these paying customers (bless their hearts) was open sourced as well.

The point is that while the software is free (as in beer and as in speech), the time spent working on it costs money.

Either someone else pays me to spend my time working on it, or I end up paying by doing something for free while I might be getting paid doing something else (opportunity costs) or by taking time away from other activities.

FOSS development costs money, either the developer is commissioned, or they pay for it themselves (perhaps unwittingly).

Doing work for free devalues it and takes the piss out of actual paying customers

The last point I'd like to make, is that by taking on these requests to do free work for commercial entities (and non-profits), I'm not only devaluing my work, but I'm also disincentivising paying customers (which includes non-profits).

After all, why would anyone pay me to do anything if I'm so eager to please that I'll do it all for free?

The only reason I could see to do that, is to get that mythical "exposure" that's often also sold to web and graphic designers.

The Oatmeal comic: "Exposure"

[4]

So what do you do if you need work done and can't pay for it?

Free and open source software is a beautiful, world-changing and paradigm shifting idea. However, software developers, like all people, need to be paid for their work, also when they work on FOSS.

If you can't pay for software development, then you can still try to incentivise FOSS developers in other ways, but be aware that it'll be more difficult.

One important lesson that I'm glad I learned early in life, is that when you're asking someone to do something for you, then you need to explain to them why it's in their best interests to do so.

People inherently look out for themselves. It's perfectly natural and doesn't necessarily mean they're selfish to the point of being anti-social, it just means that they need to take care of themselves and that they can't expect other people to do it for them or to even have their best interests at heart.

So when desiring something from someone, such as their help, the best approach is to explain to them what's in it for them

For example, if you want a friend to help you out with something, let's say to join a beach cleanup project, you don't tell them why it'll be good for you, you explain to them that it'll be an opportunity to chat, to meet new people, to go for a swim and to have the enjoyment of a clean unspoiled beach.

This is simple stuff, but many people apparently don't know this.

So if you want someone to help you with a software project, explain to them why it would be in their best interest. If you can't find a reason why, then perhaps it's actually not in their best interest and you need to create an incentive for them.

Money works pretty well as an incentive, but there are other ways as well. One sure-fire way to build up goodwill and gratitude (that might translate into more help and assistance) is to contribute. If you can't write code, fix typos in the docs, evangelize the project or contribute in other areas which you have some expertise, like translations, design, UX, helpful feedback etc.

FOSS development is a community effort and a team sport. There'll always be people who try to take more than they give, but on average, humans are matchers. When given something, they want to reciprocate and give back. Keep that in mind when you're trying to get something for nothing.

References

[1] Wikipedia article on Predictably Irrational
[2]Sung to the tune of Freedom isn't free
[3]Sung to the tune of "Money for nothing" by Dire Straits.
[4]From The Oatmeal

Sprint Report: Merging Mockup and Patternslib

| categories: mockup, javascript, patternslib, austria, sprint, foss, plone

Alpine City Sprint, 21 to 26 January 2015

This is a report on what I did at the Plone Alpine City Sprint organized by Jens Klein and Christina Baumgartner in Innsbruck between 21 and 26 January 2015.

Firstly, I want to thank them for organizing and hosting a great sprint and for being such friendly and generous hosts.

My goal for this sprint was to work on merging the Patternslib and Mockup Javascript frameworks and I'm happy to report that very good progress was made.

Long story short, the merge was successful and it's now possible to use patterns written for either project within a single framework.

Before I expand on how this was achieved during this sprint, I'll first provide some necessary background information to place things into context and to explain why this work was necessary.

What is Patternslib?

Patternslib brings webdesign and development together.

Patternslib's goal is to allow website designers to build rich interactive prototypes without having to write any Javascript. Instead the designer simply writes normal HTML and then adds dynamism and interactivity by adding special HTML classes and data-attributes to selected elements.

For example, here is a Pattern which injects a list of recent blog posts into an element in this page:

Click here to show recent blog posts

The declarative markup for this pattern looks as follows:

<section id="alpine-blog-injected">
    <a href="#portlet-recent-blogs"
       class="pat-inject"
       data-pat-inject="target: #alpine-blog-injected">

        Click here to show recent blog posts
    </a>
</section>

The dynamic behavior comes from the fact that I have the Patternslib Javascript library loaded in this page.

On pageload, Patternslib scans the DOM looking for elements which declare any of the already registered patterns. If it finds such an element, it invokes a Javascript module associated with that pattern.

In the above example, the <a> element declares that it wants the Inject pattern applied to it, by virtue of it having the pat-inject HTML class.

Patterns are configured with specific HTML5 data properties. In the example above, it is the data-pat-inject property which specifies that the target for injection is the #alpine-blog-injected element in the current page and the content being injected is specified by the href attribute of the <a> element. In this case, the href points to an anchor inside the current page, but it might just as well be a link to another page which will then get injected via Ajax.

More information on how to configure pat-inject can be found on the patternslib website.

Each pattern has a corresponding Javascript module which is often just a wrapper around an existing Javascript library (such as Parsley, Select2 or Pickadate) that properly defines the configuration options for the pattern and properly invokes or applies the 3rd party library.

So, in order to do all this, Patternslib can be broken down into the following components (a simplified sketch of the registry and scanner idea follows the list below):

  • A registry which lists all the available patterns.
  • A scanner which scans the DOM to identify declared patterns.
  • A parser which parses matched DOM elements for configuration settings.
  • The individual Javascript modules which implement the different patterns.
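Here's a highly simplified, hypothetical sketch of that registry-plus-scanner idea. It is not the actual Patternslib API, just an illustration of the moving parts:

// Not the real Patternslib code: a toy registry and scanner to illustrate
// how declarative patterns get picked up and initialized.
var registry = {
    patterns: {},
    register: function (pattern) {
        // Each pattern announces a name and the CSS selector that triggers it.
        this.patterns[pattern.name] = pattern;
    },
    scan: function (root) {
        // Walk the DOM and hand every matching element to its pattern.
        Object.keys(this.patterns).forEach(function (name) {
            var pattern = this.patterns[name];
            Array.prototype.forEach.call(
                root.querySelectorAll(pattern.trigger),
                function (el) { pattern.init(el); }  // the pattern parses its data-pat-* config here
            );
        }, this);
    }
};

// A stand-in pattern, registered the way something like pat-inject would be.
registry.register({
    name: "example",
    trigger: ".pat-example",
    init: function (el) {
        el.textContent = el.getAttribute("data-pat-example") || "initialized";
    }
});

document.addEventListener("DOMContentLoaded", function () {
    registry.scan(document);
});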

What is Mockup?

Mockup: declarative interaction patterns for Plone.

Mockup was inspired by, and originally based upon, Patternslib and was meant to bring the power of declarative interaction patterns to Plone.

When Mockup started, Patternslib was undergoing significant refactoring and development and it was decided that Mockup should fork and go its own way.

What this means is that the 4 different components mentioned above were all changed, and due to these changes the Mockup project diverged from Patternslib and started developing in a similar but different direction.

So what's the problem?

While Mockup was being developed for the upcoming Plone 5 release, we at Syslab continued using and improving Patternslib in our projects.

Syslab built an intranet for the Star Alliance, which was based on a prototype design by Cornelis Kolbach, the conceptual creator of Patternslib. This design became the inspiration and blueprint for the Plone Intranet Consortium (PIC), which consists of 12 companies working together in a consortium to build an intranet solution on top of Plone.

So, the PIC are building a product using Patternslib, while Plone 5 itself is being built with Mockup, an incompatible fork of Patternslib.

This was clearly a bad situation because we now had:

  • Two incompatible Javascript frameworks being used with Plone.

    Not only were the two frameworks incompatible in the sense that patterns written for the one don't work on the other, but they could also not be used on the same page since the two implementations would compete with one another in invoking Javascript to act upon the same DOM elements.

  • Duplication of effort

    The same or similar patterns were being developed for both frameworks, and when one framework had a pattern which the other wanted, it could only be used after being modified such that it couldn't be used in the original framework anymore.

  • A splitting of the available workforce.

    Developers were either working on Mockup or Patternslib, but almost never on both, which meant that the expertise and experience of developers wasn't being shared between the two projects.

How could this be fixed?

To solve the 3 main problems mentioned above, we needed to merge the common elements of Mockup (specifically the registry, scanner and parser) back into Patternslib.

This will allow developers from both projects to work on the same codebase and enable us to use patterns from both projects together.

At the Alpine City Sprint in Innsbruck, I worked on achieving these goals.

Changes brought in by Mockup

After the fork, Mockup introduced various changes and features which set it apart from Patternslib.

In order to merge Mockup back into Patternslib, I studied these changes and with the help of others came up with strategies on what needed to be done.

Here are some differences and what was done about them:

Mockup allows patterns to also be configured via JSON, whereas Patternslib uses a keyword: argument; format

A week before the sprint I added JSON parsing ability to the Patternslib parser, thereby resolving this difference.

Leaves first parsing versus root first parsing

Mockup parses the DOM from the outside in ("root first"), while Patternslib parses the DOM from the inside out ("leaves first").

According to Rok Garbas, the creator of Mockup, the outside-in parsing was done because it reduced complexity in the scanner and the individual patterns.

Wichert Akkerman, who originally wrote the Patternslib scanner, however provided IMO a very good reason why he chose "leaves first" DOM scanning:

If I remember correctly there are several patterns that rely on any changes in child nodes to have already been made. This is true for several reasons: 1) a pattern may want to attach event handlers to child nodes, which will break if those child nodes are later replaced, and 2) child nodes changing size might impact any measurements made earlier.

Indeed, while refactoring the Mockup code during the merge, I ran into just such a case where a pattern couldn't yet be initialized because patterns inside it weren't yet initialized. By turning around the order of DOM scanning, this problem was resolved and the code in that pattern could be simplified.

So, now after the merge, scanning is "leaves-first" for Mockup patterns as well.
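To make the ordering concrete, here's a simplified, hypothetical illustration of what "leaves first" means (this is not the real scanner code): matched elements are sorted by their depth in the DOM, deepest first, so inner patterns are initialized before the patterns wrapping them.

// Toy illustration of "leaves first" ordering, not the actual Patternslib scanner.
function depth(el) {
    var d = 0;
    while (el.parentElement) {
        d += 1;
        el = el.parentElement;
    }
    return d;
}

function scanLeavesFirst(root, selector, init) {
    var matches = Array.prototype.slice.call(root.querySelectorAll(selector));
    matches.sort(function (a, b) { return depth(b) - depth(a); });  // deepest elements first
    matches.forEach(init);
}

// Initialize everything that declares a pattern, innermost elements first.
scanLeavesFirst(document, "[class*='pat-']", function (el) {
    console.log("initializing", el);
});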

Mockup patterns are extended from a Base object, very similar to how Backbone does it

Patternslib patterns on the other hand are simple Javascript objects without constructors.

The patternslib patterns are conceptually very simple and more explicit.

However, what I like about the Mockup approach is that you have a separate instance with its own private closure for each DOM element for which it is invoked.

After merging, we now effectively have two different methods for writing patterns for Patternslib: the original "vanilla" way, and the Mockup way. A rough illustration of the difference follows below.
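This sketch glosses over the actual APIs of both projects; it only illustrates the conceptual difference between a shared plain object and a per-element instance with its own closure:

// A simplified contrast, not the exact API of either project.

// "Vanilla" Patternslib style: a single plain object, shared by every element
// it's applied to. Any per-element state has to live on the element itself.
var vanillaPattern = {
    name: "highlight",
    trigger: ".pat-highlight",
    init: function (el) {
        el.style.backgroundColor = "yellow";
    }
};

// Mockup style: a constructor is instantiated per matched element, so each
// instance has its own private closure (here, a click counter).
function HighlightPattern(el) {
    var clicks = 0;  // private to this particular element's instance
    el.addEventListener("click", function () {
        clicks += 1;
        el.title = "clicked " + clicks + " times";
    });
}

Array.prototype.forEach.call(document.querySelectorAll(".pat-highlight"), function (el) {
    vanillaPattern.init(el);
    new HighlightPattern(el);
});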

The Mockup parser was completely rewritten

The Mockup parser looks nothing like the Patternslib one and also supports one less configuration syntax (the so-called "shorthand" notation).

This was one piece of code which could not be merged within the time available at the Alpine City Sprint.

So currently we still have two different argument parsers. The Patternslib parser needs to be configured much more explicitly, while the Mockup one is more implicit and makes more assumptions.

Merging these two parsers will probably have to be done at some future sprint.

There are some other more minor differences, such as that every Mockup pattern is automatically registered as a jQuery plugin, but merging these was comparatively easy and I won't go into further detail on them.

What I did during the Alpine Sprint

So, in summary:

I refactored the Patternslib registry, scanner and some core utils to let them handle Mockup patterns as well. Luckily the eventual patch for this was quite small and readable.

I changed the Mockup Base pattern so that patterns derived from it now rely on the registry and scanner from Patternslib.

I fixed lots of tests and also wrote some new tests.

This whole task would have been much more difficult and error prone if either Patternslib or Mockup had fewer tests. The Mockup team deserves praise for taking testing very seriously and this allowed me to refactor and merge with much more confidence.

What else is there to still be done?

Being able to use patterns from both projects and merging most of the forked code was a big win, but there are still various things that need to be done to make the merge complete and viable.

I've ranked them from what I think is most important to least important.

1. Update the documentation

Currently documentation is scattered and silo'd in various places (the Patternslib website, the Plone Intranet developer docs and the Mockup developer docs).

The Mockup docs are now out of date after this merge and need to be brought up to date on these recent changes.

The Patternslib docs are less of a problem because they don't have to deal with Mockup (which can now be seen as an enhancement suite for it), but they can definitely still be improved upon, specifically with an eye on Mockup developers who will start relying on them.

The Plone Intranet consortium also has a useful walkthrough explaining how to create a Patternslib pattern from scratch.

2. Devise a way to also use the Patternslib parser for Mockup patterns

As mentioned, the Mockup patterns still use their own argument parser.

Letting them use Patternslib's parser will require either extending the Mockup Base pattern to configure the Patternslib parser on a pattern's behalf, or doing it explicitly in each of the individual patterns.

3. Decide on coding and configuration standards

Unfortunately the coding standards between the two projects differ significantly.

  • The Mockup developers use camelCase for declarative argument names, while Patternslib uses dash-separated names.
  • The Mockup source code uses 2 spaces for indentation, Patternslib uses 4 spaces.

4. Remove unnecessary, duplicated patterns

Both projects have patterns for modals, injection, select2 and file upload. These should be merged to get rid of duplication.

5. Move generic patterns (e.g. pat-pickadate, pat-moment etc.) out of Mockup

Mockup has some generic patterns which might also be useful as Patternslib core patterns, in which case they should ideally be moved there.

Conclusion

The sprint was a big success and I'm very happy that all the work has already been merged into the master branches of the patternslib, mockup and mockup-core repositories.

Core code from Mockup is now successfully merged back into Patternslib and we can use patterns from both projects in the same site.

I had a lot of fun working with talented and motivated software developers and had a lot of opportunities to improve my German.

Thanks again Jens and Christina for organising a great sprint and I look forward to doing such a sprint again!

Innsbruck, the alpine city

The Solution to Jaron Lanier's "Siren Servers"

| categories: economics, foss, open-source, surveillance

In his book Who Owns the Future?, Jaron Lanier claims that the information on the web is undervalued and underpriced.

According to him this is hollowing out the middle class because people aren't being sufficiently valued for their information-producing work, while a new rich elite is making vast sums of money by collecting and analysing this cheap information.

Who owns the future? by Jaron Lanier

In many cases the information being collected and analysed is given freely to companies by the users themselves in return for email, calendaring and social networking features.

This results in information asymmetry (i.e. who knows what about whom) and therefore a large disparity in power and wealth. Google knows everything about you and yet you know nothing of value about Google.

Siren Servers

These online services are hosted on servers, which Lanier has termed Siren Servers [1], after the Sirens of Greek mythology. The Sirens are beautiful but dangerous creatures who lure nearby sailors with their enchanting music and voices, causing them to shipwreck on the rocky coast of their island.

These Siren Servers lure us into opening up our lives, so that they might freely gather data from us, on a truly massive scale, which they then analyse and profit from.

The results of this analysis are "kept secret and used to manipulate the rest of the world to advantage."

So much value is extracted from this data, that these Siren Servers are valued in the billions of dollars.

The Siren, Edward Armitage, 1888

The Siren, Edward Armitage, 1888

Lanier claims that the users of these servers are shortchanged, in that they don't get their data's worth in return. Not to mention the broader societal damage wrought by the surveillance-as-business-model engendered by this approach as well as the decimation of the middle class as software takes over many industries.

Siren Servers reduce risk for themselves, but in the process increase the risk for all the other smaller players. Uber's business model for example is all about shifting risks onto its drivers. They would not be profitable if they had to carry the costs of commercial insurance, licensing, vehicle maintenance and oversight of drivers.

"The total amount of risk in the market as a whole stays the same, perhaps, but it's not distributed evenly. Instead the smaller players take on more risk while the player with the biggest computer takes on less."

Lanier's solution to the emergence of Siren Servers is to "properly account" for the wealth created by people online. In other words, people need to be paid for what they do online, no matter how small and seemingly insignificant.

"If information age accounting were complete and honest, as much information as possible would be valued in economic terms. If, however "raw" information, or information that hasn't yet been routed by those who run the most central computers, isn't valued, then a massive disenfranchisement will take place."

The decimation of the middle class

Lanier says this process started in the creative industry, with musicians and photographers impacted quite early on, and that it's now spreading to other occupations, such as travel agents, estate agents and journalists.

Steve Albini playing guitar

Steve Albini playing guitar (2007)

Lanier writes:

"Making information free is survivable so long as only limited numbers of people are disenfranchised. As much as it pains me to say so, we can survive if we only destroy the middle classes of musicians, journalists and photographers. What is not survivable is the additional destruction of the middle classes in transportation, manufacturing, energy, office work, education and health care. And all that destruction will come surely if the dominant idea of an information economy isn't improved."

In other words, until we reconfigure the economic system to value raw data by compensating the people that generate it, we will have massive inequality.

Not everybody is of course as pessimistic or draws these same conclusions. Steve Albini, producer of Nirvana and Pixies albums and himself a touring musician, says in this keynote that the Internet has solved many problems for musicians and audiences alike and that things are actually better now than before.

Finance got networked in the wrong way

As an example of the destruction being wrought by the Siren Servers, Lanier asserts that the Great Recession was in part caused by their existence in the financial sector.

"Consider the expansion of the financial sector prior to the Great Recession. It's not as if that sector was accomplishing any more than it ever had. If it's product is to manage risk, it clearly did a terrible job. It expanded purely because of its top positions on networks. Moral hazard has never met a more efficient amplifier than a digital network. The more influential digital networks become, the more potential moral hazard we'll see, unless we change the architecture."

It's clear that Lanier believes that this problem can be solved by reconfiguring and improving the architecture of our digital networks.

Siren Servers and the rise of surveillance as a business model

As has become increasingly clear in the past few years, the dominant business model in Silicon Valley is one based upon surveillance.

Surveillance is the mechanism by which the Siren Servers gather much of the data they analyse and profit from.

Surveillance cameras

According to Lanier, many Silicon Valley techno-libertarians handwave any concerns about corporate surveillance.

"Surveillance by the technical few on the less technical many can be tolerated for now because of hopes for an endgame on which everything will become transparent to everyone. Network entrepeneurs and cyber-activists alike seem to imagine that today's elite network servers in positions of information supremacy will eventually become eternally benign or just dissolve."

However, what good reason is there for owners of these immensely powerful and valuable Siren Servers to one day willingly become transparent and open up in order for this power imbalance to be resolved?

On the supposed emergence of some kind of sharing/socialist utopia arising out of the aggressive right-wing libertarian practices of Silicon Valley, Lanier mocks:

"Free Google tools and free Twitter are leading to a world where everything is free because people share, but isn't it great that we can corner billions of dollars by gathering data no on else has?" If everything will be free, why are we trying to corner anything? Are our fortunes only temporary? Will they become moot when we're done?"

These "network servers in positions of information supremacy" hide their code and algorithms behind patents, IP-laws and copyright.

A whole legal facade has been erected precisely to prevent the hopeful scenario of "elite network servers" dissolving into a utopian abundance.

Lanier's proposed solutions

Cyber-keynesianism

Lanier describes his proposed solution as a sort of Cyber-Keynesianism, based upon the idea from the economist J.M. Keynes that "stimulus" might be able to kick an economy out of a rut.

Lanier provides a qualitative explanation of the stimulus theory which I haven't heard before.

On a multi-dimensional mathematical landscape (as could be plotted based on the parameters of an economy), there are peaks and valleys. Let's assume that peaks are "optimal" economic configurations, where the greater good is best served, and valleys are horror scenarios (depression, recession, collapse, etc.).

Not all peaks are of equal height, and when you are on a peak, you are usually surrounded by valleys. There might be a higher peak (i.e. a better configuration) available, but it's not clear how to get there, since moving there might mean moving through valleys.

The idea behind stimulus is to provide the impetus, a kick if you will, to make this transition to a higher peak.

Technical Solution

Lanier's solution on a technical level is to use a network with bidirectional links, instead of the unidirectional links we currently have on the web. The architecture of this network is inspired by Project Xanadu, founded by Ted Nelson in 1960.

Whenever someone publishes something on the web they would be informed if someone else links to their work, due to the bidirectionality of the links. This would supposedly put a stop to copyright infringement and also allow content creators to correctly identify and bill the consumers of their media.

Of course, this would only work if there were no anonymity on the web.

Lanier further proposes a single, public, digitally networked marketplace where every participant is tied to their personal real-life identity, and where anything anyone creates online has to be paid for.


My Criticism

I thoroughly enjoyed reading Lanier's analyses, and his descriptions of the mechanisms behind Siren Servers in particular. This was for me the best part of the book.

However, he often hand-waves obvious counter arguments or advances his own arguments in a non-rigorous way, often relying on personal anecdotes.

Digital networks' supposed role in hollowing out of the middle class

Considering his central thesis, that digital networking is hollowing out the middle class in developed societies, I'm not sure how much blame for that can be assigned to digital networking. Other plausible factors include demographic change, globalisation and the money printing policies of Central Banks which are fueling asset price bubbles while real wages remain stagnant, thereby making wage-earners poorer.

My expectation is that no particular factor can be singled out, and that the causes are multiple, varied and intertwined in complex and difficult to understand ways, precisely because the economy is a complex, chaotic system (in a mathematical sense).

Lanier fails to meaningfully address Free and Open Source software

Lanier only very superficially addresses the role of free and open source software (FOSS) and doesn't even mention (let alone extensively cover or deconstruct) the ideology of Free Software at all.

As a quick recap to people who don't know the difference between "Free Software" and "Open Source software":

Both groups concern themselves with creating software where the source code is visible, as opposed to software where you cannot inspect the source code and are therefore unable to modify it or know what it is really doing.

Gnu and Linux

The GNU and Linux mascots. The GNU represents Free Software

"Free Software" (as in freedom, not price) however stresses a moral and ethical dimension of software development and usage. It basically boils down to the fact that non-free software creates a dangerous and immoral power imbalance, slanted in favor of the creator of the non-free software and against the user of that software. [2]

Open Source software on the other hand ignores the moral dimension completely and takes a much more expedient approach. It simply asserts that developing software in the "open", in other words, with the source exposed, is a superior form of software development which will result in better code.

In the index of this book, there is no entry for "Free Software" or "Open Source Software". There is a single entry for "open source applications".

Lanier doesn't even attempt to address the viewpoint of the Free Software movement, which has been warning about these issues (structural power imbalance, Siren Servers, ubiquitous online surveillance) for decades. This is in my opinion a glaring omission from the book.

Lanier might have coined the catchy phrase Siren Servers, but his analysis of their role and negative effects on users and society might have well come from someone from the Free Software movement. See for example the article Who does that server really serve?.

As the saying goes, "There is no cloud, only other people's computers" [3]

Lanier, of course, comes to a completely different conclusion on what needs to be done to resolve the situation, and appears to hold proponents of Free and Open Source Software (dismissively calling them "rascals", "openness crusaders" and Pirate Party types) not only in contempt, but partly responsible for the emergence of Siren Servers and the rise of surveillance as a business model.

It is clear that if Lanier were to meaningfully address all ideological opponents of his vision for the future, then he would have had to address the ideas behind Free Software in a methodical and rigorous manner and not with dismissive hand-waving.

The fact of the matter is that Free Software does provide an alternative solution to the current mess of online surveillance, increasing risk and fragilization, extreme power imbalances and centralization into Siren Servers.

The solution is to ensure that the software you use is free. If software being used by a service is free and issued under a license that ensures its freeness, then that server will not have the ability to eventually become a Siren Server.

The reason for this is that users will know exactly what the software does with their data and will have the ability to switch to a different provider (while taking all their data with them), or to host it themselves, at a moment's notice.

This means that the power balance is no longer slanted in favor of the service provider, and users are much more in control of their data and content.

Lanier's hypocrisy

Throughout the book, Lanier criticises in particular Google and Facebook for their creepy surveillance business models and their usage of Siren Servers.

Lanier himself works for Microsoft, a company that is feverishly competing with Google and Facebook in surveilling users and in establishing their own Siren Servers.

However, Lanier of course does not at all address this conflict of interest or the fact that his employer is just as complicit as the companies he calls out. This apparent hypocrisy severely detracts from the integrity of his book.

Lanier's superficial and misconstrued criticisms of "openness"

Lanier's refusal to meaningfully engage with the different ideas and philosophies behind what he calls "openness" is for me one of the most disappointing aspects of the book.

Wikipedia

He deceptively equates Wikipedia and Facebook, saying that both are a culmination of the "information should be free" meme and that both contribute to the devaluation of user created content. In a previous book, he even calls Wikipedia "Digital Maoism".

Wikipedia is concerned with creating a digital Commons. By voluntarily contributing to the commons (as Wikipedia contributors do), you are not devaluing your work, you are making it priceless, in both the literal and figurative senses of the word.

Conversely, the information on Facebook is not free and is not part of a commons.

I find it good to know that there are some things in this world on which one cannot put a price tag, despite the efforts of people like Lanier.

As the saying goes. "A cynic is someone who knows the price of everything and the value of nothing."

Wikileaks

Lanier incorrectly claims that Wikileaks is concerned with abolishing all privacy. Anyone who has made even a half-arsed effort to understand what Wikileaks is about would know that they stand for transparency of the powerful and their institutions, in order to keep them democratically accountable, and that they don't advocate the elimination of all personal privacy.

Of course, this is a common mischaracterization of this organization, often done to intentionally discredit them. Either Lanier is also trying to discredit them or he couldn't be bothered to properly research their stated goals.

Linux

Lanier claims: "A linux always begets a Google" as if it's is somehow responsible for the emergence of Siren Servers. This doesn't explain Siren Servers built upon proprietary software, such as StackOverflow which uses Microsoft.

Inasmuch as one could argue that Siren Servers are the result of open source software (note the distinction), it is exactly due to open source's gutting of the moral aspect out of free software.

Lanier wants to remove anonymity on the web

Lanier criticises tech companies for their creepy surveillance business models, and yet his proposal for "fixing" the internet would eradicate all privacy online and have all human interactions facilitated via a single, public market place. It would be a surveillance and privacy nightmare.

Concerning anonymity on the web, Lanier says this is the result of the "pot-smoking liberals" and "paranoid conservatives" who originally built the system and who thought anonymity was "cool". This is a perfect example of his dismissive, non-rigorous and arbitrary reductionism, which completely fails to meaningfully confront the potential benefits or dangers of anonymity on the web.

Lanier completely fails to address the fact that anonymity is often the only defence people may have against corrupt and malevolent state power. Whether this applies to activists during revolutions or to conscientious whistleblowers, doing away with privacy would be throwing away one of the few ways of ensuring free speech online.

Tahrir Square on November 27, 2012

Tahrir Square protests in 2012.

Monetizing all human interactions debases them

By wanting to turn the web into a giant marketplace where everyone operates with an exposed identity and every digital creation, no matter how trivial, needs to be paid for, Lanier wants to further the already ongoing monetization of the public commons. As Charles Eisenstein writes in Sacred Economics, monetizing all human interactions actually debases them.

Lanier calls his approach "humanitarian"; it is anything but. Instead, by monetizing and thereby commodifying every online human interaction, whether it's a joke, a compliment or a statement of support, these interactions are stripped of their inherent sacred quality of humanity.

If you know that someone who sends you a message of support or love will receive a micropayment for that message, could the original intent of the message not be called into doubt?

My proposed solutions

It's about freedom, not cost

Lanier's proposed solution is ultimately about cost: finding a way for people to get paid for anything they write and publish online, no matter how trivial.

However, in so doing he comes up with a proposed solution that does away with all privacy and anonymity.

His proposal of a network with bidirectional links might threaten the Siren Server status of services such as Facebook and Google. However, it's not clear how such a network would solve the problem of Uber or Airbnb becoming Siren Servers, thereby pushing out the middle-class jobs of taxi drivers and guest house operators.

It's not only about the network architecture, as Lanier suggests. What's more important is Software and Computing Freedom.

If you don't have the freedom to study, use and modify the software that you use, then you will be systematically exploited and preyed upon, no matter what the network architecture looks like.

Free software, specifically, is defined by these four freedoms:

  • The freedom to run the program as you wish, for any purpose.
  • The freedom to read the source code and to modify it as you wish.
  • The freedom to redistribute original copies so you can help your neighbor.
  • The freedom to distribute your own modified copies to others.

Being able to read the source code is essential for software to be free. An organisation is, however, only compelled to share its source code if it also distributes compiled copies. If the software is used privately, i.e. in-house, then it is under no such obligation.

Open source, on the other hand, also gives the right to read the source code, but many open source licenses do not ensure the last two freedoms and therefore offer weaker protection.

There is however a large overlap between the two groups. Most open source software is also free software, and vice versa.

Free software coupled with free and open protocols means that users are not beholden to specific service providers. It is exactly this aspect which makes the dominant tech companies avoid free software like the plague and instead embrace the watered-down notion of "open source". Their goal is to maintain strict vendor lock-in, making it difficult for users to switch to other service providers by holding their data and online personas ransom, something they could not do if their software respected the above four freedoms.

Who pays for free software?

When discussing free software, the question often arises of how one can make money from software without enforcing artificial scarcity or vendor lock-in, holding users' data ransom, or spying on them.

This is perhaps a topic for another blog post. In short, it definitely is possible to make money writing freedom-respecting software, and the more people insist on using only free software, the more economic opportunities will arise.

Software is better seen as a living, changing and evolving ecosystem, than as a static object. It needs constant care and attention, to keep it relevant, accurate, applicable and functioning. This care and attention is a service which can and should be paid for.

Who pays for this service? The users do. Whether they are governments, universities, corporations or private individuals.

How do they pay for it? Organisations sign service and support contracts. They also commission the writing of new features or entirely new applications.

Private individuals can pay for free software through crowdfunding, bounties, and donations. [4]

Public institutions and foundations can issue grants or fund the development of free software for use in government and publicly owned enterprises.

If all governments decided to only fund and use free software, it would create a massive amount of investment in the sector. Additionally, money spent on developing free software can be invested in a country's own programmer workforce, instead of being sent overseas to oversized foreign corporations.

However, those companies that currently spy on us, hold our data ransom and restrict our digital freedoms, won't change by themselves. It's up to the users to take matters into their own hands, to demand freedom and to support those companies, organizations and people who develop free software.

Decentralization

The problems Lanier identifies are largely caused by data asymmetry (who has data on whom) and the resulting Siren Servers.

An architectural approach to solving this problem, which complements free software, is decentralization.

Decentralization refers to a network structure (called a topology) in which each node can connect directly with any other node, without the intervention of a middleman (such as Facebook or Google).

In the network topologies pictured here, the fully connected topology is the most decentralized, while the star network is the most centralized.

Network topologies.
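As a purely illustrative sketch (not from the book; the node names are made up), the two extremes pictured above can be described as simple adjacency lists, here in JavaScript:

// Star topology: every node can only talk to a central hub,
// which is exactly the position a Siren Server occupies.
var star = {
    'hub':   ['alice', 'bob', 'carol'],
    'alice': ['hub'],
    'bob':   ['hub'],
    'carol': ['hub']
};

// Fully connected topology: every node can reach every other node
// directly, with no single point of control.
var fullyConnected = {
    'alice': ['bob', 'carol'],
    'bob':   ['alice', 'carol'],
    'carol': ['alice', 'bob']
};

In the star network, removing or subverting the hub affects everyone; in the fully connected network there is no single point of control or failure.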

Creating large decentralized networks is more difficult than creating centralized ones, which is why the web has come to rely on centralized Siren Servers to such a large degree.

However, much exciting work is being done on decentralizing the web and putting people back in control.

Consider Finance, and Lanier's assertion that Siren Servers contributed to the financial crisis preceding the Great Recession.

It's more difficult to have a financial Siren Server when financial data is decentralized and therefore effectively available to everyone.

This is already the case with decentralized cryptocurrencies such as Bitcoin. The bitcoin protocol is an open and free protocol, the original bitcoin wallet is free software and the transaction information contained in the blockchain is available to all.

An improvement upon Bitcoin would be to allow truly anonymous transactions, and this appears to be on the horizon.

Cryptocurrencies such as Bitcoin are, however, only one application that can be built on top of the revolutionary distributed-consensus blockchain technology on which they rest.

There's already work on distributed cloud hosting, distributed microblogging and a myriad of other services running in a decentralized (non-Siren Server) manner.

Federation

An older idea than blockchain-based decentralization is federation. Think of the way in which you can send an email from a Gmail account to someone with a Yahoo account, or to someone who hosts their own mail server.

Why can we send email to people with different service providers, but we cannot send chat messages from a Google account to an iMessage account, or between WhatsApp and Yahoo Messenger?

This ability of email is the result of something called federation: the ability of servers from different providers, and with different implementations, to communicate and relay messages between one another.

Email is federated but Facebook messages are not, because you cannot send a Facebook message to someone outside of Facebook.

There is no technical reason why instant messaging cannot be federated. In fact, XMPP, the protocol on which many of the messaging services are originally based, allows federation.
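To make this concrete, here is a minimal sketch of what federation looks like from the client's side with Strophe.js. The JIDs, password and BOSH endpoint below are made up for illustration; the point is that the sender's and the recipient's accounts live on entirely different servers, and the servers relay the message between themselves:

var conn = new Strophe.Connection('https://chat.example.org/http-bind');

conn.connect('alice@chat.example.org', 'secret', function (status) {
    if (status === Strophe.Status.CONNECTED) {
        // The recipient's account is hosted by a completely different provider.
        // Server-to-server federation takes care of routing the message there.
        conn.send($msg({to: 'bob@otherprovider.net', type: 'chat'})
            .c('body').t('Hello from a federated network!'));
    }
});

From the client's point of view nothing special happens; just as with email, the two servers talk to each other on your behalf.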

The reason we don't have federation in messaging is because it's not in the interest of Siren Servers. It would actively undermine their dominance.

If any of the big email providers, Google, Yahoo, Microsoft or Apple, could get away with only allowing emails within their own system, they would have done it by now. The proof is in the fact that they actively prevent federation in their messaging apps. The reason they cannot do it with email is because federated email achieved critical mass before any single company could kill it off.

Unfortunately, this was not the case with instant messaging. Therefore, if you actively want to work against Siren Servers, you should only use instant messaging servers which support federation, such as XMPP.

You can sign up for a free XMPP chat account here. And if you'd like to chat with me, add me as a contact: jc@opkode.com

Conclusion

As you can see, I believe in a completely different remedy than the one which Lanier suggests. One that is based on a methodology, legal framework and software which is already in use, instead of something which Wired magazine in 1995 called the longest-running vaporware project in the history of computing.

I thoroughly enjoyed Lanier's analyses and his unique perspective on things which I think made the book worth reading.

However, his non-rigorous approach, his dismissiveness or ignorance of alternative narratives and approaches to resolving the issues he mentions, as well as his hypocritical omission of his own employer's complicity, left me disappointed and uninspired.

Footnotes:

[1] Bruce Sterling calls them "The Stacks", vertically integrated social media. See Bruce Sterling At SXSW 2013: The Best Quotes.
[2] If you're interested in more detail about Free Software, watch this presentation by Richard Stallman:

Introduction to Free Software and the Liberation of Cyberspace

[3] The earliest reference to this that I could find is this article.
[4] See Kickstarter, IndieGogo, Startjoin, Patreon, Bountysource, Gratipay and Snowdrift.
