Updated: Sept. 21, 2015
Origins of web technology
When Tim Berners-Lee
invented the Web, he was looking for a system to publish scientific documents that were remotely accessible,
visually attractive, easy to code, and easy to use by non-technical people.
In a scientific document, references to other documents are indispensable
so that the reader can optionally explore the topic in question in more depth.
For these reasons the World Wide Web was conceived as a page (document) based system
with hyperlinks.
Initially the Web was a world of static pages and links, but the generation of
dynamic pages, and in general the use of the Web as a platform for web-based
applications, soon complicated everything.
The arrival of web applications
For many years there has been a strong effort to adapt the web paradigm of
pages and links to application development. In a web application, Berners-Lee's view
of static documents and simple links does not apply.
Several application development approaches have succeeded one another:
- Model 1: a direct translation of the original model of pages and links,
where pages are dynamically generated.
- Model 2, or MVC: links no longer point directly to a concrete target
page; instead, a controller decides what the next page is, depending on the
operations that take place in the page transition.
- Component-based MVC (Model 3?): a more sophisticated version of Model 2
that simulates how desktop applications work. It is based on components and events, so
any user action implies a complete rebuild and reload of the page, partially changing
some part according to the action performed.
The page and its transitions are now managed by components that know which changes
must take place in response to each event, simulating how components work in desktop GUI programming.
In recent years the AJAX technique has been introduced; with the help
of JavaScript, AJAX allows partial changes in pages, obtaining new data from the server without reloading.
Although the partial-page-change technique long predates the introduction of XMLHttpRequest
in Internet Explorer (the basis of AJAX programming), XMLHttpRequest has been the boost behind its massive use.
Now millions of web sites and web applications use AJAX to provide a better experience
to end users, thanks to more responsive user interfaces that partially avoid annoying page reloads.
Despite the massive use of AJAX, the Web still follows a development model we could
name "Model 2 (MVC) enriched with AJAX". When using AJAX, "Model 3" does not make much sense,
because AJAX largely reduces the need for component-based page management. Because
AJAX is usually used alongside components (not necessarily present in Model 2),
we may classify the current state of the art of web development as Model 3.5,
where page navigation is partially avoided when minor state transitions are performed
with AJAX and JavaScript.
What are the disadvantages of page-based navigation and development?
Every web developer knows how problematic page navigation is in a web application.
Besides the wasted bandwidth and the processing time spent rebuilding entire pages, other problems
make web development painful: unwanted caching, the Back/Forward buttons, desynchronized
forms caused by the "form auto-fill" feature of some browsers, and so on.
It is not uncommon to see web applications that hide the menus and buttons of the browser,
or that use frames or iframes (e.g. banks), to avoid the problems of the Back/Forward buttons.
Page-based development forces a coding style that is weird, repetitive (full of includes),
and inefficient (in both bandwidth and computing power), with no counterpart in desktop development.
What prevents the intensive use of AJAX?
In the field of web development we usually distinguish two kinds of web solutions:
web applications and web sites.
In the first case AJAX is used more and more, because this kind of application
does not share some of the requirements imposed on web sites. In web sites, intensive use of AJAX
is a problem.
In public web sites end users are used to the page concept, and bound to pages
are some requirements and services expected of any web site, such as:
- Bookmarking:
Every web page has a different URL, and this URL can be saved as a bookmark. Because
AJAX can partially change the page while the URL stays the same, the end user cannot
bookmark a concrete view (state) of the page.
- Search Engine Optimization (SEO):
Any web site wants to be fully indexed by search engines like Google Search.
Current crawlers see the Web as Web 1.0; that is, JavaScript code is fully ignored,
so any partial change loaded from the server via AJAX is never executed
and is therefore not indexed by crawlers traversing the web site.
- Services based on page visits:
For instance, advertisement services like Google AdSense and page-visit monitoring such as
Google Analytics; in both cases the number of page loads is important,
so any partial change done by AJAX does not count as a new visit.
- Occasional need of pop-up windows
Because of these requirements, intensive AJAX is discouraged in web sites.
However, the difference between a "web site" and a "web application" is becoming
smaller, because almost any web site is a sort of "web application"...
Should we give up AJAX-intensive applications?
NO.
There are technical solutions for all of the requirements listed above.
Is the development of web sites based on a single web page (SPI) possible?
YES!
This is the time to start this transition; developers and end users alike
will gain. We have the technology, and modern browsers are qualified to achieve this objective.
To succeed in this "new" way of web development, we must fulfill all of the
previously listed requirements of any web site.
Goodbye pages, welcome states
In a web application without JavaScript, the sequence of states is equivalent to a sequence of pages;
in a SPI application, any partial change implies a new "state" of "the page".
Among these states we can distinguish two categories:
- Fundamental states
- Secondary states
Differentiating between both state types is very important, because fundamental
states will become web pages when needed. The fundamental/secondary distinction
is web-site dependent.
To better understand both types of states, we can study a real example:
login validation.
In a classic page-based application, a typical login is built
with two pages: one for the user name and password, and another showing the user's options
when login validation succeeds; when the login entry is wrong, the login page is
reloaded, showing error messages alongside the login form.
In a SPI web, the initial login and the user-options views could be the fundamental
states, and the error messages alongside the login form could be secondary states.
Another example: a page-based web site to be converted to SPI. In this case the
fundamental states will be the former pages, and the secondary states will be page states
with minor changes, not important enough for bookmarking or for traversal by crawlers.
Single Page Interface and Bookmarking
Different pages have different URLs. Following the SPI route, how can we change
a state and, at the same time, the URL, without reloading, so that this new state can be
bookmarked by end users?
There is a trick: using the "reference" part of the URL (the "hash fragment"; when it
starts with #! it is called a hashbang). This is the last part of the URL, if present,
following the # character. This reference is traditionally used to scroll the page
to the concrete location specified by some <a name="ref"></a> mark.
Changing this reference part does not reload the page. Hence, if the URL reference
is changed through window.location with JavaScript at the same time as the page state
is changed with AJAX (in this case the new state is "fundamental"), no reload
is performed. Because the URL and the fundamental state
have both changed, end users can save this URL, which in some way contains the new state info,
as a bookmark.
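As a minimal sketch of this trick (the function names and state encoding are illustrative, not part of any standard or framework), the fundamental state can be encoded into and decoded from the hash fragment:

```javascript
// Sketch: bookmarkable SPI states via the hash fragment (illustrative names).
// Encoding the fundamental state after "#" changes the URL without reloading.
function hashForState(state) {
  return "#" + encodeURIComponent(state);
}

// Decode the state back from a hash such as "#products".
function stateFromLocationHash(hash) {
  return hash ? decodeURIComponent(hash.substring(1)) : "";
}

// In a browser, after the AJAX partial update succeeds you would run:
//   window.location.hash = hashForState("products");
// The URL becomes e.g. http://example.com/#products and can be bookmarked.
```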
When the end user wants to come back to the bookmarked page, the target state
is specified in the reference part of the URL, and the server will be requested.
Unfortunately, the reference part is not sent to the server, because it
has nothing to do with remote location in HTTP; hence we will need a post-load
process.
The server will return an initial page in which the target state is not
the specified one; however, the window.location object still contains the original
URL, including the reference part. When the page loads, we can detect with JavaScript
whether window.location contains a reference part and whether
this reference holds the required target-state info. If so, we can rewrite the URL,
adding some kind of normal parameter to specify the target state to load. Because
the URL has actually changed, a new server request is executed; this time the
state to load is in a parameter, and the server returns a new page with the required state.
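The post-load process can be sketched as follows. The "st" parameter name is an assumption for illustration; a real site would use whatever parameter its server understands:

```javascript
// Sketch of the post-load process: if the URL carries state info in its
// reference part, build a new URL with the state in a normal parameter
// ("st" is an assumed, illustrative parameter name) so the server can see it.
function rewriteUrlForState(href) {
  var i = href.indexOf("#");
  if (i < 0) return null;                          // no reference part, nothing to do
  var base = href.substring(0, i);
  var state = href.substring(i + 1).replace(/^!/, ""); // tolerate hashbang "#!"
  if (!state) return null;
  var sep = base.indexOf("?") < 0 ? "?" : "&";
  return base + sep + "st=" + encodeURIComponent(state);
}

// Browser wiring: on load, trigger the extra request only when needed.
//   var target = rewriteUrlForState(window.location.href);
//   if (target) window.location.href = target; // new request carrying the state
```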
Another option, better than hashbangs, arrived with the advent of HTML 5: the HTML 5 History API.
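With the History API, a state change can push a clean URL without any fragment trickery. The sketch below is illustrative (the names, the URL scheme, and the injected "history" and "render" parameters are assumptions made so the logic can be shown outside a browser):

```javascript
// Sketch: SPI navigation with the HTML5 History API (illustrative names).
// urlForState builds a "pretty URL" for a fundamental state.
function urlForState(state) {
  return "/" + encodeURIComponent(state);
}

// spiNavigate pushes a new history entry and asks the application to render
// the state. "history" and "render" are injected for testability; in a
// browser, pass window.history and your AJAX state-change function.
function spiNavigate(history, render, state) {
  history.pushState({ spiState: state }, "", urlForState(state));
  render(state);
}

// Browser wiring for the Back/Forward buttons:
//   window.addEventListener("popstate", function (e) {
//     if (e.state && e.state.spiState) render(e.state.spiState);
//   });
```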
Single Page Interface and Search Engine Optimization (SEO)
The easiest way to get our web site processed by search-engine crawlers
is to offer two different navigation modes: SPI for end users, pages for web crawlers.
The next example shows a link built with this idea:
<a href="URL page" onclick="return false">…</a>
This link will do nothing in a browser with JavaScript enabled, because
navigation is disabled by the "return false" of the onclick attribute;
but when a bot indexes this link, it ignores the onclick attribute
(JavaScript code is not executed) and will process the specified URL as
the next page to crawl.
In a SPI application, the URLs used for page/state navigation must contain the target state:
either the same type of URL used for SPI bookmarking (with the reference part indicating
the target state), or with the target written directly as a normal parameter. The latter is preferred
because it avoids an extra server request; of course, "pretty URLs" can also be used.
Currently Google already crawls "AJAX URLs", that is, URLs containing the target state
in the reference part following #!, as specified in
Making AJAX Applications Crawlable;
in this case the web site/application must return the expected page when it is requested
with a _escaped_fragment_ parameter.
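The mapping the Google convention describes can be sketched as a small helper (illustrative code; the convention itself is defined in Making AJAX Applications Crawlable):

```javascript
// Sketch: map a hashbang URL to the URL Google's crawler actually requests.
// "http://example.com/#!detail" becomes
// "http://example.com/?_escaped_fragment_=detail".
function escapedFragmentUrl(url) {
  var i = url.indexOf("#!");
  if (i < 0) return url;                 // no hashbang, nothing to map
  var base = url.substring(0, i);
  var fragment = url.substring(i + 2);
  var sep = base.indexOf("?") < 0 ? "?" : "&";
  return base + sep + "_escaped_fragment_=" + encodeURIComponent(fragment);
}
```

The server must recognize the _escaped_fragment_ parameter and return the full HTML of the corresponding fundamental state.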
At the same time, the SPI web framework can add specific code to the onclick handler
before the "return false", or can bind an event listener to the link
used for state/page navigation, registered with addEventListener
or attachEvent depending on the browser. This event listener will execute some action
commanding the server, usually through AJAX, to change the page state. When the link is clicked,
this state change does not load a new page, because the attribute onclick="... return false"
avoids the default behavior.
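A minimal sketch of such a dual-mode link handler follows. All names are illustrative, and the "st" parameter convention is an assumption, not part of any framework API:

```javascript
// Sketch: SPI click handler for a link that keeps a normal href for crawlers.
// JavaScript intercepts the click and changes the state via AJAX instead of
// navigating. "changeState" is injected (e.g. your AJAX state-change function).
function onSpiLinkClick(event, href, changeState) {
  event.preventDefault();          // cancel page navigation (like "return false")
  var state = stateFromHref(href); // derive the target state from the href
  if (state) changeState(state);   // e.g. fire the AJAX request
}

// Assumed URL convention: the target state travels in a "st" parameter.
function stateFromHref(href) {
  var m = /[?&]st=([^&#]*)/.exec(href);
  return m ? decodeURIComponent(m[1]) : null;
}

// Browser wiring:
//   link.addEventListener("click", function (e) {
//     onSpiLinkClick(e, link.href, changeStateViaAjax);
//   });
```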
The technique described above, using visible links compatible with both bots and SPI,
is the simplest and most immediate. You can always separate both functions, for instance using
links visible to bots but hidden from end users, alongside other clickable elements,
invisible to bots, that change SPI states using JavaScript.
The most important feature of a SPI-capable framework is the ability to generate the page as HTML
with the required state at load time while the same state change can also
be performed with JavaScript and partial page updating. These requirements
are fundamental to providing SPI with page simulation.
SPI and Back/Forward buttons
Back/Forward buttons are a source of problems on conventional page-based web sites
and should be avoided whenever possible. Although users have learned to avoid the Back and Forward buttons
when submitting a form with user data (because of the risk of buying the same plane ticket or book twice),
the use of the Back/Forward buttons is very widespread.
Apparently the SPI paradigm breaks the traditional way of navigating a web site,
because in theory the Back/Forward buttons make no sense in SPI (there are no pages), and web browsers do not provide good control over
these buttons.
This is not entirely true: Back/Forward behavior can be simulated. Instead of page navigation,
Back/Forward (and history navigation in general) can be used to change the current
state to the previous/next state. JavaScript code can detect
when the reference part of the URL changes and ask the application to change the state
accordingly. Because the browser does not change the page, your application is now
fully responsible for the Back/Forward behavior, avoiding the typical problems of unexpected
Back/Forward use when submitting a form; in SPI there is no such form
and no page navigation beyond the control of the web application/site.
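This simulation can be sketched with the hashchange approach (illustrative code; function names are assumptions):

```javascript
// Sketch: simulating Back/Forward with the hash fragment. When the user
// presses Back or Forward, the browser changes the hash without reloading,
// and we re-apply the corresponding fundamental state.
function stateFromHash(hash) {
  // "#products" -> "products"; also strips an optional "!" (hashbang)
  return hash.replace(/^#!?/, "");
}

// "applyState" is injected: it performs the partial AJAX change of the page.
function handleHashChange(newHash, applyState) {
  var state = stateFromHash(newHash);
  if (state) applyState(state);
}

// Browser wiring (modern browsers fire "hashchange" on Back/Forward):
//   window.addEventListener("hashchange", function () {
//     handleHashChange(window.location.hash, changeStateViaAjax);
//   });
```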
SPI and services based on page visits
Ad services and page-visit counters are based on how many pages have been loaded.
In both cases you can use hidden <iframe> elements containing an empty
web page with the required scripts to execute this kind of service.
In the case of advertisement services such as Google AdSense, dynamic insertion
of an <iframe> implies loading new ads; therefore every state change could imply
a new reload of the <iframe> with ads. Google AdSense seems to detect when the
AdSense script is executed within an <iframe> and takes into account the
contents of the container page. It may be desirable to add some kind of parameter
identifying the fundamental state that is loading the <iframe>.
In the case of visit counters, we can use them to monitor user visits to the
fundamental states of our SPI web site. In this case we need a hidden <iframe>
containing an empty web page with the monitoring scripts; with a simple parameter
we can indicate the fundamental state being visited. Our <iframe> should be
global (always the same in the page).
When the page is first loaded, the fundamental state being loaded (specified in the URL)
should be indicated to the <iframe> with a parameter. After page loading,
every fundamental state change can be notified to the <iframe> by changing its URL via JavaScript
according to the new fundamental state; this URL change will reload the
<iframe> (counting as a new visit).
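A sketch of this notification (the "visit.html" page name and "state" parameter are assumptions for illustration, not any real service's API):

```javascript
// Sketch: counting fundamental-state visits with a hidden <iframe>.
// visitUrl builds the URL of an assumed empty tracking page ("visit.html")
// that contains the monitoring scripts; the "state" parameter is illustrative.
function visitUrl(state) {
  return "/visit.html?state=" + encodeURIComponent(state);
}

// Browser wiring: on every fundamental state change, repoint the global
// hidden iframe at the tracking page; the reload counts as a page view.
//   var iframe = document.getElementById("visitCounter"); // hidden <iframe>
//   iframe.src = visitUrl(newFundamentalState);
```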
SPI and pop-up windows
When a new page window is created, the SPI model is broken. But fundamentalism is bad:
there is no problem if the state of the new window has nothing to do with
the state of the parent window; in that case pop-up windows are fine.
The problem arises when an action performed on the pop-up window (modal or not)
has some influence on the parent window, because coordination between pages is complicated.
For instance, there is no web standard for creating modal windows, because the
page has traditionally always been an independent element, and therefore
its life cycle is difficult to coordinate from another page.
Fortunately, SPI has had a solution to this problem for some time: you can simulate
modal and non-modal windows inside the same web page, so no new real page window
is created. In the case of non-modal windows, any HTML element with absolute positioning
can act as a "non-modal window", and you can create modal windows by using absolute
positioning, controlling the z-index and opacity of elements "on top" of the page ("modal layers").
These solutions are valid in a SPI context.
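A minimal sketch of such modal layers (all names and style values are illustrative choices, not a prescribed recipe):

```javascript
// Sketch: a simulated modal window built from absolutely positioned layers.
// The overlay covers the whole page and intercepts clicks; the dialog sits
// above it through a higher z-index. Values are illustrative.
function modalLayerStyles() {
  return {
    overlay: {
      position: "absolute", left: "0", top: "0",
      width: "100%", height: "100%",
      zIndex: "1000", background: "#000", opacity: "0.5"
    },
    dialog: {
      position: "absolute", left: "30%", top: "20%",
      zIndex: "1001", background: "#fff"
    }
  };
}

// Browser wiring: create two <div> elements, assign the styles with
// Object.assign(div.style, modalLayerStyles().overlay) and append both to
// document.body; removing them "closes" the modal without leaving the page.
```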
With a little effort, even the state that shows a modal window can be a fundamental
state, and therefore navigable by search-engine bots.
A cultural shift for web developers
Most web developers (and web frameworks) think of the Web as based on pages;
reducing everything to a single page implies a radical change in how we think about and build
web sites and applications. Thanks to AJAX this change is not so radical: AJAX is
now mainstream and has reduced the number of pages of typical web sites;
in summary, it has brought us near this "new" SPI development model.
In the new SPI web, the <form> tag disappears, and so, in general, does the need for sessions
used as data managers following page sequences. Now the protagonist is the client page,
with some symmetry on the server (the page in the server). In fact, because we get rid of page coordination
through sessions, we are freed from a source of problems such as the habit of some users
of opening several windows with the same page, a practice that usually breaks the session and the application
in general.
SPI programming is event based, the same as on the desktop: most desktop
applications run in a single frame window, and when child windows exist
they are fully managed by the main window and are genuinely modal.
Following the evolution of web development paradigms, this "new" approach could be
named Model 4.
A cultural shift for end users?
Not very much. With bookmarking and Back/Forward simulation, end users
are not going to notice the difference between a SPI web site and its page-based equivalent;
furthermore, the SPI site will be more responsive, and the typical blinking and scrolling
of page navigation disappears.
Technical viability today
This manifesto is not a statement of intentions but an expression of the desire
to promote a "new" way of building web sites that is already real. The above technical
study has always had the Java web framework ItsNat
as the technological base for SPI web site development. Although ItsNat was
conceived from day one for this kind of application/site, the previous techniques
could be applied with other web frameworks, or those frameworks could evolve
to provide facilities for SPI web sites with page-simulation requirements.
Some requirements for SPI web sites to be able to replace traditional
page-based web sites, such as the page simulation of fundamental states at load time,
are only possible with server-centric web frameworks, because the HTML rendering must be
done on the server at load time. HTML rendering at load time, with the same markup dynamically
loaded and inserted with JavaScript, is the key characteristic of a web framework
ready to build SPI web sites. Client-centric frameworks could play a major
role in the realization of the so-called secondary states.
Two real world examples
The web site innowhere.com/jnieasy
is made with ItsNat on the server and is a good example of a SPI web site, because
it sums up all the requirements of a SPI web site, explained in this document,
needed to be a satisfactory substitute for a traditional site. In fact, the new SPI version
replaced the previous page-based version with no significant aesthetic or functional change.
It is based on hashbangs.
Characteristics
- Single Page Interface: the Back and Forward buttons are simulated by changing to
the previously or subsequently visited state.
- Fundamental states can be saved as bookmarks.
- SEO compatible: fundamental states are reachable with JavaScript disabled
including a modal window.
- The hashbang #! format is used, that is, Google SEO compatible "AJAX URLs";
the page is also requested following the Google convention of the _escaped_fragment_
parameter. For instance, this state is crawled by Google requesting this URL.
- Works with JavaScript disabled.
- Shows an ad banner based on Google AdSense.
- Despite being SPI, browsing through fundamental states is monitored by Google Analytics
using a hidden <iframe> whose URL changes when the current fundamental state changes.
- A simulated modal window avoids creating a new page window; this simulated window
is also reachable via a direct URL or a hashbang version, with the text already
in the markup at load time, and is consequently SEO compatible.
The web site www.itsnat.org
is also made with ItsNat on the server. In this case the JavaScript History API is used. This is the best approach
for converting a conventional web site to a SPI SEO-compatible version. If the History API is not supported by a concrete old browser,
conventional page navigation is automatically used.
All modern web browsers support the JavaScript History API. The SPI characteristics of this web site are basically the same as in the previous example.
The Manifesto in other languages
Spanish
Note: these translations may be slightly out of date because this manifesto is "alive".
Ukrainian thanks to Mario Pozner
Russian thanks to Andrey Geonya
Serbo-Croatian thanks to Jovana Milutinovich
Slovakian translation thanks to Knowledge Team
German thanks to Valeria Aleksandrova.
Romanian translation provided by Science Team.
Macedonian thanks to Katerina Nestiv
Hungarian thanks to Elana Pavlet
Estonian thanks to Valerie Bastiaan
Links pointing to the manifesto
Discussion at DZone
Discussion at TheServerSide.com
Discussion at YCombinator.com
Discussion at JavaHispano.org (in Spanish)
Modern Principles in Web Development presentation based on the SPI manifesto
Author of manifesto: Jose Maria Arranz Santamaria