Prior to the popularity of the web, client/server applications were often built as native applications deployed to clients. In this model, developers had a great deal of freedom in deciding which parts of the application would run on the client and which on the server. Consequently, very mature models for client/server development emerged, and a well-designed, near-optimal distribution of processing and logic could often be achieved. When the web took off, the client was no longer a viable application platform; it was really more of a document viewer. Consequently, the user interface logic lived almost entirely on the server. However, the web has matured substantially and has proven itself a reasonable application platform, so we can once again utilize efficient, well-structured client/server design. There are certainly still technical issues, but we are now in a better position to build true client/server applications.
The client/server model can be divided into three parts:
- User Interface
- Business or Application Logic
- Data Management
Traditional web application development has distributed the implementation of the user interface across the network, with much of the user interface logic and code executed on the server (thin client, fat server). This has several key problems:
- Poor distribution of processing – With a large number of clients, doing all the processing on the server is inefficient.
- High user response latency – Traditional web applications are not responsive enough. High-quality user interaction is very sensitive to latency, and very fast response is essential.
- Difficult programming model – Programming a user interface across the client/server boundary is simply difficult. When every interaction with a user must involve a request/response, user interface design becomes complicated and error prone. The vast number of web frameworks for simplifying web development testifies to this inherent difficulty; some mitigate it to a degree.
- Increased attack surface – Disorganized mingling of user interface code with business code can increase security risks. If access rules are scattered across user interface code, new vectors of attack emerge as that code grows and evolves. With mixed code, new user interface features can easily create new security holes.
- Heavy state management on the server – When client user interface state is maintained by the server, resource utilization increases significantly: server-side sessions must be kept alive, potentially holding large object structures. Usually these resources can’t be released until the session times out, often 30 minutes after the user has actually left the site. This reduces performance and scalability.
- Offline Difficulties – Adding offline capabilities to a web application can be a tremendous project when user interface code is predominantly on the server. The user interface code must be ported to run on the client in offline situations.
- Reduced opportunity for interoperability – When client/server communication is composed of transferring internal parts of the user interface to the browser, it can be very difficult to understand this communication and utilize it for other applications.
With the massive move of development to the web, developers have frequently complained of the idiosyncrasies of different browsers and demanded more standards compliance. However, the new major shift in development is towards interconnectivity of different services and mashups. We will once again feel the pain of programming against differing APIs. However, we won’t have browser vendors to point our fingers at. This will be the fault of web developers for creating web applications with proprietary communication techniques.
Much of the Ajax movement has centered on moving user interface code to the client. The maturing of the browser platform and the availability of HTTP client capabilities in the XMLHttpRequest object have allowed much more comprehensive client-side user interface implementations. However, with these newfound capabilities, it is important to understand how to build client/server applications.
So how do you decide what code should run on the client and what should run on the server? I have mentioned the problems with user interface code running on the server. Conversely, running business logic and/or data management on the client is simply not acceptable for security reasons. Therefore, quite simply, user interface code is best run in the browser, and application/business logic and data management are best run on the server. We can take a valuable lesson from object-oriented programming to guide this model. Good OO design involves creating objects that encapsulate most of their behavior and have a minimal surface area; it should be intuitive and easy to interact with a well-designed object interface. Likewise, client and server interaction should be built on a well-designed interface. Designing for a modular, reusable remote interface is often called service-oriented architecture (SOA): data is communicated through a defined API, rather than as incoherent chunks of user interface. A high-quality client/server implementation should have a simple interface between the client and server. The client-side user interface should encapsulate as much of the presentation and user interaction code as possible, and the server-side code should encapsulate the security, behavior rules, and data interaction. Web applications should be cleanly divided into two basic elements, the user interface and the web service, with a strong emphasis on minimal surface area between them.
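To make this concrete, here is a minimal sketch of the interaction this separation implies, assuming a hypothetical /customers resource that returns JSON: the server hands over data, and the client is responsible for turning it into user interface.

```javascript
// Hypothetical endpoint: the server exposes data (GET /customers/42 returns
// JSON), and all presentation logic stays in the browser.
var xhr = new XMLHttpRequest();
xhr.open("GET", "/customers/42", true);
xhr.setRequestHeader("Accept", "application/json");
xhr.onreadystatechange = function () {
  if (xhr.readyState === 4 && xhr.status === 200) {
    // e.g. {"id": 42, "name": "..."} (use json2.js where native JSON is missing)
    var customer = JSON.parse(xhr.responseText);
    // Presentation is the client's job: turn the data into UI here.
    document.getElementById("customerName").innerHTML = customer.name;
  }
};
xhr.send(null);
```

Note that the response contains no markup; a different client (a mobile front end, another service) could consume the same resource unchanged.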
An excellent litmus test for a good client/server model is how easy it is to create a new user interface for the same application. A well-designed client/server model has clearly defined web services, such that a new user interface can be designed without modifying server-side application logic; a new client can simply connect to the web services and utilize them. Communication should be primarily composed of data, not portions of user interface. The advantages of a clean client/server model, where user interface logic and code are delegated to the browser:
- Scalability – The scalability advantage of client-side processing is easy to observe: every additional client brings an additional machine to do the work, whereas server processing capacity remains constant (until you buy more servers).
- Immediate user response – Client side code can immediately react to user input, rather than waiting for network transfers.
- Organized programming model – The user interface is properly segmented from application business logic. Such a model also provides a cleaner approach to security. When all requests go through user interface code, data can flow through various interfaces before security checks take place, making security analysis more complicated, with complex flows to analyze. With a clean web service interface, on the other hand, there is a well-defined gateway for security to act on; analysis is more straightforward, and holes can be quickly found and corrected.
- Client side state management – Maintaining transient session state on the client reduces the memory load on the server. It also lets clients use more RESTful interaction, which can further improve scalability and caching opportunities (see the sketch after this list).
- Offline applications – If much of the code for an application is already built to run on the client, creating an offline version of the application will almost certainly be easier.
- Interoperability – By using structured data with minimal APIs for interaction, it is much easier to connect additional consumers and producers to interact with existing systems.
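As a sketch of the state-management point above (the resource and function names are hypothetical): the client keeps transient view state itself and sends self-contained, stateless requests, so the server never has to hold a session object.

```javascript
// The client owns transient session state; nothing is stored in a server session.
var viewState = { page: 3, sortBy: "name" };

function loadPage() {
  // Each request is self-contained (RESTful): everything the server needs is
  // in the URL, so responses are cacheable and no server-side session is kept.
  var xhr = new XMLHttpRequest();
  xhr.open("GET", "/customers/?sort=" + viewState.sortBy +
                  "&page=" + viewState.page, true);
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      renderTable(JSON.parse(xhr.responseText)); // hypothetical client-side renderer
    }
  };
  xhr.send(null);
}
```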
Difficulties with the Client/Server Model on the Web
There are certainly difficulties in applying the client/server model to the web. Accessibility and search engine optimization can be particular challenges with the web services approach. Handling these issues may suggest a hybrid approach, in which some user interface generation is done on the server to create search-engine-accessible pages. However, a central architecture based on the client/server model, with extensions for handling search engines, may be a more solid and future-oriented technique for many complex web applications.
Our Efforts to Facilitate the Client/Server Model
SitePen is certainly not alone in working to facilitate client/server architecture. However, since I am familiar with the projects we help create, I did want to mention our approaches to the client/server model:
DWR – From inception, DWR has provided an excellent framework for building client-side user interfaces that easily connect with server-side business logic. DWR was years ahead of its time in establishing a framework that encouraged good client/server modeling, and it has a solid structure for interacting with Java business logic objects. DWR has continued to progress, providing bi-directional Comet-based communication (Reverse Ajax) and adding more interoperability capabilities as well.
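As a rough sketch of the DWR style (the class, method, and element names here are hypothetical): a plain Java object on the server is exposed through DWR's configuration, and the browser calls it through a generated JavaScript proxy, receiving the result asynchronously in a callback.

```javascript
// Server side (Java, exposed via DWR's dwr.xml configuration):
//   public class AccountService {
//     public Account getAccount(int id) { /* business logic, security checks */ }
//   }
// Client side: DWR generates a JavaScript proxy with the same name; every
// call is asynchronous and delivers its result to a callback.
AccountService.getAccount(42, function (account) {
  // Only data crosses the wire; the UI is assembled here in the browser.
  dwr.util.setValue("accountName", account.name);
});
```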
Dojo Toolkit – It should be obvious that building a good client-side user interface benefits from a good toolkit, and Dojo has long provided just that. However, Dojo is more than just a library and set of widgets; it provides real tools for building client/server applications. Dojo RPC provides tools for connecting to web services, and can even auto-generate service proxies from SMD definitions. Dojo Data provides a powerful API for interacting with a data model. Dojo has led the way with Comet technology, creating standards around browser-based two-way communication. Recently we have built the JsonRestStore, which allows one to connect to a REST web service and interact with it using the Dojo Data read and write APIs. This greatly simplifies the construction of user interfaces by simplifying the interaction between user interface and business logic, and encouraging standards-based communication that can easily be used by others. Furthermore, Dojo provides comprehensive tools for robust data-driven applications; even templating can be done on the client instead of the server with Dojo’s DTL support.
Using standards-based client/server communication has facilitated integration with server frameworks like Zend, jabsorb, and Persevere, and interoperability with other frameworks will be coming soon.
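A brief sketch of the JsonRestStore usage described above, assuming a hypothetical REST service at /Customer/; the store reads and writes through the standard Dojo Data APIs, translating them into RESTful JSON requests.

```javascript
dojo.require("dojox.data.JsonRestStore");

// Hypothetical REST service at /Customer/; the store layers the Dojo Data
// read and write APIs on top of plain RESTful JSON interaction.
var store = new dojox.data.JsonRestStore({ target: "/Customer/" });

store.fetch({
  query: "?type=preferred", // issued as GET /Customer/?type=preferred
  onComplete: function (customers) {
    var first = customers[0];
    store.setValue(first, "status", "contacted"); // change is queued locally
    store.save(); // sends the modified object back to the service
  }
});
```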
Cometd – Cometd provides real-time duplex communication between clients and servers. The distinguishing characteristic of the Cometd project, however, is its focus not only on achieving Comet-style duplex communication, but on doing so with an interoperable standard protocol, Bayeux. This is a quintessential client/server approach: any Cometd (Bayeux-implementing) server can interact with any Cometd/Bayeux client, and various client implementations can easily connect to a single server by using the Bayeux standard.
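A short sketch of Bayeux-style publish/subscribe using Dojo's Cometd client (the channel name and message shape are hypothetical):

```javascript
dojo.require("dojox.cometd");

// Connect to a Bayeux endpoint; any Bayeux-compliant server will do.
dojox.cometd.init("/cometd");

// Subscribe: the server pushes messages published on this channel to us.
dojox.cometd.subscribe("/chat/room1", function (message) {
  console.log("received:", message.data);
});

// Publish: every client subscribed to the channel receives this message.
dojox.cometd.publish("/chat/room1", { text: "Hello from the browser" });
```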
Persevere – Persevere is a recently launched project built with this service-oriented client/server approach. Persevere is a web object database and application server with RESTful HTTP/JSON interfaces, allowing applications to quickly be built on a database backend that can be directly and securely accessed through Ajax. Persevere is focused on providing a comprehensive set of web service interaction capabilities through standard, interoperable communication. Data can be accessed and modified with basic RESTful JSON interaction, clients can invoke methods on the server with simple JSON-RPC, and data can be queried with JSONQuery/JSONPath. With Dojo’s new REST data store and SMD-driven RPC services, Dojo clients can seamlessly build applications that interact with Persevere services through the Dojo APIs. Complex application logic can be added to the persisted objects in Persevere, facilitating service-oriented applications with a straightforward interface to the user interface code in the browser. Persevere is integrated with Rhino, so model and application logic can be written in JavaScript, providing a consistent language and environment across the client and server roles.
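A hedged sketch of the kinds of requests a Persevere client might make (the class, property, and method names are hypothetical): plain RESTful JSON for data access, a query expression in the URL, and a JSON-RPC envelope to invoke a server-side method.

```javascript
// RESTful JSON (illustrative, hypothetical Customer class):
//   GET  /Customer/42   -> {"id":"42", "name":"...", "balance": 120}
//   PUT  /Customer/42   <- the modified JSON object
//   POST /Customer/     <- a new JSON object
// JSONQuery, expressed in the URL:
//   GET  /Customer/?[?balance>100]
// JSON-RPC: invoke a method defined on a persisted server-side object.
var xhr = new XMLHttpRequest();
xhr.open("POST", "/Customer/42", true);
xhr.setRequestHeader("Content-Type", "application/json");
xhr.onreadystatechange = function () {
  if (xhr.readyState === 4 && xhr.status === 200) {
    var response = JSON.parse(xhr.responseText);
    console.log("result:", response.result); // JSON-RPC response envelope
  }
};
// A standard JSON-RPC request: method name, parameters, and a request id.
xhr.send(JSON.stringify({ method: "creditAccount", params: [100], id: 1 }));
```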
For more information about any of these open source projects, visit the SitePen Labs site.
Summary
As the web platform matures, as applications evolve toward richer, more interactive interfaces, and as web services increasingly interact, architecting web applications with an intelligent client/server model will become increasingly important. A properly designed client/server model provides a foundation for modular, adaptable, and interoperable applications equipped for future growth.