
Sunday, December 18, 2011

Cloud Controllers

The cloud is all about making new resources available to computing consumers. Not hardware consumers, but processing, memory, and storage consumers. This distinction, I think, is sometimes overlooked by those of us in the industry. These resources ultimately belong to the application; the customer doing the provisioning doesn't necessarily care about the underlying hardware. In the end, their job is to make sure the software they're administering runs effectively.

The availability of physical hardware, or in the cloud's case, virtual hardware, is how we're able to achieve optimal usage of deployed software systems.  Of course, as a consumer, I want to give my application everything it needs to perform.  Not just at the barely acceptable level, but at the screaming fast level.

If I'm to get any of these things from a cloud service, I've got to make sure that the resources my application needs are there ahead of time. So maybe I'll be proactive and give it more memory than it actually needs? The issue is that I'm looking at the application from outside the box. I can peek my head inside to get a general idea of what's happening and how I can improve the situation. But I get nothing more than a general idea. The question is, how can I really know what the application is expecting based on current conditions? What does it need? And can I automate this process without writing any code? Maybe, but I think we need to step back and look at cloud services and what they offer to the applications themselves, not just to the users who provision the environment.

The User Focus
Cloud infrastructure services have friendly control panels for customers. Control panels, at least in the context of a cloud environment, should hide some of the ugliness involved in provisioning new resources. A customer sees a limited set of choices in how they can deploy their application: select from a list of different memory profiles, select your required bandwidth, say how much storage you'll need. It's a form not all that different from what we're used to in any typical web application.

The end result of this process? The application is deployed with the hardware it needs. All the work involved in finding the right physical node on which to place the new virtual machine takes place under the covers.
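As a rough sketch, the whole interaction might boil down to something like the following. The field names and the submit_provisioning_request() function are hypothetical, not any real provider's API.

    # What the control panel's form might reduce to once submitted: a small
    # provisioning request handed off to the provider. Everything below is
    # illustrative; no real cloud API is being called.
    provisioning_request = {
        'memory_profile': '4GB',
        'bandwidth': '100Mbps',
        'storage': '200GB',
    }

    def submit_provisioning_request(request):
        # The provider picks a physical node and creates the virtual machine;
        # none of that complexity is visible from here.
        print('submitted:', request)

    submit_provisioning_request(provisioning_request)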

As with any application, this is a sound principle: take a complex task such as provisioning new virtual machine resources and encapsulate the complexity behind a user interface. This is what successful applications do well; they're successful because they're easy to use. There is one broad category of user here, and we're catering to them by making life easier in how they interact with the cloud.

The problem that I see, or oversight anyway, with this approach is one of priority. When we're designing software systems, one of the first activities is identifying who'll be using the system. So who uses cloud service providers? Well, folks who want to provision new applications without the overhead involved in allocating physical hardware to fit their needs. An opportunity I see with the cloud is automation. Not just a simplified interface for system administrators, but a means for deployed applications to start making decisions about what needs to happen in order to perform optimally.

The Application Focus
Some environmental changes that take place are obvious, like the number of registered users jumping from one hundred thousand to five hundred thousand. These changes are somewhat straightforward for the administrator to handle; they're not exactly critical to how the service will respond to demand over the next few hours. This type of environmental change takes place over a larger span of time, a duration suitable for humans to step in and relieve the situation. If we're seeing a growth trend in registered users, maybe we'd be smart to assume that we'll need a more robust collection of hardware in the near term.

Now, what about when the timeline of these events, the rising demand on our application, is compressed into something much smaller, like under an hour? If all the resources our application has available to store, compute, and transfer aren't enough to handle the current level of usage, then we'll see a change in behavior. Sorry, the users will see a change. But, thankfully, the advanced monitoring tools we deploy to the cloud beside our applications can easily tell us when the application is experiencing trouble and the cloud needs to send more virtual hardware to the rescue.

Even if this isn't an automated procedure, it's still trivial for the application's monitoring utilities to notify the administrator to go and provision another instance of the application server to cope with the request spike. In this scenario, there may only be a limited window in which users experience unacceptably poor response times. But this is often automated too; it doesn't take a system administrator to determine that there aren't enough available resources to fulfill the application's requirements under current conditions.
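As a hedged sketch of that kind of automation, assuming a hypothetical provisioning call and metric source rather than any real provider's API, the check might look something like this:

    # Periodically compare a response-time metric against a threshold and ask
    # for another application server instance when it's exceeded. Both
    # average_response_time() and provision_instance() are stand-ins.
    import random
    import time

    RESPONSE_TIME_LIMIT = 2.0  # seconds we consider unacceptably poor

    def average_response_time():
        # A real check would query the monitoring tools deployed beside the
        # application; a random value stands in for that here.
        return random.uniform(0.1, 3.0)

    def provision_instance():
        # A real version would call the provider's API, or simply notify the
        # administrator if this step isn't automated.
        print('requesting one more application server instance')

    def watch(interval=60):
        while True:
            if average_response_time() > RESPONSE_TIME_LIMIT:
                provision_instance()
            time.sleep(interval)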

Google App Engine is a good example of how something like this is automated. Each application deployed to App Engine has what are called serving instances. These are decoupled instances of the application that don't share state with one another. As the load increases, so does the number of serving instances, to help cope with the peak. Just as importantly, as the peak slowly winds down and the pattern of user behavior returns to normal, App Engine kills off serving instances that are no longer necessary.

There are many ways to automate application components to help cope with what users are doing, and to prevent one user from sucking available CPU cycles away from others. Provisioning new serving instances within the cloud environment, for example. But does this really take into account what the application is actually doing and what's likely to change in the imminent future? To do that, code inside the application needs to take samples of internal state and propagate those samples outward, toward the outer shell of the application, and perhaps even into the cloud operating environment in some circumstances.
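A minimal sketch of what that sampling might look like, assuming the application simply publishes its internal measurements to a location the surrounding environment can read; the metric names and publish target are illustrative, not part of any cloud API.

    # Sample internal state that only the application can see and push it
    # outward, toward the application's outer shell.
    import json
    import queue
    import threading
    import time

    work_queue = queue.Queue()

    def sample():
        # Measurements taken from inside the application.
        return {
            'queued_jobs': work_queue.qsize(),
            'worker_threads': threading.active_count(),
            'timestamp': time.time(),
        }

    def publish(data, path='/tmp/app-metrics.json'):
        # Propagate the sample outward; a cloud-aware version might send this
        # to the operating environment instead of writing a local file.
        with open(path, 'w') as handle:
            json.dump(data, handle)

    if __name__ == '__main__':
        publish(sample())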

The trouble isn't that it's impossible to take the inner workings of our applications into account; it's that doing so isn't a high priority for cloud service providers. It's easy enough to alter applications deployed to the cloud to take measurements and make them available to other services that could react to those measurements. The trouble is, there is simply too much code to write: too much of the burden is put on the customer, rather than on the service provider, to offer APIs that help applications operate effectively in the cloud.

Monday, October 26, 2009

GUI Controller Design

The introduction of graphical user interfaces, or GUIs, has made a huge impact on the way humans interact with computer software. The command line, or terminal, interface is intimidating to many people. You can't exactly do anything intuitively with the command line unless you have several years' experience using it. With a GUI, widgets, the components that make up the screen displayed to the user, are designed in such a way that users can infer how to interact with them. For instance, with a button widget in a GUI, it is, more often than not, obvious that the button should be clicked. In addition to the actions the user must take in order to interact with the interface, the GUI allows descriptive text to be placed easily. This helps the user determine why this button should be clicked instead of that one.

On the development side of things, there is no shortage today of GUI libraries available for use. Most of these libraries are available free of charge as open source software. Also very popular these days is the web browser as an application GUI platform. This is simply because most machines have a web browser capable of rendering HTML. It makes sense to take this approach to reach the widest audience possible.

The GUI library of choice, be it Qt or the web browser, is just one layer in the GUI design structure. In fact, it is the lowest level. Below the GUI library layer, at a lower level still, are all the aspects that the application developer doesn't want to deal with. What about the opposite direction in the logical layout of the GUI design structure? The next layer up could potentially be the application controlling layer itself. In many applications, this is in fact how components are layered. But this may not always be ideal. It can be beneficial, for design purposes, to implement a facade-type abstraction between the application logic and the various GUI widgets that make up the application GUI. Illustrated below are potential layers that might be used to tie the GUI to the application itself.



Here, the outermost layer is the App Controllers. This is the heart of the application logic; the brain of the program lives here. Next, we have the GUI Controllers. This is another abstraction created by the developers for interacting with the GUI library. Finally, at the lowest layer sits the GUI Lib. With this layout, the application logic never interacts directly with the GUI library, which is an ideal design trait. GUI controllers created by the developers of the application offer more flexibility in almost every way imaginable.

Firstly, the application logic doesn't need to concern itself with assembling the GUI. Chances are that a given GUI library isn't going to provide the exact screens you want to display to your users. It does, however, provide all the widgets required to give the GUI a consistent look and feel. It is the responsibility of the GUI controlling layer to assemble these GUI widgets in a coherent manner. Again, the application logic only needs to know that it needs to display something to the user; it asks the GUI controlling layer to carry out this task faithfully. There is also the potential for technology independence. If the application controlling layer interacts directly with the GUI library, modifying the application to support another GUI library is going to be nearly impossible. If, however, this is the responsibility of the GUI controlling layer, it suddenly becomes feasible. Not only does this help with technology independence, but also with platform portability. Chances are that subtle differences in how the widgets are created and displayed will be necessary across platforms. This should be handled by the GUI controlling layer, not the application layer, which should function as-is on any platform.
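As a minimal sketch of this layering, with Tkinter standing in purely as "the GUI library"; the class and method names below are made up for illustration, not taken from any framework:

    # The GUI controlling layer is the only code that touches the GUI library;
    # the application controlling layer only says what should be displayed.
    import tkinter as tk

    class GuiController:
        """GUI controlling layer: assembles raw widgets into screens."""

        def __init__(self):
            self._root = tk.Tk()

        def show_message(self, title, text, on_ok):
            # Assemble the library's widgets into a coherent screen on behalf
            # of the application layer.
            self._root.title(title)
            tk.Label(self._root, text=text).pack(padx=20, pady=10)
            tk.Button(self._root, text='OK', command=on_ok).pack(pady=10)

        def run(self):
            self._root.mainloop()

    class AppController:
        """Application controlling layer: knows what to show, not how."""

        def __init__(self, gui):
            self._gui = gui

        def start(self):
            self._gui.show_message('Welcome', 'Hello from the application layer',
                                   on_ok=self.handle_ok)
            self._gui.run()

        def handle_ok(self):
            print('user acknowledged the message')

    if __name__ == '__main__':
        AppController(GuiController()).start()

Swapping Tkinter for Qt or a browser-based front end would only touch GuiController; AppController stays the same, which is the technology independence described above.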

Illustrated below is an application controller and a GUI controller interacting. The idea here is to show that the application controllers do not interact directly with the GUI library. In addition, the application controller serves as a communication channel to other lower layers. For instance, here, the page widget data is retrieved from the database by the application controller. The application controller then sends a message to the GUI controller to construct a GUI component. It sends data retrieved from the database as part of the message.
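A separate, even smaller sketch of that interaction in code, with an in-memory dictionary standing in for the database; all of the names here are hypothetical:

    # The application controller fetches the page widget data from the
    # "database" and passes it along in a message to the GUI controller,
    # which builds the actual component. The app layer touches no GUI library.
    FAKE_DATABASE = {'page_widget': {'title': 'Home', 'body': 'Welcome back'}}

    class GuiController:
        def build_page(self, data):
            # A real GUI controller would assemble library widgets here;
            # this one just reports what would be constructed.
            print('constructing page widget:', data['title'], '-', data['body'])

    class AppController:
        def __init__(self, gui):
            self._gui = gui

        def open_page(self):
            data = FAKE_DATABASE['page_widget']   # retrieved from the database
            self._gui.build_page(data)            # message sent to the GUI layer

    AppController(GuiController()).open_page()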

Wednesday, December 3, 2008

New tools in the TG controllers module

After taking a look at some changes made to the TurboGears controller module in trunk, I must say I'm impressed with the improvements over the current 1.0.x branch.

The first change I noticed was that all classes are now derived from the WSGIController class from the Pylons Python web framework. Also new, and the most interesting in my view, is the hook processing mechanism implemented by the DecoratedController class. What this means is that developers writing TurboGears applications can define hooks that are processed in any of these controller states:
  • before validation
  • before the controller method is called
  • before the rendering takes place
  • after the rendering has happened
If nothing else, I think this will add great value in monitoring the state transitions in larger TurboGears applications. Some requests can be quite large, especially during development, and it is handy to know where these requests are failing. You can now easily log attribute values of your controller instance before validation takes place. This could give some insight into why validation is failing with seemingly valid values. These hook processors also allow for pre- and post-processing at every state transition within the controller life-cycle.
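For example, a hook that logs the raw request parameters before validation might be wired up roughly like this; the import paths and the hook signature are my best guess at the trunk API, so treat them as assumptions rather than the definitive interface:

    # A hook attached to a controller method so the raw request parameters
    # are logged before validation runs.
    import logging

    from tg import expose
    from tg.controllers import TGController
    from tg.decorators import before_validate

    log = logging.getLogger(__name__)

    def log_raw_params(remainder, params):
        # Called before validation, so params still holds the raw values.
        log.debug('raw params before validation: %r', params)

    class RootController(TGController):

        @expose()
        @before_validate(log_raw_params)
        def save(self, **kw):
            return 'saved'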

It looks like using a controller is not all that different from the current TurboGears. Simply extend the TGController class and expose your methods as needed.
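For instance, a minimal controller might look something like this (the exact import locations are an assumption based on how the trunk was shaping up):

    from tg import expose
    from tg.controllers import TGController

    class RootController(TGController):

        @expose()
        def index(self):
            # Exposed methods are reachable over HTTP; with no template
            # configured, the returned string becomes the response body.
            return 'Hello from TurboGears'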