I was having trouble understanding the various states that a given Backbone model, and all its attributes, might be in. Specifically, how does an attribute change and trigger the change event for that attribute? The change event isn't triggered when the model is first created, which makes sense because the model had no previous state. In other words, creating the model also creates a state machine for the model's attributes. However, this can be confusing if you're adding a new attribute to a model.
Showing posts with label event.
Monday, July 7, 2014
How Change Events Are Triggered In Backbone
Thursday, June 19, 2014
Cleanup Events With Backbone Routers
Backbone routers allow you to listen to route events. These are triggered when one of the defined paths in the router matches the URL. For example, the paths of your application can be statically defined and named in the routes object. The route events triggered by the router use the route name — route:home or route:help. What's nice about this is that different components can subscribe to router events. This means I don't have to define all my route handling code in one place — each component of my application is able to encapsulate its own routing logic. One piece of the puzzle that's missing is cleaning up my views once the route that created them is no longer active.
Monday, December 9, 2013
Responding To Dialog Open and Close Events
The jQuery UI dialog widget, just like every other widget, triggers events to reflect changes in state. For example, when a dialog is opened, it enters an "open" state, and the dialogopen event is how this change is communicated to the outside world. This means that any element can listen to these events, including the body element. Listening for dialog events after they've bubbled up the DOM is handy for implementing generic behavior.
Thursday, November 29, 2012
jQuery UI Widgets, Events, and Cleanup
If you're implementing a custom widget, or extending an existing widget, using the widget factory, care must be taken to clean up when the widget is destroyed. This includes any event handling. The jQuery UI base widget has an _on() function, inherited by all widgets, that helps not only with binding event handlers but also with removing them. That is, you don't need to explicitly unbind the handlers at destroy time if you're using _on() — something I'm apt to forget.
Monday, June 27, 2011
Monitoring CPU Events With Python
I decided to carry out a little experiment in Python - watching for CPU events. By CPU event, I'm referring to a change in utilization my application might find interesting - large spikes, for example. To do this, we need to monitor the CPU usage. However, my earlier approach to doing this wasn't exactly optimal. For one thing, it only works on Linux. This excellent explanation pointed me in a new direction, so I've adapted it for my purposes of monitoring CPU events.
If I'm able to monitor CPU usage, I'll need something that'll run periodically in the background, checking for changes. My basic need is this - I have a main application class I want notified when the CPU load meets a given threshold. This should be relatively straightforward, especially since we've got a handy times() function that'll give us everything we need. Here is an example of what I came up with.
from threading import Thread
from time import sleep
from os import times

class CPUMonitor(Thread):
    def __init__(self, frequency=1, threshold=10):
        super(CPUMonitor, self).__init__()
        self.daemon = True
        self.frequency = frequency
        self.threshold = threshold
        self.used, self.elapsed = self.cputime
        self.cache = 0.0
        self.start()

    def __repr__(self):
        return '%.2f %%' % self.utilization

    def run(self):
        # Poll for CPU events in the background.
        while True:
            self.events()
            sleep(self.frequency)

    def events(self):
        if self.utilization >= self.threshold:
            self.jump()

    def jump(self):
        pass

    @property
    def cputime(self):
        # CPU time consumed (user + system, including children),
        # and elapsed real time, both from os.times().
        cputime = times()
        return sum(cputime[0:4]), cputime[4]

    @property
    def utilization(self):
        used, elapsed = self.cputime
        try:
            result = (used - self.used) / (elapsed - self.elapsed) * 100
        except ZeroDivisionError:
            result = self.cache
        self.used = used
        self.elapsed = elapsed
        self.cache = result
        return result

class App(CPUMonitor):
    def __init__(self):
        super(App, self).__init__()
        self.power = 1000
        while True:
            try:
                print 'APP: Computing with %s...' % self.power
                10 ** self.power
                sleep(0.1)
                self.power += 1000
            except KeyboardInterrupt:
                break

    def jump(self):
        print 'CPU: Jumped - %s' % self
        self.power = 1000

if __name__ == '__main__':
    app = App()
The basic idea is this - when the CPU utilization reaches 10%, my application is notified, and can adjust accordingly. The CPUMonitor class is meant to extend any application class I might come up with.
CPUMonitor is a thread that runs in the background. By default, it checks for CPU load changes every second. If the threshold is matched, the application is notified by calling jump(). Obviously the application needs to provide a jump() implementation.
In my very simple example scenario, App extends CPUMonitor. So when the App class is first instantiated, the CPU monitor runs behind the scenes. jump() is only called if the resources are being over-utilized. The great thing about this is that I decide what over-utilization is. Maybe 25% is perfectly acceptable to the operating system, but my application might not think so. This value, along with the polling frequency, can be altered on the fly.
Try giving this a go; it shouldn't get past 12% utilization or so. You could also play around with the frequency and threshold settings. I've only implemented one event. It wouldn't be too difficult to extend this to, say, trigger a "changed by X" event.
Wednesday, March 31, 2010
jQuery Proxy
The jQuery.proxy() method allows event handlers to run with the this object bound to something other than the element that received the event. That is, you can change the context of a specified function. Some view this technique as somewhat dangerous, which is probably true if it isn't treated with care.
If you're a developer coming from another object-oriented programming language, you're probably used to the this object referring to the current object or self. The whole purpose of the jQuery.proxy() method is to change that meaning.
It can be quite powerful in situations where the event doesn't provide all the data required by the event handler. Or maybe the original event object provides nothing of value at all, and it is beneficial to use jQuery.proxy() everywhere. In that case you would only be updating some application model instead of reading the event properties; all you would care about is that the event occurred. Either way, it's a useful feature to have available.
Labels: event, handle, javascript, jquery, proxy
Thursday, March 11, 2010
Handling Callback Variables
With Javascript applications that make asynchronous Ajax requests, does it make sense to store a global variable that indicates the callback that should be executed? That is, depending on the current state of the application, a different callback function may need to execute for the same API call. Should a global variable be set before the request is made and checked each time by the response?
That doesn't feel right to me. Neither does sending application data along with the API request just to be used by the callback. That would mean sending application state to the server which isn't a good thing as far as I'm concerned.
I like the idea of listening for certain events that the callback might emit during execution. The callback might emit a certain type of event each time it is run. The events are always emitted but the application may not always be listening. When the application is in a certain state, it will be listening. The handlers of these events could be considered secondary callbacks.
Labels: ajax, application, callback, event, javascript, state
Tuesday, November 17, 2009
Twisted System Events
One of the key features of Twisted, the event-driven Python networking framework, is the ability to define reactors that react to asynchronous events. One concept of the Twisted reactor is the system event. The ReactorBase class is inherited by all reactor types in Twisted, as the name suggests. It is this class that provides all other reactors with a system event processing implementation.
An event, in the context of the Twisted reactor system, has three phases, or states. These states are "before", "during", and "after". What this provides for developers is a means to conceptually organize triggers that are executed when a specific event is fired. The "before" state should execute triggers that verify certain data or perform setup tasks. Anything that would be considered a pre-condition is executed here. The "during" state is the overall goal of the event. Triggers executed in this state should do the heavy processing. Conceptually, this is the main reason a trigger was registered with the specified event type in the first place. Finally, the "after" state executes triggers that perform post-condition testing or clean-up tasks.
Illustrated below are the various states that a Twisted system event will go through during its lifetime. The transitions between states are quite straightforward. When there are no more triggers to execute for the current state, the next state is entered.
Event triggers are registered with specific event types by invoking the ReactorBase.addSystemEventTrigger() method. This method accepts event state, callable, and event type parameters. The callable can be any callable Python object.
The type of event that triggers can be registered to can be anything. The event type is only the key for a stored event instance. The _ThreePhaseEvent class is instantiated if not already part of the reactor. That is, if a trigger has already been registered for the same event type, an event instance has already been created. The _ThreePhaseEvent instance for each event type is responsible for executing all event triggers in the correct order. Using the Twisted system event functionality means that dependencies between event states may be used to achieve the desired functionality.
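To make the three-phase ordering concrete, here is a minimal sketch in the spirit of _ThreePhaseEvent. This is illustrative only, not Twisted's actual implementation: triggers are registered against one of the three states, and firing the event runs them state by state.

```python
class ThreePhaseEvent(object):
    """Sketch of a three-phase system event (names are illustrative)."""
    PHASES = ('before', 'during', 'after')

    def __init__(self):
        self.triggers = {phase: [] for phase in self.PHASES}

    def add_trigger(self, phase, callable_, *args, **kw):
        # Register a trigger for one of the three event states.
        if phase not in self.PHASES:
            raise ValueError('unknown phase: %r' % phase)
        self.triggers[phase].append((callable_, args, kw))

    def fire(self):
        # Run every trigger, phase by phase, in registration order.
        for phase in self.PHASES:
            for callable_, args, kw in self.triggers[phase]:
                callable_(*args, **kw)

calls = []
shutdown = ThreePhaseEvent()
# Registration order doesn't matter; phase order does.
shutdown.add_trigger('after', calls.append, 'cleanup')
shutdown.add_trigger('before', calls.append, 'precondition')
shutdown.add_trigger('during', calls.append, 'main work')
shutdown.fire()
print(calls)  # ['precondition', 'main work', 'cleanup']
```

Note how the "before" trigger runs first even though it was registered second; the phase, not the registration order, decides when a trigger executes.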
Thursday, October 29, 2009
Gaphor Idle Threads
The Gaphor UML modeling tool written in Python uses GTK as its user interface library. GTK is a good choice as it is portable across many platforms and has a nice feature set. What is also nice about the GTK library, from a development perspective, is that it is fairly straightforward to handle events that are triggered from the user interface. Such events may be user-generated, such as mouse clicks on widgets. Other events may be generated by the widgets themselves. Either way, adding a handler for any of these events is trivial. Using gobject, developers can also add handlers that are executed in between events, such as when the event loop is idle.
In any GTK application, there is a main event loop that must be instantiated within the application code. This is going to be one of the very first actions executed because without it, there will be no responses to GTK events. Every time an event is triggered inside a GTK main event loop, the event instance is placed in a pending event queue. Once the event has been processed, or handled, it is no longer considered pending and is removed from the queue. What this means is that the GTK main loop can be viewed, at a high level, as having two distinct states: pending and free. These states are illustrated below.
Here, the initial state represents the instantiation of the GTK main event loop while the final state represents the termination of the main loop. The termination often means that the user has exited the application successfully, but could also mean that the application has exited erroneously. Regardless, no GTK events will be processed once the main loop has exited, even if the containing application has not exited.
As illustrated, the GTK main loop has two states while running and two transitions between them. The main event loop transitions to the pending state when there are one or more pending events, and to the free state when there are zero pending events.
Gaphor defines an idle thread class that makes good use of all this GTK event machinery. The GIdleThread class uses gobject.idle_add() to add a callback to the GTK main event loop. This callback is only executed when there are zero pending events. Actually, it will still execute if there are pending events with a lower priority but that doesn't necessarily concern the concept here. The key concept is that the callbacks created by GIdleThread are only executed when the GTK main loop is idle. The GIdleThread class is illustrated below.
So the nagging question from developers is, why add this abstraction layer on top of the gobject.idle_add() function? Simply put, the GIdleThread class is used to assemble queues when the GTK main loop isn't busy. The obvious benefit here being that queues of arbitrary size can be assembled without sacrificing responsiveness to the end user.
An example use of this class is to read and parse data files. The generator function that yields data is passed, along with the queue that will eventually contain all the parsed data to the GIdleThread constructor. This abstraction also provides the thread-like feeling for developers that use it. Although not a real thread, it looks and behaves like one and is ideal for constructing queues.
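The pattern is easy to sketch without GTK. Below is a hypothetical stand-in (not Gaphor's actual GIdleThread code): an idle callback consumes one generator item per tick into a queue, returning True to mean "schedule me again", which is exactly the contract gobject.idle_add expects of its callbacks.

```python
from collections import deque

class IdleQueueBuilder(object):
    """Sketch of the GIdleThread idea: build a queue one generator
    item per idle callback (names are illustrative, not Gaphor's)."""
    def __init__(self, generator):
        self.generator = generator
        self.queue = deque()

    def idle_callback(self):
        # Consume one item; True means "call me again when idle".
        try:
            self.queue.append(next(self.generator))
            return True
        except StopIteration:
            return False

def run_idle_loop(callback):
    # Stand-in for the GTK main loop invoking an idle callback
    # whenever no higher-priority events are pending.
    while callback():
        pass

builder = IdleQueueBuilder(iter(range(5)))
run_idle_loop(builder.idle_callback)
print(list(builder.queue))  # [0, 1, 2, 3, 4]
```

In the real thing, run_idle_loop is the GTK main loop itself, so the queue fills up only while the application has nothing more urgent to do, keeping the UI responsive.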
Tuesday, October 13, 2009
jQuery Custom Events
One of the more powerful features of the jQuery javascript toolkit is the ability to create custom event types. In addition to the built-in events that are triggered by standard DOM elements, developers have the ability to create custom events. These events that are defined by developers using jQuery are not all that different from the standard DOM events. The main difference being that the developers can have control over the event data and when they are triggered.
The custom jQuery events can be created using the jQuery.Event() function. The one required parameter of this function is the event name. The instance of the custom event is returned. Another parameter that might be passed to the function is an object that contains the event data. However, an event instance created by jQuery.Event() can also have attributes set on it directly. Either way, these values will be passed along with the event once it has been triggered.
Event driven programming is a very useful model. Especially when the application is user interface centric. Here, the application consists almost exclusively of events. The user does something, some event is fired and some handler acts accordingly. With javascript applications, there is also the prospect of triggering events when data arrives from some API request. In fact, without an event to notify the application that the data has arrived, the application would be considered synchronous and would not respond well to user interaction.
Custom javascript events created by developers can be broadcast across the entire application or sent as a signal to a specific DOM element. The more useful method is to broadcast the event because it allows any application component to subscribe to the event. This offers flexibility and reduces the amount of code needed to accomplish a specific task.
Labels: custom, event, javascript, jquery, userinterface
Friday, July 3, 2009
Subscribing To Event Subscriptions In Python
New in the latest version of the boduch Python library is the Subscription class. This abstraction is used to represent active event subscriptions held by event handles. A Subscription instance ties together the Event class and the Handle class, the Handle providing the callback functionality that is executed when an event the handle has subscribed to takes place. As with previous versions, the event handle still uses the subscribe() function to subscribe to particular events. However, in previous versions, this function didn't actually return anything. The function will now return a Subscription instance. This functionality was added to give developers an easier way to reference which events will cause a given behavior to take place. The following is an example of a Subscription instance being returned.
#Example: subscribing to a boduch Set event.

#Import required objects.
from boduch.event import subscribe, EventSetPush
from boduch.handle import Handle
from boduch.data import Set

#Simple handle.
class MyHandle(Handle):
    def __init__(self, *args, **kw):
        Handle.__init__(self, *args, **kw)

    def run(self):
        print "Running my handle."

if __name__ == "__main__":
    #Create a new subscription instance by subscribing to the event.
    print "Subscribing"
    sub = subscribe(EventSetPush, MyHandle)
    print "Subscribed", sub
    #Make sure the simple handle works.
    Set().push("data")
In the example above, we define a simple event handle called MyHandle. We then subscribe this handle to the EventSetPush event to instantiate a new Subscription instance. Each Subscription instance holds a reference to both the handle and the event that the handle has subscribed to. Additionally, Subscription instances also define behavior. Since a Subscription instance holds a reference to the given event, we can use this instance to build further subscriptions for that event. The core event handles inside the library already define and expose Subscription instances, which can be used to create subscriptions for new event handles, as the example below illustrates.
#Example: subscribing to a boduch Set event via subscription.

#Import required objects.
from boduch.subscription import SubSetPush
from boduch.handle import Handle
from boduch.data import Set

#Simple handle.
class MyHandle(Handle):
    def __init__(self, *args, **kw):
        Handle.__init__(self, *args, **kw)

    def run(self):
        print "Running my handle."

if __name__ == "__main__":
    #Create a new subscription instance by subscribing to the event.
    print "Subscribing"
    sub = SubSetPush.subscribe(MyHandle)
    print "Subscribed", sub
    #Make sure the simple handle works.
    Set().push("data")
Monday, June 15, 2009
Combining Multiprocessing And Threading
In Python, there are two ways to achieve concurrency within a given application: multiprocessing and threading. Concurrency, whether in a Python application or an application written in another language, often coincides with events taking place. These events can be written directly in code much more effectively when using an event framework. The basic need of the developer using such a framework is the ability to publish events; in turn, things happen in response to those events. Now, what the developer most likely isn't concerned with is the concurrency semantics involved with these event handlers. The circuits Python event framework will take care of this for the developer. What is interesting is how the framework manages the concurrency method used: multiprocessing or threading.
With the multiprocessing approach, a new system process is created for each logical thread of control. This is beneficial on systems with more than one processor because the Python global interpreter lock isn't a concern, giving the application the potential to achieve true concurrency. With the threading approach, a new system thread, otherwise known as a lightweight process, is created for each logical thread of control. For applications using this approach, the Python global interpreter lock is a factor. On systems with more than one processor, true concurrency is not possible within the application itself. The good news is that both approaches can potentially be used inside a given application. There are two independent Python modules, one for each method, and the abstractions inside these modules share nearly identical interfaces.
The circuits Python event framework will use either the multiprocessing module or the threading module. The framework attempts to use the multiprocessing approach to concurrency in preference to the threading module. The approach to importing the required modules and defining the concurrency abstraction is illustrated below.
As you can see, the core Process abstraction within circuits is declared based on what modules exist on the system. If multiprocessing is available, it is used. Otherwise, the threading module is used. The only drawback to this approach is that as long as the multiprocessing module is available, threads cannot be used. Threads may be preferable to processes in certain situations.
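The module-selection trick can be sketched like this (a simplified version of the idea, not circuits' actual source): try to import from the multiprocessing module first and fall back to threading, aliasing a common Process name either way. The HAS_MULTIPROCESSING flag is my own illustrative addition.

```python
try:
    # Prefer real processes when the multiprocessing module exists.
    from multiprocessing import Process
    HAS_MULTIPROCESSING = True
except ImportError:
    # Fall back to lightweight threads; Thread's interface (start(),
    # join(), is_alive(), daemon) is close enough to alias it as Process.
    from threading import Thread as Process
    HAS_MULTIPROCESSING = False

def work(label):
    # The target callable runs in whichever unit of concurrency we got.
    print('hello from a %s' % label)

if __name__ == '__main__':
    worker = Process(target=work, args=(Process.__name__,))
    worker.start()
    worker.join()
```

Because both classes expose the same lifecycle methods, the rest of the application never needs to know which one it is using, which is exactly the uniformity the drawback mentioned above trades on.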
Wednesday, June 10, 2009
Sending Django Dispatch Signals
In any given software system, there exist events that take place. Without events, the system would in fact not be a system at all. Instead, we would have nothing more than a schema. In addition to events taking place, there are often, but not always, responses to those events. Events can be thought of abstractly or modeled explicitly. For instance, the method invocation "obj.do_something()" could be considered an invocation event or a "do something" event. This would be an abstract way of thinking about events in an object oriented system. Developers may not even think of a method invocation as an event taking place. However, the abstraction is there if needed. A method invocation is an event when it needs to be because it has a location in both space and time. Events can also be modeled explicitly in code. This is the case when designing a system that employs a publish-subscribe event system. Events are explicitly published while the responses to events can subscribe to them. Another form of event terminology that is often used is to replace event with signal. This is the terminology used by the Django Python web application framework dispatching system.
Django defines a single Signal base class in dispatcher.py, and it is a crucial part of the dispatching system. The responsibility of the Signal class is to serve as a base class for all signal types that may be dispatched in the system. In the Django signal dispatching system, signal instances are dispatched to receivers. Signal instances can't just spontaneously decide to send themselves; there has to be some motivating party, and in the Django signal dispatching system this concept is referred to as the sender. Thus, the three core concepts of the Django signal dispatching system are signal, sender, and receiver. The relationship between the three concepts is illustrated below.
Senders of signals may dispatch a signal to zero or more receivers. The only way that zero receivers receive a given signal is if zero receivers have been connected to that signal. Additionally, receivers, once connected to a given signal, have the option of only accepting signals from a specific sender.
So how does one wire the required connections between these signal concepts in the Django signal dispatching system? Receivers can connect to specific signal types by invoking the Signal.connect() method on the desired signal instance. The receiver being connected is passed to this method as a parameter. If the receiver is to accept these signals only from specific senders, the sender can also be specified as a parameter to this method. Once connected, the receiver is activated whenever a signal of that type is sent by a sender. A sender sends a signal by invoking the Signal.send() method, passing itself as a parameter. This parameter is required even though the receiver may not necessarily care who sent the signal. However, it is good practice not to take chances here: if the sender is always specified consistently from the signal-sending point of view, there is a new level of flexibility on the receiving end. Illustrated below is a sample interaction between a sender and a receiver using the Django signal dispatching system to send a signal.
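A minimal sketch of this interaction follows. To keep the example self-contained it uses a stripped-down stand-in for django.dispatch.Signal rather than Django itself; the Signal class, the on_save receiver, and the "ArticleModel" sender name are all illustrative assumptions, though the connect()/send() shape mirrors the real API described above.

```python
# Minimal stand-in for django.dispatch.Signal, illustrating the
# connect/send interaction (not Django's actual implementation).
class Signal:
    def __init__(self):
        self._receivers = []  # list of (receiver, sender-filter) pairs

    def connect(self, receiver, sender=None):
        # sender=None means "accept this signal from any sender".
        self._receivers.append((receiver, sender))

    def send(self, sender, **kwargs):
        # Dispatch to every receiver whose sender filter matches,
        # collecting (receiver, response) pairs as Django does.
        responses = []
        for receiver, wanted in self._receivers:
            if wanted is None or wanted is sender:
                responses.append((receiver, receiver(sender=sender, **kwargs)))
        return responses

post_save = Signal()

def on_save(sender, **kwargs):
    return "saved by %s" % sender

post_save.connect(on_save)                     # accept from any sender
results = post_save.send(sender="ArticleModel")
```

Note how the signal instance itself owns both the subscription list and the dispatch loop, which is the design point discussed next.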
The fact that the signal instances themselves are responsible both for connecting receivers to signals and for the actual sending of the signals may seem counter-intuitive at first, especially if one is used to working with publish-subscribe style event systems, where the publishing and subscribing mechanisms are independent from the publisher and subscriber entities. In the end, however, the same effect is achieved.
Labels: dispatch, django, event, python, signal, system, webapplication, webframework
Tuesday, March 24, 2009
Why we need a thread-safe publish/subscribe event system
Publish-subscribe event systems are a fairly common design pattern in modern computing. The concept becomes increasingly powerful in distributed systems, where many nodes can subscribe to an event or topic emitted from a single node. The name publish-subscribe, or pub-sub, is used because it has a tight analogue in the real world: people with magazine or newspaper subscriptions receive updates when something is published. Because of this analogue, developers are more easily able to reason about events and why they occurred in complex software systems. In any given software system, some code will need to react to one or more events, ranging from anything as simple as a mouse click to a complete database failure. The publish-subscribe pattern is highly extensible because any number of observers may subscribe to a single event. Subscriptions can also be canceled, so as to offer architectural scalability in both directions, up and down.

One bottleneck in a publish-subscribe framework occurs when the publishing object needs to wait until all subscribers have finished reacting to the event. In some cases this is unavoidable, such as when the publisher expects a value to be returned from one of the subscribers. In other cases, however, the publisher doesn't care about the subscribers or how they react to a published event. In a localized (as opposed to distributed) publish-subscribe system, we could use threads for subscribers. If we were to build and use a framework such as this, where subscriptions react to events in separate threads of control, we would also need the ability to turn threading off and use the framework in the same way and have it still be functional, because threading is simply not an option in every scenario.
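The idea of a pub-sub dispatcher with a threading switch can be sketched in a few lines. This is an illustrative toy, not the boduch library; the EventSystem class, its subscribe()/publish() names, and the join-at-the-end behaviour are all assumptions made for the example.

```python
import threading

# Toy publish-subscribe system with an on/off threading switch.
class EventSystem:
    def __init__(self, threaded=False):
        self.threaded = threaded
        self.subscribers = {}          # event name -> list of handlers
        self.lock = threading.Lock()   # guard concurrent subscriptions

    def subscribe(self, event, handler):
        with self.lock:
            self.subscribers.setdefault(event, []).append(handler)

    def publish(self, event, *args):
        threads = []
        for handler in self.subscribers.get(event, []):
            if self.threaded:
                # Each handler reacts in its own thread of control.
                t = threading.Thread(target=handler, args=args)
                t.start()
                threads.append(t)
            else:
                handler(*args)  # same thread: the publisher waits
        for t in threads:       # wait for threaded handlers to finish
            t.join()

hits = []
bus = EventSystem(threaded=True)
bus.subscribe("click", lambda x: hits.append(x))
bus.publish("click", 42)
```

The same code runs identically with threaded=False, which is the property argued for above.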
The boduch Python library offers a publish-subscribe event system such as this. The library is still in its infancy, but it has the ability to run subscription event handles in new threads of control. The threading capability can also be switched on and off; the same code using the library can be run in either mode. Events are declared by specializing a base Event class. Likewise, event handles, or subscriptions, are declared by specializing a base Handle class. Developers can then subscribe to an event by passing an event class and a handle class to the subscribe function. Multiple handles may be listening to a given event type and, if running in threaded mode, each handle will start a new thread of control. There are limits on the number of threads allowed to run at a given time, but this can be adjusted either manually or programmatically. When running in threaded mode (or non-threaded mode, for that matter), published events may be specified as atomic. This really only has an effect when the event system is running in threaded mode, because it forces all handles for that particular event to run in the publisher's thread. When running in non-threaded mode, atomic publications are idempotent.
As mentioned earlier, there are several limitations to the boduch library since it is still in its infancy as a project. For instance, there is no way to specify a filter for event subscriptions; subscribers may want to react to event types based on data contained within the event instance. In turn, there is no way to track the source object that emitted the event. Finally, there is no real guarantee that proper ordering will be preserved when running in threaded mode, although this can be worked around. I haven't actually encountered a scenario where the ordering of instructions has been defective when running in threaded mode. This doesn't mean it isn't possible; I actually hope I do some day, so I can incorporate more built-in safety into the library.
Saturday, March 14, 2009
Using predicates with the boduch library
With the latest release of the boduch Python library, there are two new predicate classes available: Greater and Lesser. These predicates do exactly what their names say. The Greater predicate will evaluate to true if the first operand is greater than the second. The Lesser predicate will return true if the first operand is less than the second. Here is an example of how we would use these predicates.
#Example; boduch predicates
from boduch.predicate import Greater, Lesser

if __name__=="__main__":
    is_greater=Greater(2,1)
    is_lesser=Lesser(1,2)
    if is_greater:
        print "is_greater is true."
    else:
        print "is_greater is false."
    if is_lesser:
        print "is_lesser is true."
    else:
        print "is_lesser is false."
Here, we have two predicate instances, is_greater and is_lesser. The is_greater variable is an instance of the Greater predicate and will evaluate to true in this case. The is_lesser variable is an instance of the Lesser predicate and will evaluate to true in this case.
With the latest release of the library, predicate instances can also accept function objects as parameters. For example, consider the following modified example.
#Example; boduch predicates
from boduch.predicate import Greater, Lesser

number1=0
number2=0

def op1():
    global number1
    return number1

def op2():
    global number2
    return number2

def results():
    global is_greater
    global is_lesser
    if is_greater:
        print "is_greater is true."
    else:
        print "is_greater is false."
    if is_lesser:
        print "is_lesser is true."
    else:
        print "is_lesser is false."

if __name__=="__main__":
    #Construct predicate instances using function objects as operands.
    is_greater=Greater(op1,op2)
    is_lesser=Lesser(op1,op2)
    #Change the value of the operands.
    number1=2
    number2=1
    #Print results.
    results()
    #Change the value of the operands.
    number1=1
    number2=2
    #Print results.
    results()
Here, we now have two variables, number1 and number2, that act as operands. Next, we have two functions that return these values, op1() and op2(). The results() function simply prints the result of evaluating the predicates. In the main program, we construct the two predicate instances, passing the op1() and op2() functions as operand parameters. We then initialize number1 and number2 and print the result of evaluating the predicates. Finally, we change the values of number1 and number2 and once more print the results. You'll notice that the results reflect the change in number1 and number2.
Wednesday, March 11, 2009
Interesting bug found in the boduch Python library
In the latest release of the boduch Python library, Set instances can now be iterated over. This is done by defining a custom iterator class, SetIterator, that is returned by the Set.__iter__() method. I thought I would further test this new functionality in the hope of discovering some new unit tests to include with the library. But before I could even get to the Set iteration testing, I discovered an entirely new bug in the Set class.
Firstly, here is the code I used to find the bug.
#Example; boduch Set bug.
from boduch.data import Set
from boduch.handle import Handle
from boduch.event import subscribe, threaded, EventSetPush

class MyHandle(Handle):
    def __init__(self, *args, **kw):
        Handle.__init__(self, *args, **kw)

    def run(self):
        pass

if __name__=="__main__":
    threaded(True)
    subscribe(EventSetPush, MyHandle)
    set_obj1=Set()
    set_obj2=Set()
    set_obj1.push("data1")
    set_obj2.push("data2")
    print "SET1",set_obj1.data
    print "SET2",set_obj2.data
Here, we define a custom event handle called MyHandle. The run() method doesn't actually do anything, because I discovered the bug before I wrote any handling code. In the main program, we set the event manager to threaded mode. Next, we subscribe our custom event handle to the EventSetPush event. This means that every time Set.push() is invoked, so is MyHandle.run() (in a new thread, since we are running in threaded mode here). We then create two set instances and push some data onto each. Finally, we print the underlying Python lists associated with each set instance.

Here is my initial output.
SET1 ['data1', 'data2']
SET2 ['data1', 'data2']
Slightly different from what was expected. Each set instance should have had one element; instead, the lists look identical. Naturally, I assumed that they were the same list. This led me to start examining the thread manager, thinking that since I was testing in threaded mode, there must be some sort of cross-thread data contamination. Thankfully, the problem got much simpler once I was able to eliminate this as a potential cause. Next in line: the event manager. I tried everything to prove that the Set instances were in fact the same instance. Not so; the instances had different memory addresses.

I then realized that Set inherits from Type, but the Type constructor was not invoked. Odd. I tried to think of a reason why I would want to inherit something with no static functionality and not initialize it. I think I may have done this because the underlying list instances of Set objects are stored in an attribute called data, and Type instances also define a data attribute. I must have thought, during the original implementation, that defining a data attribute for the Set class would have some adverse effect on the Type functionality. Not so. So now, the Type constructor is invoked, but with no parameters. This means that the initial value of the Set.data attribute is actually an empty dictionary, since this is what the Type constructor initializes it as. The Set constructor then initializes the data attribute to a list accordingly.
This, however, wasn't the problem either. I was still getting the strange output that pointed so convincingly at the Set.data attribute referencing the same list instance. So, I took a look at the way the data attribute is initialized for Set instances. The Set constructor accepts a data keyword parameter whose default value is an empty list; this parameter then becomes the Set.data attribute. Just for fun, I decided to take away this parameter and have the data attribute be initialized as an empty list inside the constructor.
Sure enough, that did it. I got the correct output for my two set instances. The data attribute must have been pointing to the same keyword parameter variable. I have a feeling that this may be caused somewhere in the event manager. Or maybe not; I haven't tested this scenario outside the library yet.
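In fact, this behaviour can be reproduced outside the library, and it has nothing to do with the event manager: it is Python's well-known mutable default argument pitfall. A default value is evaluated once, at function definition time, so every call that omits the argument shares one and the same list object. The Set class below is a minimal stand-in written for illustration, not boduch's actual code.

```python
# Minimal reproduction of the bug: the default list is created once,
# when __init__ is defined, and shared by every instance that omits it.
class Set:
    def __init__(self, data=[]):   # one list shared across instances!
        self.data = data

    def push(self, item):
        self.data.append(item)

set_obj1 = Set()
set_obj2 = Set()
set_obj1.push("data1")
set_obj2.push("data2")
print(set_obj1.data)                   # ['data1', 'data2']
print(set_obj1.data is set_obj2.data)  # True: the very same list
```

The conventional fix is a None default, with `self.data = [] if data is None else data` inside the constructor, which matches the workaround described above.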
I'm going to get this release out as soon as possible. The data keyword parameter for the Set constructor will be removed for now. As a side note, this will also affect the iteration functionality for Hash instances in the next release since the Hash.__iter__() method will return a SetIterator instance containing the hash keys. Each key will simply need to be pushed onto the set instead.
Wednesday, March 4, 2009
Using predicates with the boduch library
The boduch Python library has a new Predicate class that can be used to evaluate predicates. The Predicate class is meant to be an abstract super class. As of the 0.1.2 release of the library, there is only an Equal predicate. This predicate can be used to test the equality of two specified values. For example, consider the following.
#Example; Using the boduch.predicate.Equal predicate.
from boduch.predicate import Equal

if __name__=="__main__":
    val1="My Value"
    val2="My Value"
    while Equal(val1, val2):
        print "Changing val1 in order to exit our loop."
        val1="Exit"
    print val1
In this example, we define two string values, val1 and val2, and use the Equal predicate as our loop condition. The Equal predicate will evaluate to true as long as the two specified operands are equal; it uses the == Python operator to evaluate the result. So, why bother with the boduch predicates? One reason is consistency if you are using the library elsewhere in the application. Another reason may be readability.

The Equal predicate is actually a Python class. Every time we instantiate Equal(), we are actually evaluating against an Equal instance. The Equal predicate defines overloaded operators that handle the comparison when the instance is used in that context. There are also events and handles for predicates in the boduch library, and other handles may subscribe to these events if necessary. However, the events published by predicates in the boduch library are atomic, meaning no new threads will be started for these handles.
The next release of the library should have some more interesting predicates such as Greater and Less.
Thursday, February 12, 2009
boduch 0.1.1
The 0.1.1 version of the boduch Python library is now available. Some changes include:
- New Set and Hash functionality. Both object types now support the Python key/index notation.
- More unit tests.

As you can see in the example below, we can now treat Set and Hash instances as though they were Python list and dictionary instances respectively. The main difference is, of course, that all actions are carried out by event handlers. This enables threaded behaviour for some of the actions. For example, pushing new items and updating existing items will be handled in separate threads if threading is enabled.
#Example; boduch data types.
from boduch.data import Set, Hash
from boduch.event import threaded

def test_set_data():
    set_obj=Set()
    set_obj.push("test1")
    set_obj.push("test2")
    print "SET:",set_obj[0]
    print "SET:",set_obj[1]
    set_obj[0]="updated test1"
    set_obj[1]="updated test2"
    print "SET:",set_obj[0]
    print "SET:",set_obj[1]
    del set_obj[0]
    del set_obj[0]

def test_hash_data():
    hash_obj=Hash()
    hash_obj.push(("test1", "value1"))
    hash_obj.push(("test2", "value2"))
    print "HASH:",hash_obj["test1"]
    print "HASH:",hash_obj["test2"]
    hash_obj["test1"]="updated value1"
    hash_obj["test2"]="updated value2"
    print "HASH:",hash_obj["test1"]
    print "HASH:",hash_obj["test2"]
    del hash_obj["test1"]
    del hash_obj["test2"]

if __name__=="__main__":
    test_set_data()
    test_hash_data()
    threaded(True)
    test_set_data()
    test_hash_data()
Sunday, February 1, 2009
boduch 0.1.0
The 0.1.0 release of the boduch Python library is now available. Changes:
- Minor release.
- Refactored the interface package.
- More API documentation.
Monday, January 26, 2009
boduch 0.0.9
The 0.0.9 release of the boduch Python library is now available. Changes include:
- Completely replaced the LockManager class. The locking primitives for exchanging data between event threads are now handled by the Python Queue module.
- Added a new atomic parameter to the EventManager.publish() method. This allows handles to be executed by the same thread that published the event, even when the event manager is running in threaded mode.
- Added a new max_threads attribute to the ThreadManager class. This is the maximum number of threads allowed to execute.