Is there a fundamental limit to the size and complexity of Javascript applications? Or are these rich web applications just like any other program? If they work, they just work.
Javascript applications have an added problem in comparison to desktop applications: they aren't installed on a user's hard drive and need to be delivered through the web browser. The Javascript source files may be cached, but this only occasionally saves on page load time.
Javascript functionality seems to be growing ever more complex. So where does it end? Will users just need to wait for the modules to download once the source size gets to be huge? Or is there another solution in sight?
Thursday, July 9, 2009
Multiprocessing Firefox
Newer hardware systems are likely to contain either multiple physical processors or a single processor with multiple cores. When writing applications with these systems as the target platform, it is often a good idea to use more than one process within the application. Why do multiple processes make sense within a single application? One may be inclined to think of a process as being designated for a single running application, a one-to-one cardinality if you will. On top of moving away from the comfortable idea of one process per application that so many developers are used to, there is also the messy problem of inter-process communication. This particular problem isn't as bad as it is made out to be; we just need the appropriate abstractions on top of the inter-process communication functionality. Going back to the question of why this is a good idea in the first place, the chief benefit of using multiple processes within a single application is performance. While the same concurrency logic implemented with threads may offer better responsiveness, the multiprocessing approach offers an opportunity for true concurrency on systems with multiple processors.
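To make the idea of an abstraction over inter-process communication concrete, here is a minimal sketch of message passing between a parent process and a worker process. It assumes a modern Node.js environment (not what Firefox itself uses), and the file names, task name, and numbers are purely hypothetical.

// parent.js
var fork = require('child_process').fork;

// Spawn a separate worker process; the two processes share no memory and
// communicate only through the message channel set up by fork().
var worker = fork('./worker.js');

worker.on('message', function (result) {
  console.log('result from worker process: ' + result);
  worker.kill();
});

// The parent stays responsive; the actual work happens in the other process.
worker.send({ task: 'sum', numbers: [1, 2, 3, 4, 5] });

// worker.js
process.on('message', function (msg) {
  if (msg.task === 'sum') {
    var total = msg.numbers.reduce(function (a, b) { return a + b; }, 0);
    process.send(total);
  }
});

The point of the sketch is that the calling code deals with plain message objects rather than raw pipes or sockets; the messy parts of inter-process communication stay hidden behind the abstraction.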
The Firefox team is currently implementing a version of the browser that incorporates multiple processes rather than having the entire application run in a single process. As discussed above, this entry provides some of the rationale behind the development team's decision to implement this functionality. Aside from the inherent performance gains offered by systems with multiple processors, there is increased stability. Stability is increased because independent processes decouple the browser architecture, providing a degree of isolation not available in a single process. The stability gains are especially apparent when considering the multiple tabs used to view disparate web pages. Security could potentially be improved as a side effect of the isolation provided by processes. Finally, not mentioned in the entry but equally relevant in design terms, the distribution of responsibilities within the web browser system becomes very clear. It sometimes takes a drastic move, such as moving entire pieces of code to a separate process, to improve this design principle.
In this entry, a demonstration of the browser running a separate process for the tab content shows a clear stability improvement. Even after killing the content process, the user interface process remains intact.
Thursday, April 30, 2009
Producing Stable Javascript with jQuery
If one were to view the page source of ten random web pages, a large percentage, if not all, of those pages would contain some form of javascript. Applications that use the web browser as the delivery medium are using javascript more and more. Of course, the developers creating these pages aren't using javascript just for the sake of it. Javascript is often introduced to a web page that is lacking a rich, interactive user experience. In order for any javascript code to effectively change the way the user experiences the web page, the DOM elements that make up the page need to be manipulated. This means that the javascript needs to be able to locate a specific DOM element or search for a set of DOM elements that meet some specified criteria. Once we have obtained a DOM element, or a set of DOM elements, in our code, we need to perform actions on these elements in order to manipulate them. In the case of a set of DOM elements, the set needs to be iterated while invoking behavior on each element. Using traditional javascript DOM manipulation can lead to nothing short of a nightmare. On top of the messy code, developers have the added burden of making it work on all browsers. Thankfully, the jQuery toolkit helps alleviate some of this madness by providing some consistency.
At the core of jQuery is the jQuery() query function. This function allows developers to execute DOM element queries at any level of complexity. The function can be used to fetch a single DOM element by id, or to retrieve a set of DOM elements using other query constraints such as class or attribute. The result set returned by the jQuery() function is actually another javascript object, and methods may be invoked on this object. The interesting aspect of this is that the invoked behavior is called for each element in the result set, not just on the result set object. If the result set contains a single DOM element, the invoked behavior is called for that single element. If the result set contains several DOM elements, the invoked behavior is applied to every element. If the result set contains no elements, the behavior isn't invoked at all. This is extremely useful because behavior will never be invoked on a non-existent DOM element, so no error handling or looping constructs need to be implemented by the developer. jQuery also provides a shorthand $() function for the main jQuery() function. This, however, can be hard on the eyes after a while.
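Here is a minimal sketch of these queries and the result set behavior; the element ids, class names, and attribute selectors below are hypothetical.

// Fetch a single DOM element by id and change its text.
jQuery('#status').text('Ready');

// Fetch a set of elements by class; hide() is applied to every element
// in the result set, or to none at all if the set is empty.
jQuery('.warning').hide();

// Fetch elements by attribute and chain several manipulations together.
jQuery('input[type="text"]').val('').addClass('cleared');

// The $() shorthand is equivalent to jQuery().
$('#status').text('Ready');

No looping or existence checks are needed; the same call works whether the query matches one element, many, or none.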
If your javascript employs logic other than manipulating the DOM, jQuery is powerful in this area as well. The jQuery() function also accepts javascript arrays, and developers can then invoke the each() function on the result set. This provides a consistent way to iterate through javascript arrays, which matters because getting the same array iteration code to work on multiple browsers isn't easy. Why write the cross-browser code when it is already done?
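A minimal sketch of array iteration through jQuery; the array here is hypothetical.

var scores = [72, 88, 95];

// Wrapping a plain array in jQuery() gives a result set whose each()
// method works the same way it does for DOM elements.
jQuery(scores).each(function (index, value) {
  console.log(index + ': ' + value);
});

// The static jQuery.each() utility iterates without wrapping the array.
jQuery.each(scores, function (index, value) {
  console.log(index + ': ' + value);
});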
Labels: dom, javascript, jquery, stability, webbrowser