There are a variety of open source UML modeling tools available today, some more popular than others. One reason for the variation in popularity is how well a tool supports the UML specification; another is the overall user experience while building models. Proprietary UML tools are more advanced than their open source counterparts in several dimensions, which is why most developers who use the language prefer them: they are usually the right tool for the job. If an open source tool fails to implement a part of the UML specification that matters to a developer, that is a show-stopper, since anything that involves less modification and less duplication is always a bonus. And yet open source UML modeling tools exist and are in wide use today, so there must be a reason. Choosing one comes with advantages and disadvantages, and two questions are worth asking before deciding. If a tool doesn't support some part of the UML specification, how will that affect me? And how many non-modeling features does the tool have that I will probably never use? I'd now like to explore these questions a little further.
One question that should probably precede the two above is: what am I using the UML for? The answer will obviously influence your choice of modeling tool because different features serve different needs. I'm not going to dig too deeply into the various uses of the UML and why they matter. Instead, I'd like to consider just two broad categories: UML as a sketch and UML as a specification. The former is the more widely used of the two because it requires less of an investment. The latter has much more of an impact on the success of the project because the model has to be formally correct; otherwise, cascading modeling errors become a major disruption. The specific application you put the UML to matters less here than the style of modeling you want to use: sketch or specification.
Let's take a look at the UML specification itself. Some parts are essential for tool vendors to implement, such as classes and relationships. Others, such as profiles and timing diagrams, aren't as important. This level of importance is relative to the project domain. For instance, if we were modeling the specification of a real-time system, those sections of the UML specification would suddenly be much more important than if we were simply sketching ideas using UML notation.
Proprietary modeling tools have better support for the UML specification itself. In the rare case that you require a modeling tool to support the full UML language specification, the choice is easy: invest in a proprietary tool. But that is the exception rather than the rule; the majority of UML users do not need support for the full specification. Open source UML tools have good support for essential modeling elements such as classes, packages, relationships, use cases, interactions, and state machines. Some are better than others at modeling certain elements, and they all differ from one another in terms of user experience.
What if you want to build UML models as a software system specification? Can you still do this with a tool that implements only a subset of the UML specification? Indeed, you can. If you like a given tool, whatever the reason, don't select it hoping it will someday support the full specification, because that may never happen. Open source modeling tools are perfectly acceptable for using UML as a specification. Let's do a quick run-through of which UML elements can be used as a specification with open source modeling tools. Classes, packages, and relationships? Check. These rank the highest because it would be next to impossible to model an object-oriented system without them. Actions, activities, control flow, and data flow? Check. We need these elements when it comes time to build the smaller, atomic computations of the system. Use cases? Check. These are a must for visualizing simple requirements. This only touches on the most fundamental elements of the language; support for the UML specification in open source software extends well beyond our purposes here.
Open source tools have these areas pretty well covered without much deviation from the UML specification itself. Support in other areas of the specification, like interactions and state machines, is still lacking or inconsistent. Again, how important are these areas of the language to you? Even with incomplete or missing implementations, using UML as a sketch is possible with open source UML modeling tools. As an example, consider nesting modeling elements. With some open source tools, dragging one element into another to show a parent-child relationship has no effect on the semantic model; internally, from the tool's perspective, the two elements remain at the same level. From the end user's perspective, however, the elements look nested, so the sketch still serves its purpose.
Up to this point, we've only touched on modeling elements that exist within the UML specification and the tools that implement them. Implementing the specification is, after all, the primary goal of a UML modeling tool. There are, however, some features that fall outside the scope of the specification itself. The value of these additional features is something to consider when choosing a modeling tool; many of them are probably not needed.
Code generation is considered a must-have for any enterprise-grade UML modeling tool. But is it actually a must-have, or a feature for the sake of having a feature? Generating code is also supported by open source UML tools. The claimed benefit of generating code from a model is that the skeletons of the classes and their relationships can be built automatically. This saves some tedious typing, but it also introduces a level of coupling between the model and the code that may not be desired, because at the code level there are going to be small hand-made changes that aren't reflected in the model. If you are building models as sketches, this is definitely not a feature you'd be willing to pay for.
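To make the coupling concern concrete, here is a minimal sketch of the kind of skeleton a generator typically emits for a simple class diagram. The Customer and Order classes and their one-to-many association are hypothetical examples invented for illustration, not the output of any particular tool.

```python
# Hypothetical skeletons a UML code generator might emit for a class
# diagram with a one-to-many Customer -> Order association. Names and
# structure are illustrative only, not any specific tool's output.

class Order:
    """Generated from UML class 'Order'."""

    def __init__(self, order_id: int) -> None:
        self.order_id = order_id

    def total(self) -> float:
        # Generated stub: the body must be filled in by hand, which is
        # exactly the kind of edit the model never sees.
        raise NotImplementedError


class Customer:
    """Generated from UML class 'Customer'."""

    def __init__(self, name: str) -> None:
        self.name = name
        self.orders: list[Order] = []  # association end, multiplicity 0..*

    def add_order(self, order: Order) -> None:
        self.orders.append(order)
```

The moment you add a discount rule or a caching field by hand inside total(), the diagram and the code begin to drift apart; that drift is the coupling cost described above.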
XMI support enables the exchange of modeling metadata. In theory this is a must-have feature for any modeling tool, open source or proprietary, because XMI is the standard format used to store and transfer semantic model data. If you are sketching UML diagrams, the underlying semantic model isn't as valuable to you, so the need for XMI importing and exporting isn't that great. And since organizations tend to standardize on a modeling tool once one has been chosen, even if you are modeling rigid software system specifications, your need for XMI support may not be that great either. Across the spectrum of open source UML tools, support for XMI isn't quite there; different tools support the interchange standard at different levels. It is comparable to the support for web standards among browser vendors: the little differences create more problems than the standard solves.
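For a rough idea of what working with the interchange format looks like, here is a minimal sketch that pulls class names out of an XMI file using only Python's standard library. It assumes a UML 2.x style export in which classes appear as packagedElement nodes with xmi:type="uml:Class"; given the inconsistencies described above, another tool's export may well use different element names or namespace URIs, which is precisely the interoperability problem.

```python
# A minimal sketch of reading class names from a UML 2.x XMI export.
# Assumes classes are serialized as <packagedElement xmi:type="uml:Class">;
# other tools may serialize differently, illustrating the inconsistency
# described above. The file name is a hypothetical example.
import xml.etree.ElementTree as ET

XMI_NS = "http://schema.omg.org/spec/XMI/2.1"  # namespace varies by tool/version

def class_names(path: str) -> list[str]:
    tree = ET.parse(path)
    names = []
    for elem in tree.iter():
        # ElementTree exposes namespaced attributes as "{uri}local".
        if elem.tag.endswith("packagedElement") and \
           elem.get(f"{{{XMI_NS}}}type") == "uml:Class":
            names.append(elem.get("name", "<unnamed>"))
    return names

if __name__ == "__main__":
    print(class_names("model.xmi"))  # hypothetical export file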
The overall user experience is a criterion that is sometimes overlooked in regard to UML software. Aside from which features are supported, which are needed, and which we can do without, usability has an impact on the quality of the models we produce. Usability isn't necessarily isolated from the feature set of the software: if a UML modeling tool has too many features edging away from the UML specification, we're distracted. I would go so far as to say this makes the software intimidating when it doesn't have to be. The number of steps required to construct a basic diagram should be small; if that number is too high, think about using something simpler. Building models is no different from writing code in that it is never going to be right the first time around. Modeling is an inherently iterative process. If your chosen modeling software leads to building models in a more timely manner thanks to a clean, simple user interface, give that software high marks.
As you can see, there are more attributes of a good UML modeling tool to consider than the number of features it has. Deciding what you are using the UML for, sketching or specifying, changes the evaluation criteria when choosing software. Open source UML modeling tools are a good alternative to purchasing a proprietary tool in some cases, even though they haven't reached full maturity yet. And when open source UML software doesn't fit your needs, consider how many unnecessary features you will be paying for in the tool you choose.
Wednesday, June 9, 2010
Monday, March 22, 2010
Open Source Democracy
This post discusses the fact that open source software projects are not a democracy, using an Ubuntu development disagreement as an example.
I always assumed open source meant you have almost any freedom you can imagine with regard to the code itself, not the community.
Monday, March 15, 2010
ArgoUML 0.30
It looks like ArgoUML 0.30 is now available for download. I look forward to testing it out. This is another step toward UML 2 compliance with open source software modeling tools.
Labels: argouml, milestone, model, opensource, uml
Tuesday, January 12, 2010
Proprietary Threats
In a recent entry about the state of the Postgres project, the author explains that the project is still in good health. The explanation was prompted by the recent idea that Oracle could simply buy the project out by head-hunting its core developers. That is a lot more difficult than it sounds, especially for open source projects such as Postgres.
The fear seems to be that Postgres will go the way MySQL did. But, as the entry points out, the two are entirely different in terms of development culture: Postgres doesn't have a single authority making decisions.
Proprietary vendors can threaten open source projects in more ways than one. They can overwhelm open source projects by nothing more than intimidation: they have a lot of resources, including buying power, and large corporations can typically purchase their way out of potential competition.
Open source developers really shouldn't concern themselves with this threat, for two reasons. First, open source projects are largely distributed, both geographically and in interests, and new people join open source projects every day. As the entry suggests, there are more than enough developers willing to pick up the slack of those who leave any given project.
The second reason the open source community shouldn't concern itself with proprietary vendor threats is that competition can also be a good thing, for both parties.
In a given domain, let's say databases, users have a choice between proprietary products and open source projects. Both sides benefit from one another because each provides a reference point for comparison. The open source project can say "you can pay for feature X or you can get feature YZ for free," while the proprietary vendor can say "sure, feature YZ is stable, but you're paying for stability in feature X."
The list goes on. There is never going to be an entirely open source or an entirely proprietary software world; threats will be posed from both directions. If you are a developer, your best bet is to use what is made available to you to make the best software possible. Better than the last version, anyway.
Labels: mysql, opensource, oracle, postgres, proprietary
Tuesday, January 5, 2010
Free MySQL Free Internet
Monty, the creator of the MySQL database, says that Oracle acquiring Sun could be the end of MySQL. MySQL is a major competitor to Oracle's proprietary offerings.
LAMP stacks, which make up a huge part of the Internet, depend heavily on MySQL and would be affected in a big way by this acquisition.
But would it affect the Internet as a whole? Sure, there would be a sizable impact, but it would by no means shut things down for good. There are other open source databases out there. MySQL happens to be a great one, and it would be a shame to see it go.
Tuesday, October 6, 2009
Palm And Open Source
An interesting read over at IT Wire describes how Palm is apparently rejecting applications built for its platform from appearing in its application catalog. There is nothing inherently wrong with this, as Palm has the right to do so. The strange thing is that the rejected applications carried an open source license.
If nothing else, this shows there is still a bias against the fact that open source applications are not tied to specific vendors. That is, users can go out and get these applications, even modify and rebuild them, all on their own. Why this scares companies so much baffles me to no end, especially given the huge momentum the open source movement has gained over the past few years.
Labels: application, catalog, opensource, palm, software
Wednesday, September 30, 2009
Open Source Quality
In an interesting entry over at PC World, they mention a study showing an overall decrease in defects in open source software over the past three years. This is great news, even if the numbers aren't entirely accurate; I doubt the study is so flawed that defects are actually increasing. Every day, new open source projects poof into existence, so how can all the complexities of the open source ecosystem be reliably measured? The truth is that they cannot. But some larger open source projects are much more popular than others and have been deployed in production environments. These are the interesting projects to monitor for defects, because chances are that when a defect is fixed in one large project, the same defect gets fixed in countless others that depend on it.
What I find interesting is the willingness to study and publish the results of code quality. To most users, and even some developers, code quality isn't high on the list of requirements for using the software: they don't see the code. Even most developers see only the API, and, arguably, that is all they should have to see. Still, code quality affects more than just maintainability.
This brings up another question. Does improved code quality improve the perceived user experience? Not likely in most cases, but in some, yes, even if it isn't obvious to the developers fixing a bug that it would have any apparent effect on usability. Looking at these subtle usability enhancements in the aggregate might be interesting.
Labels: api, design, opensource, qa, quality, userinterface
Saturday, September 19, 2009
Open Source Accounting
An entry over at Linux Magazine discusses the financial side of the Gnome and KDE open source projects and how both organizations might be spending their money. I find it interesting to think about, because the financial side of big open source projects isn't considered very often. I say big open source projects because the smaller ones generally don't have a financial side to them. In fact, many large open source projects don't have financial backing either. Does this mean they are of lower quality than the projects that put money where their mouth is? I think not; they are just different. Besides, for most end users of most open source projects, the financial situation of the project is of little to no importance. The software is free.
Another notable aspect of Gnome and KDE disclosing where their money comes from is that most people don't care who the big corporate sponsors of a project are. And even for the ones who do care, knowing is unlikely to influence them positively toward the project. Big open source projects should advertise the fact that the smaller contributions of users are what matter; they are more influential in prompting other users to make a similar small donation. I think both Gnome and KDE do a good job of this.
Labels: accounting, financial, gnome, kde, opensource
Tuesday, September 15, 2009
Moving Away From Open Source
This entry over at IT World discusses some of the forces that push users who fully intend to use an open source application to jump over to the proprietary software world instead. It seems counter-intuitive that any user would give up something free just to pay for it. But as the entry states, there are many reasons users do this: open source applications tend to lack support, features, and documentation. Oddly enough, some open source applications share the same qualities as their proprietary counterparts: a great feature set, documentation, and so on. But like all software, the subtle differences can simply make one application better than the other.
The biggest stumbling block, as far as I'm concerned, is installation and initial configuration of open source applications. Proprietary installation and configuration procedures are generally more enjoyable. This is the first step to using any application, and it is important to get right because it gives the user an impression of what the rest of the application experience is going to be like once it actually gets installed, if it ever is.
Friday, September 11, 2009
Open Source Economy
An entry over at Linux Insider talks about the current boom the open source market is experiencing despite the global recession. Even in difficult financial times, the open source market continues to post positive numbers. Is open source absolutely bullet-proof no matter what the global economic state is? Absolutely not. Large companies are not going to use a product or service simply because it is free; it needs to solve a real-world problem, and it needs to do it well.
As stated in the entry, the fact that open source software is free isn't the only force driving the open source market. Could this recession act as a gateway for those who are still timid in the face of open source? Absolutely. And the effect will only spread further as more and more companies become aware of others thriving on open source.
One question that is difficult to answer is: will it last? One would like to hope so, although such a question is nearly impossible to answer at this point. If one thing is clear, it is that this recession could mark an important point in the open source movement's history.
Labels: economy, footinthedoor, linux, opensource, recession
Friday, August 7, 2009
Suing Open Source
In an interesting entry, Bill Snyder talks about a new set of guidelines written by the American Law Institute. These guidelines state that the developers of software should be held responsible for "knowingly" shipping buggy software. Needless to say, something like this is bound to raise controversy in both the proprietary and the open source software worlds. As the article states, there is some ambiguity in the chosen language of these guidelines. For instance, what constitutes "knowingly"? Additionally, what constitutes "buggy"? When it comes to software, both terms are open for interpretation. If there were a standard set of criteria by which to evaluate software in order to determine its "buggyness," this might be feasible. However, the end user is responsible for reporting bugs; no matter how much quality assurance a given software package goes through, there is bound to be something "wrong" with it. This comes back to the notion of what constitutes "buggy" or "wrong" or "incorrect." If there were a contract in place with the requirements written in stone, there might be something substantial here. But such upfront requirements are unrealistic. There are going to be changes needed, and it is really up to the developers to decide whether they fall into the bug category.
In the open source world, virtually every end-user agreement states that the software is to be used at your own risk. If you are an open source end user, file bug reports and be helpful about it; you will be happier in the long run than if you take the complaining road.
Wednesday, July 22, 2009
Open Source For America
In an interesting entry about open source software adoption, the announcement of the OSFA (Open Source For America) is discussed. So what exactly is the OSFA? The OSFA is a group of organizations that have invested heavily in open source technologies. The group formed to promote the use of open source technology in the US federal government. This is simply a fantastic idea and a landmark in open source history. One may ask why a group of organizations is necessary to help a government adopt open source technology. The answer is that open source is relatively new, and there aren't many open source professionals employed by the US government. The organizations that form the OSFA, however, have plenty of expertise in regard to open source. The US government is concerned with running a country, not with choosing which software is best suited for the job.
The benefits of using open source technology apply to the government just as they would to any other organization or individual. Any government that chooses open source technology over the proprietary alternative gets treated just like a regular user; there isn't any real discrimination in the open source world over who uses what.
The primary motivation for the US government is to cut costs. The savings come most obviously from the fact that there aren't any license fees to pay with open source. However, the government also gains everything else that comes with open source, including generally better software. Additionally, there are hidden cost savings that generally accrue on a per-deployment basis.
The Canadian government seems to be falling behind in the information technology department. It certainly needs to show more interest in advancing its infrastructure, which can be achieved through open source. The OSFA serves as a good example of the next step that should be taken.
Friday, July 10, 2009
How To Volunteer Code
Have you ever had someone ask you what open source is all about? Once you tell them, do they in turn ask why people would volunteer their time for free? The latter is a much more difficult question to answer because, more often than not, you have to be part of the open source community to "get it." Some people might answer along the lines of "it's a cause we are fighting for" or "I simply don't like Microsoft." The good news is that you don't really need a justifiable reason for becoming part of the open source community. One answer you aren't likely to hear very often is that the community is a world-leading learning environment for technology. Open source is, well, open. By that virtue alone, anyone can take the necessary steps to learn. If you have a question about some aspect of some project, it is there for you to figure out on your own if need be.
Diving right into a project's source code may not be the best approach for newcomers to the open source community. If not, then how do people get started? The thing is, the way to contribute something back to the community varies from project to project. This can be both good and bad. It is good because there are no restrictions on the development methodology used, nor the other annoying restrictions found in proprietary environments. It is bad because some projects do a great job of letting newcomers know how they can contribute, and other projects not so much. For the projects that don't make clear how additional help could be applied, this is unfortunate, since there are many very talented developers out there who are just getting started in their careers. If they would like to put their skills to use in the open source community but don't have a good starting point, those skills are wasted.
As discussed here, there is also the prospect of starting a new open source project. This is another challenging problem, since identifying valuable problems to solve in the open source world isn't easy. Another difficulty is that a new project offers no learning resources from other developers. The lack of available mentors isn't as big a problem for more experienced developers who start their own open source projects. However, younger, inexperienced developers might have a rough go of it on their own.
The best way to join an existing project of interest is simply to ask, while also giving those concerned an idea of what you are capable of. This will also give you an indication of what working with that particular community would be like.
Lastly, people can also volunteer data, as described here. This is geared toward non-developers who have an interest in contributing to the open source community.
Labels: code, community, contribute, opensource, volunteer
Tuesday, June 30, 2009
Release Early, Release Often, Break The Interface
Release early, release often: the credo of open source software development in most cases. This didn't become the norm because open source developers enjoy the task of doing project releases; in fact, the task is often burdensome, especially for projects with many components. The philosophy became so common because it works. The idea behind releasing early and often is to generate early feedback from the user community. This feedback is invaluable to developers and user interface designers alike. But why does it matter that the feedback is early? Does feedback lose something if it comes long after the software has been released? I think it certainly does. Although feedback on any open source project is valuable at any time during the development life cycle, the earlier it is received, the less likely the concerns are to be pushed back. That is, if a given open source project is released one week and users express their concerns the following week, those concerns will likely be addressed in the next release.
This entry raises an interesting usability problem with open source software projects: users of sloppy or disorganized user interfaces get used to them. They simply accept that this is the way the user interface is and that it is unlikely to change. The entry is slightly dated, although, unfortunately, the problem persists in many modern open source software projects. So the glaringly obvious question is, how is something like this fixed? Aren't there countless usability experts out there who can design beautiful user interfaces? Surely some of them are willing to work on the open source projects that lack in the usability arena? The answer is yes, extraordinary usability talent exists in the open source community, but as the entry suggests, open source projects face different pressures than their proprietary counterparts.
So what do these projects do with user interfaces that lack usability? They leave it up to the end user to figure out how the interface should look by adding an endless stream of user preferences. These preferences allow users to turn off undesired behavior. I say endless stream because once one user interface preference is added, it is hard not to add preferences for other user interface components. Additionally, this leads to a recursive user interface design problem, because the preferences themselves need to be maintained. Again, as the entry suggests, this can become quite a mess to maintain, because users of the software quite easily grow accustomed to these "patch"-type settings.
So what is the solution? Unfortunately, I don't have a solid answer, as I'm no usability expert. However, reducing the number of moving parts on any user interface is always a good idea; I find the "less is more" principle invaluable in user interface design. User interface preferences put in place so the end user can customize the "look and feel" are most likely to be used to hide components that clutter the interface. Developers shouldn't take the decision to add new preference or configuration values lightly; once added, configuration options should be there to stay. When adding a preference that allows end users to hide a given user interface component, think long and hard about the necessity of the component in question.
Monday, June 22, 2009
Turning Off Desktop Innovation
An interesting entry brings up the always controversial discussion of innovation in the open source desktop domain. I'm not entirely convinced this topic should be nearly as controversial as it seems to be. And who knows, maybe it isn't. Putting the desktop operating system environment aside for a moment, innovation in software as a whole is hard. It is also a requirement of doing software development: do nothing, and nothing will happen. If there were no innovation in desktop computing environments, and in open source Linux distributions specifically, end users would be stuck in the same situation. However, as the entry asks, perhaps users are stuck where they are for a reason. Maybe they have zero need for innovation that would serve their particular purpose; they use what they use because it helps them reach their ultimate goal. Sometimes innovative software presents users with features they didn't know they needed until they became available, which often enough translates to not really needing them at all. However, in the majority of cases users aren't going to be able to use the same piece of software indefinitely. So the logical thing to strive for is a balance between stability and new features (innovation).
When attempting to strike that balance, developers face an additional challenge. Toward the tail end of the entry, the option of turning these new innovative features off entirely is mentioned, and I think this is an important characteristic to consider for any new feature. Think about it: you ship your existing stable features along with the brand new innovative stuff, and if something blows up in the new feature set, the user simply turns it off. "Simply," of course, isn't quite accurate; the ability to turn features on and off is no easy feat. Consider the notion of extension modules. The whole idea behind them is that they extend some piece of core functionality, and they can also be turned off. However, this is generally done with configuration files that a typical desktop end user should never be expected to touch. So there is a real technical aspect to the modularity of features.
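As a minimal sketch of what the configuration side of this might look like, consider a feature-toggle mapping that gates new functionality. The feature names and the menu example are invented for illustration; real desktop platforms use far more involved extension mechanisms.

```python
# A minimal sketch of feature toggles backed by a plain config mapping.
# Feature names and the menu example are invented for illustration;
# real desktop environments use their own extension/module systems.

FEATURES = {
    "classic_menu": True,      # stable feature, ships enabled
    "semantic_search": False,  # new, innovative feature, ships disabled
}

def feature_enabled(name: str) -> bool:
    """Unknown features default to off, so half-shipped code stays dark."""
    return FEATURES.get(name, False)

def build_menu() -> list[str]:
    items = ["Open", "Save"]
    if feature_enabled("semantic_search"):
        items.append("Semantic Search")  # easily switched off if it blows up
    return items

print(build_menu())  # ['Open', 'Save'] until the new feature is turned on
```

Even in a toy like this, the maintenance question shows up immediately: every toggle added is a code path that must be tested in both states.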
Assuming there were a robust, modular desktop architecture that allowed developers to turn features on and off, how would the desktop compel the user to use the new "better" features? Do the new features default to "on"? There is a whole usability question here in addition to a very challenging technical problem.
Labels: desktop, idea, innovation, linux, opensource, problem
Monday, June 1, 2009
Open Source Freeloaders
In an interesting entry about leeches in open source software, the question of big corporations freeloading on open source software is raised. Does such a thing as freeloading on open source software exist? Well, according to the entry, some members of certain open source communities believe that using a project in a corporate environment without ever contributing back would be considered freeloading. However, the open source licenses used by many popular projects do not require any contribution back to the community. Is this an ethical concern, then? Do corporations feel bad for not contributing back to software they are allowed to use for free? No. Individuals, maybe.
When you have put a significant investment of time and effort into anything, you generally want it appreciated. It is easy to see how the core developers of a successful project can lose enthusiasm for it. The willingness of someone to contribute back any kind of artifact boosts overall project motivation; the team no longer feels it is working toward something that has already become a lost cause. However, there are also implicit contributions made to open source projects.
The mere public knowledge that a large corporation is using a given open source project is probably worth more to the project than anything tangible the corporation would be willing to contribute. People within large corporations didn't decide to use a particular open source solution for the good of their health; they use it because it does what it is supposed to do. This should be very motivating. I'm always impressed by the fact that I use a programming language NASA considers useful.
What about when large corporations complain loudly and thoroughly about an open source project? Well, this does two things for the project. First, it demonstrates that the corporation is using the software; otherwise they would never take the time to complain about it. Second, it sets the stage for the project: the corporation does all the legwork by pointing public attention toward the flaws in the software. Now all eyes are on the project, and all that's left to do is fix it and deliver quickly in front of everyone. It seems there isn't too much damage that freeloading can do to the open source software industry.
Tuesday, April 21, 2009
Software, Patents, and Innovation
An interesting entry over at the Open Source Initiative asks whether patents hinder or encourage innovation. This is an interesting question regardless of the field. The entry talks about the centuries-old case of the steam engine. Once a patent for an idea has been established, anyone wishing to employ the idea is in debt to the patent holder. How likely is anyone wishing to use or further develop the idea to get involved? And how does the patent holder benefit from this situation? They don't. In fact, the encouragement often works in the opposite direction. In many cases, there is no alternative but to use an idea that has been patented. This restriction then grows into resentment toward the patent holder, and that is not something any designer, especially in the software industry, wants. The ideals of openness and community collaboration notwithstanding, software suffers the same patent-type problems.
Early in the development of the steam engine, the core idea was patented, which brought about an era with no spectacular innovation. Years later, the patent was lifted, and sure enough, this brought about an era of design innovation; so much so that the innovation rate of the steam engine doubled compared with the patented era. Designers and engineers flourish when they are not turned away by the thought of patent infringement.
It seems that patents in general benefit no one. The main motivation a patented idea offers is the creation of the patent in the first place, and that is an extremely flawed approach to building anything well. The concept of "do something you love" is tossed out the window; who loves patenting things? Why not do something well and let the rest follow? In the end, it all comes down to the designer's attitude. In the case of the steam engine, once the patent was lifted, attitudes changed, and that, in turn, changed everything.
In software, patents are hard to define; there is simply no way around it, especially when trying to patent an algorithm of some sort. If I were to take an existing patented algorithm, completely refactor it, and end up with the same result, would that be considered patent infringement? If so, what is really being patented is a specific output paired with a specific input. That would be very unjust.
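As a toy illustration of the refactoring point, here are two structurally different implementations of the same algorithm that produce identical output for identical input; if only the input/output pairing were protected, both would "infringe" equally.

```python
# Toy illustration: two structurally different implementations that
# compute the same result. If input/output behavior alone were
# patentable, refactoring would change nothing legally.

def gcd_recursive(a: int, b: int) -> int:
    # Euclid's algorithm, expressed recursively.
    return a if b == 0 else gcd_recursive(b, a % b)

def gcd_iterative(a: int, b: int) -> int:
    # The same algorithm, completely restructured as a loop.
    while b:
        a, b = b, a % b
    return a

assert gcd_recursive(48, 18) == gcd_iterative(48, 18) == 6
```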
Thursday, March 12, 2009
More open source migration
According to this entry, the French police force is in the middle of migrating its entire desktop infrastructure from Windows to Ubuntu, and the move has already saved it millions of euros. A fascinating conclusion they came to after adopting the migration strategy is that it would have cost more to upgrade the existing Windows infrastructure! I would say that is a good indication that the Windows operating system needs some serious work to stay afloat in the coming decade.
Another interesting aspect of this migration strategy is how it all started: by replacing Microsoft Office with Open Office. Open Office runs on Windows, like many other open source software solutions, and in doing so gives users an opportunity to get familiar with the interface. Not only is the general user eased into the migration away from proprietary software, but the concept of open source alternatives is introduced. Some users may already be familiar with how open source software works, but many wouldn't be.
Open Office is a good example of open source software that is closely designed after its proprietary equivalent; there is almost zero learning curve involved. What I think would be really cool to see is the reaction of users not familiar with Ubuntu after the switch is made. After switching to Open Office, a user may think, "wow, this is really cool software, I can't believe it's free." After switching to Ubuntu, they might think, "what were we paying for earlier?"
I think this story serves as an excellent example for companies currently engulfed in proprietary software to move to open source.
Labels: microsoft, migration, office, opensource, ubuntu
Tuesday, March 10, 2009
More Linux help for Windows users
Just further proof that if you are a Windows user, there is light at the end of the tunnel toward using an entirely open source computing environment.
A good entry, targeted toward Windows users, explains how software is installed on Linux distributions. A great introduction to package managers.
Labels: install, linux, opensource, packagemanager, windows
Monday, March 9, 2009
Presto
The Presto operating system looks to be a good alternative for Windows users. It is basically a slimmed-down Linux distribution that comes with commonly used applications Windows users would be familiar with. By commonly used applications, I mean what the good majority of office employees use on a day-to-day basis.
The Presto operating system looks like it would be quite painless to install and is very reasonably priced.
The main reason I'm interested in Presto is not to use the software myself, but the prospect of more open source users. I'm already using 100% open source software every day, but I only reached this point after years of lost data and experimentation to see how open source components fit together. I think this could be one of those jumping-off points for many folks who hear a lot about open source and would like to try it out.
Something else I find interesting is in the Slashdot entry about the operating system: Presto doesn't mention the fact that it is based on a Linux distribution. Is this a clever marketing ploy? Marketing ploy? Yes. Clever? Absolutely.
I think that hiding the intimidation of "Linux" may be a good thing for their target audience.
Labels: distribution, linux, opensource, operatingsystem, presto, windows