During the development life cycle of a software product, how does your team avoid veering off into unknown territory? Scope creep — as we're taught early on as software developers — is one obstacle plaguing software projects. This is where even small, minute changes, if allowed to accumulate, yield something misaligned with the original goal.
On the other hand, how does your product evolve if you're not allowed to innovate? Because, often enough, late-breaking ideas coalesce during development — not after it's finished, not during requirements gathering — but while you're coding. Taking these great ideas that pop into mind and pushing them out until a later date — just so you can stick with your plan — kills their momentum.
So it turns out that staying the course is a difficult thing to do. You need a concise goal and the commitment to fulfill it — by keeping out unnecessary features and making sure the thing ships on time. Is it possible to sneak stuff in while still honoring your commitments to the project?
No time wasted
Time is of the essence for software development — do more with less. Otherwise, the bigger guys, your competition, will hit the mark first. Freedom to muck around with experimental concepts isn't exactly plentiful come crunch time — code needs to be rock solid, ready to pass any QA tests and make it into a production environment.
The question is — how do you make resilient code while under time pressure? That is a challenge and why writing code professionally is difficult — you have no such freedoms. You can't devise a selection of alternative solutions and evaluate the best one. There is no time. You have to go with what works initially and, if luck finds you, you'll get a chance to refactor and improve your code later on. That is a big if, of course. How often do you put TODOs in your comments that never go away?
So if you can understand why time is so important during the fragile embryonic development stage of a project, perhaps you can formulate better ways to do more with it. Better ways to improve your coding standards.
One way to look at it is this — it isn't so much a problem with time as much as it is with functionality — the behavior of your software. Because what your software does dictates how much time you'll need to implement it. Part of being agile is doing small development-release iterations — short time frames. So this means that the scheduled features need to take into account the timeline, and not the other way around. I think this is a major breakthrough in how we do software — understanding that yes, people do expect code on a regular basis and yes, there is a fixed number of accomplishments teams can make in that interval.
Time is of the essence, not the software. Considering the amount of time you have to do something — before it's considered production-ready and hits the shelves — is perhaps the first and foremost determinant of any project's success. If the interval means only small, minute changes can be done, then the reality is, the schedule needs to change, not the way we write code.
What does your software do?
You'd be surprised — I find I do this myself — how often I hear developers talk about what their software will do when asked what it does. I find it interesting that we're so fixated on the future — how do we make sure our software is future-proof? The problem with that sentiment is that it doesn't support the notion of software development as problem solving.
We all know that mission statements can be loathsome at times — but they can be a powerful tool for staying focused on solving the problem at hand. All software solves a particular problem — if the mission of the software is to solve that problem, or at least alleviate it to some degree, then you should be able to state how it'll do that. What you need is a consolidated, fundamental kernel that your software derives from — the problem and what your software does about it.
This makes describing what your software does much easier. With a declarative foundation of what your software does, you can reference it throughout the entire lifespan of the project. Having such an immutable artifact means you can put it to good use when it comes time to evaluate which features are going to make it in and which aren't. Consider whether a proposed feature will have a positive impact on your mission statement before spending any real time on it. With that, we've now got two primordial tools for staying the course — the mission statement and the knowledge that time is of the essence.
Some things add up
How are we to treat these axiomatic rules — or restrictions — of software development? The two seem very prohibitive to making any sort of progress whatsoever. On the one hand, we've got time working against us — pushing forward, never slowing, always diminishing what resources we do have. On the other, we've got a bombardment of requests and other issues to figure out — all while making something that solves a particular problem.
We need to innovate around these constraints. That is, instead of trying to beat the competition by jumping ahead, pretending that time isn't of the essence, or building new features that don't support the mission statement, we should stay the course. Innovation means recognizing these constraints and beating the odds anyway.
Monday, June 15, 2009
Combining Multiprocessing And Threading
In Python, there are two ways to achieve concurrency within a given application: multiprocessing and threading. Concurrency, whether in a Python application or an application written in another language, often revolves around events taking place. These events can be expressed directly in code much more effectively when using an event framework. The basic need of the developer using such a framework is the ability to publish events; in turn, things happen in response to those events. What the developer most likely isn't concerned with is the concurrency semantics involved with these event handlers. The circuits Python event framework takes care of this for the developer. What is interesting is how the framework manages the concurrency method used: multiprocessing or threading.
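As a rough sketch of what publishing and handling an event looks like with circuits (the exact API details vary between versions of the framework, so treat this as illustrative rather than definitive):

```python
from circuits import Component, Event


class hello(Event):
    """A custom event published by the application."""


class App(Component):

    def hello(self):
        # Handler invoked when a "hello" event is fired.
        print("Hello World!")

    def started(self, component):
        # Fired by circuits once the application starts up; publish our
        # event here, then shut the application down.
        self.fire(hello())
        raise SystemExit(0)


App().run()
```

The developer only declares handlers and fires events; how those handlers end up running concurrently is left to the framework.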
With the multiprocessing approach, a new system process is created for each logical thread of control. This is beneficial on systems with more than one processor because the Python global interpreter lock isn't a concern, giving the application the potential to achieve true concurrency. With the threading approach, a new system thread, otherwise known as a lightweight process, is created for each logical thread of control. For applications using this approach, the Python global interpreter lock is a factor; on systems with more than one processor, true concurrency is not possible within the application itself. The good news is that both approaches can potentially be used inside a given application. An independent Python module exists for each method, and the abstractions inside these modules share nearly identical interfaces.
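To see how closely the two interfaces match, here is a small standalone example (not taken from circuits) that drives a process and a thread through the same constructor arguments and lifecycle methods:

```python
from multiprocessing import Process
from threading import Thread


def work(label):
    print("running in", label)


if __name__ == "__main__":
    # Both abstractions accept the same target/args arguments and expose
    # the same start()/join() lifecycle methods.
    p = Process(target=work, args=("a child process",))
    t = Thread(target=work, args=("a thread",))

    p.start()
    t.start()
    p.join()
    t.join()
```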
The circuits Python event framework uses an approach that will work with either the multiprocessing module or the threading module. The framework attempts to use the multiprocessing approach to concurrency in preference to the threading module. The approach to importing the required modules and defining the concurrency abstraction is illustrated below.
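Roughly, the idea is the following; this is a minimal sketch of that kind of conditional import, not the framework's actual source, and the flag name is just illustrative.

```python
# Prefer multiprocessing, fall back to threading. This is a sketch of the
# idea, not the actual circuits source; HAS_MULTIPROCESSING is illustrative.
try:
    from multiprocessing import Process
    HAS_MULTIPROCESSING = True
except ImportError:
    # Thread exposes a nearly identical interface, so it can stand in for
    # Process when the multiprocessing module isn't available.
    from threading import Thread as Process
    HAS_MULTIPROCESSING = False
```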

As you can see, the core Process abstraction within circuits is declared based on which modules exist on the system. If multiprocessing is available, it is used; otherwise, the threading module is used. The only downside to this approach is that as long as the multiprocessing module is available, threads cannot be used, and threads may be preferable to processes in certain situations.