The question then becomes: can the system itself make the best call as to what counts as relevant overhead, and which actors absorb the impact most? It sounds far-fetched, indeed, even like meta-overhead. It's as though we were taking premature optimization and routinizing it merely by entertaining such a question. But as impractical as embedding such a monitor and decision maker inside our software might be, at the conceptual level it is worth considering the dimensions of the overhead decisions we make. What would these look like if automated?
Think about brushing your teeth and letting the water run. Wasting water is a necessary overhead of brushing your teeth. Or perhaps we can save the water by turning the faucet off and on again, shifting the overhead to the wear and tear on the faucet and the time taken to perform the on/off action. Brushing your teeth requires some overhead, and a decision about who incurs it.
These are the types of questions that programmers think about at the molecular level of their code. We cannot help it, despite the fact that we cannot know ahead of time who will feel the impact of these overhead decisions made at code-writing time. Will we provide a seamless experience for the majority while a handful experience unacceptable latency? And what about other systems running alongside ours? Do we even take them into consideration, or is that a kernel problem? All we can say for certain is that there will come a time when, at the application level, some consideration of the overhead generated by our code will surface in the form of self-monitoring. It is a dynamic decision about the running system, handled either by the operating system or by the application itself in terms of how it requests resources.
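For concreteness, here is one possible sketch (in Python, and purely illustrative, not something from this discussion) of what such application-level self-monitoring could look like: a small buffer that measures the cost of its own flushes and uses that measurement to decide whether to pay the overhead eagerly or push latency onto whoever submitted the work, much like the decision to leave the faucet running or keep toggling it. The class name, latency budget, batch limit, and smoothing factor are all invented for illustration.

```python
import time


class AdaptiveFlusher:
    """Toy example: self-monitor the overhead of flushing and adapt."""

    def __init__(self, latency_budget_s=0.0005, max_batch=100):
        self.latency_budget_s = latency_budget_s  # latency we tolerate per flush
        self.max_batch = max_batch                # cap on how long we defer work
        self.buffer = []
        self.flush_cost_s = 0.0                   # measured overhead of a flush

    def submit(self, item):
        self.buffer.append(item)
        # If a flush is cheaper than the latency budget, pay the overhead now;
        # otherwise batch, shifting the cost onto later consumers of the data.
        if self.flush_cost_s <= self.latency_budget_s or len(self.buffer) >= self.max_batch:
            self._flush()

    def _flush(self):
        start = time.perf_counter()
        self._do_io(self.buffer)  # the actual work (stubbed out below)
        self.buffer = []
        # Exponentially smooth the measurement so one slow flush
        # does not dominate future decisions.
        cost = time.perf_counter() - start
        self.flush_cost_s = 0.8 * self.flush_cost_s + 0.2 * cost

    def _do_io(self, items):
        # Placeholder for real work, e.g. writing to disk or the network.
        time.sleep(0.001)


flusher = AdaptiveFlusher()
for i in range(500):
    flusher.submit(i)
```

Whether this kind of logic belongs inside the application at all, or should be left to the kernel's scheduler and I/O layers, is exactly the open question above.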