Blog Comments posted by DoubleX


  1. 10 minutes ago, Kayzee said:

    Honestly I always thought the way people talk about refactoring was a little weird in general, but honestly I have only ever programmed on my own and it's probably a lot different the more people are involved.

    In a solo project, when to refactor usually has an easy, simple and small answer - when you've been feeling significant pain from not refactoring for a while.

    Actually, it's like deciding when to clean your room when you're the only one using it - when you've been feeling quite uncomfortable in your messy room for a while.

    But in a team project, that will be very different, at least because:

    1. Different team members have different pain threshold

    2. Different team members have different pain tolerance

    3. Most importantly, different team members have different pain points

    So in this case, some kind of previously agreed-upon protocol on refactoring has to be established in the team, even though the protocol can be very vague and ambiguous.

    Also, on the management level, the reasons to refactor will be very different from those of the team members, because the former usually cares about the effectiveness and efficiency of the end results as a team, and rarely cares about the pains that impede the productivity of the latter in the process :P


  2. 10 hours ago, Kayzee said:

    I think it should be noted that you really are simplifying quite a bit by viewing the problem as only one of productivity via a harvester analogy. The problem is really a lot more complicated because adding features is not exactly like chopping trees. It's more like... juggling more things maybe? Maybe adding a new gear to a machine? Point is, every one you add makes the whole thing more and more complicated and messy. Eventually you're probably gonna have to stop and figure out a better way to juggle everything or a better way to arrange the gears, but that involves lots of complicated thinking and takes time away you could be juggling/using the machine.

    Yes, that's why I've written this:

    Quote

    Of course, the whole axe cutting tree model is highly simplified, at least because:

    1. The axe sharpness deterioration isn't a step-wise function(an axe goes from one discrete level of sharpness to another after cutting a set number of trees), but rather a continuous one(gradual degrading over time) with some variations on the number of trees cut, meaning that when to sharpen the axe in the real world isn't as clear cut as that in the aforementioned model(usually it's when the harvester starts feeling the pain, ineffectiveness and inefficiency of using the axe due to unsatisfactory sharpness, and these feelings have lasted for a while)
    2. Not all normal trees are equal, not all defective trees are equal, and not all compensatory trees are equal(these complications are intentionally simplified in this model because these complexities are hardly measurable)
    3. The whole model doesn't take the morale of the harvester into account, except the obvious one that the harvester will resign after using a fully dull axe for too long(but the importance of sharpening the axe only increases when morale has to be considered as well)
    4. In some cases, even when the axe's not fully dull, it's already impossible to sharpen it to be fully or even just somehow sharp(and in really extreme cases, the whole axe can just suddenly break altogether for no apparent reason)

    Nevertheless, this model should still serve its purpose of getting this point across - There isn't always a universal answer to when to sharpen the axe to reach which level of sharpness, because these questions involve calculations of concrete details(including those critical parts that can't be quantified) on a case-by-case basis, but the point remains that the importance of sharpening the axe should never be underestimated.

    And personally, I use another model when it comes to considering refactoring on the architectural design level - building mansions, but this analogy is so much more complicated and convoluted that probably only those with several years of professional software engineering experience will really fathom it.

    The number of storeys is like the scale of the codebase, and clearly, different codebase scales demand different architectural designs.

    It's because a single-storey building might not need a foundation at all, while the foundation of a 10-storey building and that of a 100-storey building can be vastly different.

    Also, usually, the taller the building, the stricter the safety requirements and contingency plans(like code quality requirements and exception handling standards in the codebase) that will apply to it, because the risk(probability and severity of consequences) of collapse will increase as the building gets taller if nothing else changes.

    As the codebase scales, it's like increasing the number of storeys of the building - eventually you'll have to stop and reinforce or even rebuild the entire foundation first before resuming, otherwise the building will eventually collapse.

    Also, it means that with the restrictions of current technology, any codebase will always have an absolute maximum limit on its scale and the number of features it can provide, because having a 10B LoC codebase is as unimaginable as having a 10km tall building in the foreseeable future, even though they might eventually become realities.

    So, even when the architectural designs are ideal for the current state of the codebase, one can't simply add more and more features without considering whether those architectural designs still work well with the increased codebase scale, and eventually some major refactoring involving architectural design changes has to be done.

    On the other hand, if each storey is modular enough(thanks to the ideal architectural design), then as long as the pillars of strength in that storey aren't damaged or destroyed(perhaps analogous to the interface and the underlying core implicit assumptions of a module), reworking a storey shouldn't affect the adjacent storeys much, let alone the other ones - even though there are things like water pipes, electrical wires and air vents that run across multiple storeys or even the whole building, which are like cross-cutting concerns in the codebase and can get in the way of refactoring.
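    That "rework a storey without moving the others" idea can be sketched in code. This is a minimal, purely hypothetical example(none of these names come from any real codebase): as long as a module's public interface stays intact, its internals can be rebuilt freely and its callers never notice.

```python
class InventoryStore:
    """Hypothetical module, v1. Callers depend only on add() and count()
    - those two methods are the 'pillars of strength'."""

    def __init__(self):
        # v1 internals: a plain list (simple, but count() scans in O(n)).
        self._items = []

    def add(self, item):
        self._items.append(item)

    def count(self, item):
        return self._items.count(item)


class InventoryStoreV2:
    """Same public interface, refactored internals:
    a dict of tallies, so count() is O(1)."""

    def __init__(self):
        self._counts = {}

    def add(self, item):
        self._counts[item] = self._counts.get(item, 0) + 1

    def count(self, item):
        return self._counts.get(item, 0)


def stock_report(store):
    # A caller written against the interface: it works with either
    # version unchanged, because the 'storey' was rebuilt internally
    # while nothing above or below it moved.
    store.add("potion")
    store.add("potion")
    return store.count("potion")
```

    The design point is that `stock_report` behaves identically on both versions; the refactor only becomes risky when it touches the interface or the implicit assumptions behind it(the cross-cutting pipes and wires).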

     

    However, I do think that my harvester analogy can serve the point by bringing at least the following across:

    1. Not considering the importance of refactoring can lead to long term disasters, and fatal ones in some cases

    2. Always refactoring whenever the codebase has less-than-ideal code quality is usually sub-optimal for the effectiveness and efficiency of pumping out features

    3. Deciding when to refactor should be calculated on a case-by-case basis, and all the relevant factors should be considered

    4. Sometimes one has to sacrifice the long term for a short time to ensure a short-term crisis will be solved well enough

    And perhaps, the most important point is that the productivity of adding new features in the codebase will rarely be linear across the development lifecycle :)
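    That non-linearity can be sketched with some made-up numbers in the spirit of the harvester model(the decay rate, the refactoring cost and the weekly velocity below are purely illustrative assumptions, not measurements of anything real):

```python
def features_delivered(weeks, refactor_at=None, refactor_weeks=2):
    """Toy harvester model: velocity decays 10% per week as the 'axe'
    dulls; a refactor pause ships nothing for refactor_weeks weeks
    but resets velocity to full sharpness."""
    velocity = 1.0   # features per week with a fully sharp axe
    total = 0.0
    week = 0
    while week < weeks:
        if refactor_at is not None and week == refactor_at:
            week += refactor_weeks   # sharpening the axe: zero output...
            velocity = 1.0           # ...but full velocity is restored
            refactor_at = None       # only pause once in this sketch
            continue
        total += velocity
        velocity *= 0.9              # code quality decays a bit every week
        week += 1
    return total

never_pause = features_delivered(30)
pause_once = features_delivered(30, refactor_at=10)
```

    With these particular(assumed) numbers, pausing once at week 10 delivers more features over 30 weeks than never pausing at all - but tweak the decay rate or the pause cost and the answer can flip, which is exactly the case-by-case point above.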
